<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jodamco</title>
    <description>The latest articles on DEV Community by Jodamco (@jodamco).</description>
    <link>https://dev.to/jodamco</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F658072%2Fc7a266db-3576-48b2-99af-d56a3c321fc5.jpg</url>
      <title>DEV Community: Jodamco</title>
      <link>https://dev.to/jodamco</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jodamco"/>
    <language>en</language>
    <item>
      <title>Flutter liveness: 300% performance enhance</title>
      <dc:creator>Jodamco</dc:creator>
      <pubDate>Wed, 26 Jun 2024 12:28:51 +0000</pubDate>
      <link>https://dev.to/jodamco/flutter-liveness-300-performance-enhance-3kkh</link>
      <guid>https://dev.to/jodamco/flutter-liveness-300-performance-enhance-3kkh</guid>
      <description>&lt;p&gt;&lt;strong&gt;Stepping forward!!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hello there again! I'm glad to be back sharing results and great improvements. If you're short on time and only want to know how to improve your app's performance, I recommend you &lt;a href="https://github.com/jodamco/gmlkit_liveness/blob/main/lib/presentation/widgets/custom_face_detector/worker.dart" rel="noopener noreferrer"&gt;peek at the code here&lt;/a&gt; and check out &lt;a href="https://dart.dev/language/isolates" rel="noopener noreferrer"&gt;this Dart doc&lt;/a&gt; about Isolates. If you want the full story instead, I invite you to sit comfortably and leave your comments along the way. &lt;/p&gt;

&lt;h2&gt;
  
  
  The UI has changed
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/jodamco/a-journey-to-flutter-liveness-pt1-4164"&gt;last article&lt;/a&gt; from this series, we ended up with a simple screen with just a Widget live streaming the camera and a custom painter to draw a square where the face was located. I decided to enhance it a bit and provide a few things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start and Stop buttons (or anything similar) so I could start and stop the process without hot-reloading the app. This was needed mainly because constant camera usage and processing consume a lot of battery, and I had to recharge my phone a few times during development 😅.&lt;/li&gt;
&lt;li&gt;Remove painters and provide something less visually invasive.&lt;/li&gt;
&lt;li&gt;Export statistics and data from the face detection layer to the parent so it can decide what to do with them, meaning which widgets to paint or display on top of the camera live stream.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhoipbij0sds2589ijk6u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhoipbij0sds2589ijk6u.jpg" alt="Sketch of the app screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I planned it a bit (the sketch above is only here as proof) and decided to use routes to control start and stop by navigating between the &lt;strong&gt;&lt;em&gt;live data stream page&lt;/em&gt;&lt;/strong&gt; and the &lt;strong&gt;&lt;em&gt;home page&lt;/em&gt;&lt;/strong&gt;. Also, as a substitute for the square around my face, I decided to use a cube oriented with the angles captured by the face detector.&lt;/p&gt;

&lt;p&gt;With some sort of design in hand, I started coding the changes and reached the result with few difficulties. The interaction between the face detector layer and its parent was done with callbacks. This kept things simpler than any other state-management approach, with the drawback of requiring the parent to provide a specific callback (hence, no real drawbacks hehehe). Once I had the data in a different layer, I just needed to share it with the other children. &lt;/p&gt;

&lt;p&gt;For the cube, I used some &lt;a href="https://stackoverflow.com/questions/74369892/designing-cube-in-flutter" rel="noopener noreferrer"&gt;code from Stack Overflow&lt;/a&gt; 🙃 (you can check the &lt;a href="https://github.com/jodamco/gmlkit_liveness/blob/main/lib/presentation/widgets/cube/cube.dart" rel="noopener noreferrer"&gt;final result here&lt;/a&gt;). I coded the graph using FlChart (which I found way too complicated for what it does), and the rest of the widgets were made with Text and spacing. The new UI ended up like this&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hvwkbiv4i9b2clplfl0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hvwkbiv4i9b2clplfl0.png" alt="New parts of the UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Time to understand the performance-related problems of the app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance: the problem
&lt;/h2&gt;

&lt;p&gt;Just as I finished the refactor of the UI and started to test the app, I noticed I had a "latency" of 700ms to run all the steps needed to generate the data from the live stream. Initially, I had a single Isolate spawned with &lt;code&gt;Isolate.run()&lt;/code&gt; running three main steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Parsing the ImageData from the camera stream into an InputImage, including the conversion from the &lt;code&gt;yuv_420_888&lt;/code&gt; format to &lt;code&gt;nv21&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Instantiating the &lt;code&gt;faceDetector&lt;/code&gt; object, since I had problems passing it down from the main Isolate to the spawned one.&lt;/li&gt;
&lt;li&gt;Running the detection on the generated data.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This worked, but with such a long latency I would miss too much head movement, and the lack of data points could compromise the decision about whether the detected face is a live person or just a photo being moved in front of the camera. Since &lt;a href="https://github.com/jodamco/gmlkit_liveness/commit/297b7054b5224a6f745dc99fb278c3839043fd62#diff-fd0b1f35b451fc5b3709009073f057f3bb1212b830c7d1ca43d96f581755802d" rel="noopener noreferrer"&gt;the first version&lt;/a&gt; had these issues, I had to work them out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance: first approach
&lt;/h2&gt;

&lt;p&gt;My first thought was divide and conquer: split the work across more Isolates. With this in mind I refactored the code, and the core change was turning &lt;code&gt;_processImage&lt;/code&gt; into this&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

  Future&amp;lt;void&amp;gt; _runAnalysisPipeline(CameraImageMetaData metaData) async {
    if (_isBusy) return;
    _isBusy = true;

    final analysisData = await _processImage(metaData);
    if (analysisData != null &amp;amp;&amp;amp; widget.onAnalysisData != null) {
      widget.onAnalysisData!(analysisData);
    }

    _isBusy = false;
  }

  Future&amp;lt;(InputImage, List&amp;lt;Face&amp;gt;)?&amp;gt; _processImage(
    CameraImageMetaData metaData,
  ) async {
    final inputImage = await parseMetaData(metaData);

    if (inputImage == null ||
        inputImage.metadata?.size == null ||
        inputImage.metadata?.rotation == null) return null;

    final faceList = await runFaceDetection(inputImage);
    if (faceList.isEmpty) return null;

    return (inputImage, faceList);
  }

  Future&amp;lt;InputImage?&amp;gt; parseMetaData(CameraImageMetaData metaData) async {
    RootIsolateToken rootIsolateToken = RootIsolateToken.instance!;
    return await Isolate.run&amp;lt;InputImage?&amp;gt;(() async {
      BackgroundIsolateBinaryMessenger.ensureInitialized(rootIsolateToken);

      final inputImage = metaData.inputImageFromCameraImageMetaData(metaData);
      return inputImage;
    });
  }

  Future&amp;lt;List&amp;lt;Face&amp;gt;&amp;gt; runFaceDetection(InputImage inputImage) async {
    RootIsolateToken rootIsolateToken = RootIsolateToken.instance!;
    return await Isolate.run&amp;lt;List&amp;lt;Face&amp;gt;&amp;gt;(() async {
      BackgroundIsolateBinaryMessenger.ensureInitialized(rootIsolateToken);

      final FaceDetector faceDetector = FaceDetector(
        options: FaceDetectorOptions(
          enableContours: true,
          enableLandmarks: true,
          enableTracking: true,
          enableClassification: true
        ),
      );

      final faces = await faceDetector.processImage(inputImage);
      await faceDetector.close();
      return faces;
    });
  }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This code splits the function and uses two Isolates to handle the work. I admit I didn't think this through, though. To run the detection I need the InputImage already parsed, so the second Isolate still has to wait for the first one, and I also didn't account for the overhead of spawning and killing each Isolate. I still think the refactor made the code easier to read, &lt;strong&gt;but it made the 'latency' go from ~700ms to ~1000ms&lt;/strong&gt;. It got worse! &lt;/p&gt;

&lt;p&gt;To decide what to do next, I took some measurements and discovered I was spending too much time instantiating heavy objects and spawning the Isolates, so I decided to get rid of that overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance: long-lived Isolates!
&lt;/h2&gt;

&lt;p&gt;Here things get interesting. During my first ever use of Isolates in this project, I went through the docs and chose the simplest approach: &lt;code&gt;Isolate.run&lt;/code&gt;. It spawns the Isolate in a way that you don't have to manage the communication between the main and the spawned Isolate. To solve my problem, though, I needed a long-lived Isolate (or worker Isolate). &lt;a href="https://dart.dev/language/isolates#robust-ports-example" rel="noopener noreferrer"&gt;This approach&lt;/a&gt; is more complex, but it lets me create the Isolate when my &lt;code&gt;faceDetection&lt;/code&gt; layer is mounted and kill it on &lt;code&gt;dispose&lt;/code&gt;, saving the spawn time between detections. It also lets me instantiate the &lt;code&gt;faceDetector&lt;/code&gt; object once within the Isolate, saving its instantiation time as well. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgp8zf75adzr3dy3m5nq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgp8zf75adzr3dy3m5nq.jpg" alt="New detection time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/jodamco/gmlkit_liveness/commit/e42f425f4d7275640a208b30eef00ebe5b32e481" rel="noopener noreferrer"&gt;These changes&lt;/a&gt; granted me an average processing time (in debug mode) of &lt;strong&gt;~340ms (an enhancement of almost 300%)&lt;/strong&gt;!!! The component responsible is the &lt;a href="https://github.com/jodamco/gmlkit_liveness/commit/e42f425f4d7275640a208b30eef00ebe5b32e481#diff-daa402c18faa4e8307306ab780d095b3e4c064a90c155e3c4e0ab2987319bb0b" rel="noopener noreferrer"&gt;face detector worker&lt;/a&gt;, which holds all the logic to spawn, maintain, and use the Isolate running the steps I described before. &lt;/p&gt;
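To make the worker's shape concrete, here is a minimal long-lived Isolate sketch in plain Dart. It is not the project's actual worker: `DetectionWorker` and its string job are hypothetical stand-ins for the real parsing and FaceDetector steps (which need the Flutter bindings), but the spawn-once-and-reuse pattern is the same.

```dart
import 'dart:isolate';

/// Minimal long-lived worker sketch. `DetectionWorker` and its string job
/// are hypothetical stand-ins for the real parse + FaceDetector steps,
/// which need the Flutter bindings.
class DetectionWorker {
  final SendPort _commands;
  DetectionWorker._(this._commands);

  /// Spawn once (e.g. when the layer mounts); later jobs reuse the same
  /// Isolate, paying only for message passing instead of a full spawn.
  static Future<DetectionWorker> spawn() async {
    final handshake = ReceivePort();
    await Isolate.spawn(_workerMain, handshake.sendPort);
    return DetectionWorker._(await handshake.first as SendPort);
  }

  /// Send one job and wait for its reply on a throwaway port.
  Future<Object?> process(Object? job) async {
    final reply = ReceivePort();
    _commands.send((job, reply.sendPort));
    return await reply.first;
  }

  /// Shut the worker down (e.g. on dispose).
  void close() => _commands.send(null);
}

void _workerMain(SendPort handshake) {
  final commands = ReceivePort();
  // Heavy objects (the real FaceDetector) would be created once here.
  handshake.send(commands.sendPort);
  commands.listen((message) {
    if (message == null) {
      commands.close();
      return;
    }
    final (job, replyTo) = message as (Object?, SendPort);
    // Stand-in for parsing the frame and running detection on it.
    replyTo.send('processed:$job');
  });
}
```

Usage then boils down to one `DetectionWorker.spawn()` on mount, `await worker.process(frame)` per frame, and `worker.close()` on dispose.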

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;Let's do a recap of the results we have so far:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enabled face detection and live streaming of its data&lt;/li&gt;
&lt;li&gt;Provided visual feedback of the face's Euler angles and facial expressions&lt;/li&gt;
&lt;li&gt;Measured the detection performance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Regarding the performance, we have&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initial average performance of 700ms&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;em&gt;Worsening of 42% with first refactor (from 700ms to 1000ms average)&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Enhancement of 300% with Isolate worker approach (340ms average)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Solving the detection-time bottleneck was essential since, in the end, we will be performing mathematical operations to decide whether the variation of the angles is real or not. For this to be more accurate, we need more points in time &lt;strong&gt;&lt;em&gt;during the tilt of the head&lt;/em&gt;&lt;/strong&gt; in any direction, and that's exactly where the detection time is crucial. &lt;/p&gt;

&lt;p&gt;So far, the achievements of this project are small, but the journey has been full of new concepts, techniques, and tools. Regarding the UI, the visual choices gave me the opportunity to work with 3D figures, object transformations, and graphs, which I hadn't had the chance to build with Flutter before. As for performance, spotting the need and evolving from one approach to the next was a fantastic chance to build up knowledge on using Isolates for processing-time problems. &lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;Now, I think we can start to build the liveness check. I have to re-read the '&lt;a href="https://towardsdatascience.com/implementing-liveness-detection-with-google-ml-kit-5e8c9f6dba45" rel="noopener noreferrer"&gt;base article&lt;/a&gt;' I'm using to back up this whole idea and find out which steps I need to determine whether a face is a live person or a photo. I'll also experiment with the variation of the data (if the article doesn't cover it) since I have the feeling that part of the criterion is related to the variation itself. &lt;/p&gt;

&lt;p&gt;Don't be shy, leave your comments if you have any. &lt;/p&gt;

</description>
      <category>flutter</category>
      <category>android</category>
      <category>machinelearning</category>
      <category>learning</category>
    </item>
    <item>
      <title>A journey to Flutter liveness (pt1)</title>
      <dc:creator>Jodamco</dc:creator>
      <pubDate>Tue, 18 Jun 2024 18:38:48 +0000</pubDate>
      <link>https://dev.to/jodamco/a-journey-to-flutter-liveness-pt1-4164</link>
      <guid>https://dev.to/jodamco/a-journey-to-flutter-liveness-pt1-4164</guid>
      <description>&lt;p&gt;Here we are again. This time I decided to write the posts as I go with the project, so it may or may not have an end, for sure it'll not have an order!&lt;/p&gt;

&lt;h2&gt;
  
  
  Google Machine Learning Kit
&lt;/h2&gt;

&lt;p&gt;I was trying to decide on a Flutter side project to exercise some organization patterns and concepts from the framework, and since AI is all the hype, I did some research and found out about the &lt;a href="https://developers.google.com/ml-kit"&gt;Google Machine Learning Kit&lt;/a&gt;, a set of machine learning tools for different tasks such as face detection, text recognition, and document digitization, among other features (you should really check the link above). They're kind of plug and play: you can just install the plugin dependency and use the capabilities, with no API integrations or third-party accounts required, so I decided to move on with it.&lt;/p&gt;

&lt;p&gt;For the project itself, I decided to go with liveness - oh boy, if I had done some more research before, maybe I would've selected something else - because I got curious about how current tools differentiate between photographs and real people. I have to be honest and say that I didn't do deep research on the matter, and I'll follow the path of reproducing the results I found in &lt;a href="https://towardsdatascience.com/implementing-liveness-detection-with-google-ml-kit-5e8c9f6dba45"&gt;this great article&lt;/a&gt;. In it, the author concludes that using the GMLKit for liveness is feasible, and my &lt;strong&gt;first goal is to reproduce the Euler Angles graphs&lt;/strong&gt;, but in a Flutter app. I'm not sure what a final casual use for liveness may be, but I'm sure I'll learn through the process, so let's start!&lt;/p&gt;
&lt;h2&gt;
  
  
  Flutter app
&lt;/h2&gt;

&lt;p&gt;The startup of a project is always a good moment. You know, follow the docs for the init, run a &lt;code&gt;flutter create my_app&lt;/code&gt;, or do it in VS Code through the command palette. I'll be using FVM to manage the Flutter version, and you can &lt;a href="https://github.com/jodamco/gmlkit_liveness"&gt;check out the full code here&lt;/a&gt;. &lt;/p&gt;
&lt;h3&gt;
  
  
  Camera Layer
&lt;/h3&gt;

&lt;p&gt;First things first, I needed the camera preview set up to get the image data (and to see something, at least). For that, I added &lt;code&gt;camera&lt;/code&gt; and &lt;code&gt;permission_handler&lt;/code&gt; as dependencies to get access to the camera widgets. I also tried to split my camera component so it would be agnostic of the machine learning layer and reusable in different contexts. Here's a small part of the camera widget&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class CustomCameraPreview extends StatefulWidget {
  final Function(ImageData inputImage)? onImage;
  final CustomPaint? customPaint;
  final VoidCallback? onCameraFeedReady;

  const CustomCameraPreview({
    super.key,
    this.onImage,
    this.onCameraFeedReady,
    this.customPaint,
  });

  @override
  State&amp;lt;CustomCameraPreview&amp;gt; createState() =&amp;gt; _CustomCameraPreviewState();
}

class _CustomCameraPreviewState extends State&amp;lt;CustomCameraPreview&amp;gt; {

//... more code

  Future&amp;lt;void&amp;gt; _startLiveFeed() async {
    if (selectedCamera == null) {
      setState(() {
        hasError = true;
      });
      return;
    }

    _controller = CameraController(
      selectedCamera!,
      ResolutionPreset.high,
      enableAudio: false,
      imageFormatGroup: Platform.isAndroid
          ? ImageFormatGroup.nv21
          : ImageFormatGroup.bgra8888,
    );

    await _controller?.initialize();
    _controller?.startImageStream(_onImage);

    if (widget.onCameraFeedReady != null) {
      widget.onCameraFeedReady!();
    }
  }

//... more code

  Widget display() {
    if (isLoading) {
      return PreviewPlaceholder.loadingPreview();
    } else if (hasError) {
      return PreviewPlaceholder.previewError(
        onRetry: _initialize,
      );
    } else if (!hasPermissions) {
      return PreviewPlaceholder.noPermission(
        onAskForPermissions: _initialize,
      );
    } else {
      return Stack(
        fit: StackFit.expand,
        children: &amp;lt;Widget&amp;gt;[
          Center(
            child: CameraPreview(
              _controller!,
              child: widget.customPaint,
            ),
          ),
        ],
      );
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: display(),
    );
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I think the most important part is the startup of the camera live feed. When creating the camera controller, you must set the image type through the &lt;code&gt;imageFormatGroup&lt;/code&gt; property, since this is required for the mlkit plugin to work. The formats in the code above are the recommended ones for each platform; you can check this in the &lt;a href="https://pub.dev/packages/google_mlkit_face_detection"&gt;docs of the face detection plugin&lt;/a&gt;. This widget was inspired by the &lt;a href="https://github.com/flutter-ml/google_ml_kit_flutter/blob/master/packages/example/lib/vision_detector_views/camera_view.dart#L89"&gt;example widget&lt;/a&gt; from the official example in the docs.&lt;/p&gt;

&lt;p&gt;One great thing I was able to test out was the usage of &lt;a href="https://en.wikipedia.org/wiki/Factory_(object-oriented_programming)"&gt;factories&lt;/a&gt; on widgets when I wrote the placeholder for the camera. There were other options (widget extensions and enums were suggested to me), but in the end I was satisfied with the factory and decided to let it be, since it simplified the way the parent calls the placeholder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
enum PreviewType { permission, loading, error }

class PreviewPlaceholder extends StatelessWidget {
  final PreviewType type;
  final VoidCallback? onAction;

  const PreviewPlaceholder._({
    required this.type,
    this.onAction,
  });

  factory PreviewPlaceholder.noPermission({
    required VoidCallback onAskForPermissions,
  }) =&amp;gt;
      PreviewPlaceholder._(
        type: PreviewType.permission,
        onAction: onAskForPermissions,
      );

  factory PreviewPlaceholder.loadingPreview() =&amp;gt; const PreviewPlaceholder._(
        type: PreviewType.loading,
      );

  factory PreviewPlaceholder.previewError({required VoidCallback onRetry}) =&amp;gt;
      PreviewPlaceholder._(
        type: PreviewType.error,
        onAction: onRetry,
      );

  @override
  Widget build(BuildContext context) {
    return Column(
      mainAxisAlignment: MainAxisAlignment.center,
      children: [
        if (type == PreviewType.permission)
          ElevatedButton(
            onPressed: onAction,
            child: const Text("Ask for camera permissions"),
          ),
        if (type == PreviewType.error) ...[
          const Text("Couldn't load camera preview"),
          ElevatedButton(
            onPressed: onAction,
            child: const Text("Retry"),
          ),
        ],
        if (type == PreviewType.loading) ...const [
          Text("Loading preview"),
          Center(
            child: LinearProgressIndicator(),
          )
        ],
      ],
    );
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the camera layer done, let's dive into face detection.&lt;/p&gt;

&lt;h3&gt;
  
  
  Face detection
&lt;/h3&gt;

&lt;p&gt;For the face detection, so far, I just needed to add two more dependencies: &lt;code&gt;google_mlkit_commons&lt;/code&gt; and &lt;code&gt;google_mlkit_face_detection&lt;/code&gt;. The GMLKit docs recommend using the specific plugin dependency for release builds instead of the umbrella Flutter GMLKit dependency.&lt;/p&gt;

&lt;p&gt;If you &lt;del&gt;copy&lt;/del&gt; write your &lt;a href="https://github.com/flutter-ml/google_ml_kit_flutter/blob/master/packages/example/lib/vision_detector_views/face_detector_view.dart"&gt;first ever approach&lt;/a&gt; to face detection, it can be very straightforward to get data and see results, except for one problem: &lt;strong&gt;if you're using Android and the android-camerax plugin, you will not be able to use the camera image with face detection&lt;/strong&gt;. This is because, although you must have set &lt;code&gt;ImageFormatGroup.nv21&lt;/code&gt; as the output format, the &lt;a href="https://pub.dev/packages/camera_android_camerax/versions/0.6.5+5"&gt;current version of the flutter android-camerax&lt;/a&gt; plugin will only provide images in the &lt;code&gt;yuv_420_888&lt;/code&gt; format (you can find more info &lt;a href="https://github.com/flutter/flutter/issues/145961"&gt;here&lt;/a&gt;). The good part is that someone &lt;a href="https://blog.minhazav.dev/how-to-use-renderscript-to-convert-YUV_420_888-yuv-image-to-bitmap/#tonv21image-image-java-approach"&gt;provided a solution&lt;/a&gt; (the community always rocks 🚀).&lt;/p&gt;
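The fix boils down to repacking the three YUV_420_888 planes into one NV21 buffer: the full Y plane first, then interleaved V/U samples. Here is a hedged sketch of that repacking in plain Dart; `PlaneData` is a hypothetical stand-in for the camera plugin's Plane class, and the real extension in the repo may handle more edge cases.

```dart
import 'dart:typed_data';

/// Hypothetical stand-in for the camera plugin's Plane class.
class PlaneData {
  final Uint8List bytes;
  final int bytesPerRow;
  final int? bytesPerPixel;
  PlaneData(this.bytes, this.bytesPerRow, this.bytesPerPixel);
}

/// Repacks YUV_420_888 planes into a single NV21 buffer: the
/// full-resolution Y plane first, then interleaved chroma samples at
/// quarter resolution (NV21 stores V before U).
Uint8List yuv420ToNv21(
    int width, int height, PlaneData y, PlaneData u, PlaneData v) {
  final out = Uint8List(width * height + 2 * (width ~/ 2) * (height ~/ 2));
  var offset = 0;

  // Copy the luma plane row by row, skipping any row padding.
  for (var row = 0; row < height; row++) {
    out.setRange(offset, offset + width, y.bytes, row * y.bytesPerRow);
    offset += width;
  }

  // Interleave V and U samples; assumes both chroma planes share the
  // same strides, which holds for Android's YUV_420_888 layout.
  final uvPixelStride = v.bytesPerPixel ?? 1;
  for (var row = 0; row < height ~/ 2; row++) {
    for (var col = 0; col < width ~/ 2; col++) {
      final uvIndex = row * v.bytesPerRow + col * uvPixelStride;
      out[offset++] = v.bytes[uvIndex];
      out[offset++] = u.bytes[uvIndex];
    }
  }
  return out;
}
```

The resulting buffer is what the face detection plugin expects when the metadata declares the `nv21` format.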

&lt;p&gt;I made the detection widget my main "layer" for detection, since it does the heavy job of running the face detection from the GMLKit plugin. It ended up being a very small widget with one core function for face detection&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Future&amp;lt;void&amp;gt; _processImage(ImageData imageData) async {
    if (_isBusy) return;
    _isBusy = true;

    RootIsolateToken rootIsolateToken = RootIsolateToken.instance!;
    final analyticData = await Isolate.run&amp;lt;Map&amp;lt;String, dynamic&amp;gt;&amp;gt;(() async {
      BackgroundIsolateBinaryMessenger.ensureInitialized(rootIsolateToken);

      final inputImage = imageData.inputImageFromCameraImage(imageData);
      if (inputImage == null) return {"faces": null, "image": inputImage};

      final FaceDetector faceDetector = FaceDetector(
        options: FaceDetectorOptions(
          enableContours: true,
          enableLandmarks: true,
        ),
      );

      final faces = await faceDetector.processImage(inputImage);
      await faceDetector.close();

      return {"faces": faces, "image": inputImage};
    });

    _isBusy = false;
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few comments on this function:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It is VERY SIMPLE to get the data from the GMLKit and it can be done without the Isolate. &lt;/li&gt;
&lt;li&gt;Although the Isolate is not needed, you might want to use it, since &lt;a href="https://docs.flutter.dev/perf/best-practices#build-and-display-frames-in-16ms"&gt;Flutter code should build in 16ms&lt;/a&gt;. I was eager to try out Isolates and never had a really good reason before, but without one, processing the image would drop the framerate and the app would look terrible. By applying the Isolate I can move all the processing and conversion off the main event loop and guarantee that the frames will be built on time.&lt;/li&gt;
&lt;li&gt;I decided to instantiate the face detector inside the Isolate, since I had trouble passing it from the main Isolate to the new one. I also run this specific conversion, &lt;code&gt;imageData.inputImageFromCameraImage(imageData)&lt;/code&gt;, inside the Isolate, since it is also time-consuming. This is what parses the &lt;code&gt;yuv_420_888&lt;/code&gt; format into the one needed by the GMLKit plugin. For this job, I decided the best approach was a class that receives all the data from the camera and smoothly provides the &lt;code&gt;InputImage&lt;/code&gt; object to the GMLKit. You can check out the &lt;a href="https://github.com/jodamco/gmlkit_liveness/blob/main/lib/data/models/image_data.dart"&gt;class here&lt;/a&gt; and the extension for the &lt;a href="https://github.com/jodamco/gmlkit_liveness/blob/main/lib/data/models/camera_image.dart"&gt;conversion here&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
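The offloading in item 2 can be distilled into plain Dart: `Isolate.run` takes a closure, runs it on a fresh Isolate, and completes with its result, leaving the main event loop free to keep building frames. The heavy parse-and-detect work is replaced here with a stand-in computation.

```dart
import 'dart:isolate';

// Stand-in for the expensive per-frame parse + detect work.
int heavyWork(List<int> pixels) => pixels.fold(0, (sum, p) => sum + p);

void main() async {
  final pixels = List<int>.generate(1000, (i) => i % 256);

  // Runs heavyWork on a short-lived Isolate; while the Future is
  // pending, the main event loop stays free to build frames on time.
  final result = await Isolate.run(() => heavyWork(pixels));
  print(result); // 124716
}
```

The closure's captured values (here, `pixels`) are copied to the new Isolate, which is why a non-sendable object like the FaceDetector has to be created inside the closure instead.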

&lt;h3&gt;
  
  
  Results
&lt;/h3&gt;

&lt;p&gt;So far I still don't have the Euler Angles in a graph as I wanted, but I was at least able to get the data from the kit and paint the bounding box of my face. I also ran some tests on the execution time of the face detection and saw that the &lt;strong&gt;average time to execute the detection&lt;/strong&gt; on a high-quality image is &lt;strong&gt;about 600ms with a debug build&lt;/strong&gt; and &lt;strong&gt;about 380ms with a release build&lt;/strong&gt;. Since the work runs in an Isolate, the framerate of the app is okay, but I would like to improve this performance later. &lt;/p&gt;

&lt;p&gt;My next step will be to get the Euler Angles and paint a graph with them so I can try to reproduce the comparison between photos and real people. &lt;/p&gt;

&lt;p&gt;See you there!&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>flutter</category>
      <category>android</category>
    </item>
    <item>
      <title>Controlling user auth flow with Lambda &amp; Cognito (pt2)</title>
      <dc:creator>Jodamco</dc:creator>
      <pubDate>Mon, 03 Jun 2024 23:06:11 +0000</pubDate>
      <link>https://dev.to/jodamco/controlling-user-auth-flow-with-lambda-cognito-pt2-plc</link>
      <guid>https://dev.to/jodamco/controlling-user-auth-flow-with-lambda-cognito-pt2-plc</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/jodamco/controlling-user-auth-flow-with-lambda-cognito-28k9"&gt;Last post&lt;/a&gt;, we wrote the code for our preAuth trigger that would handle the count of attempts to login. The idea now is to reset the counter after the login is successful since we want all the atempts to be available when the user come to login for another session. &lt;/p&gt;

&lt;p&gt;The code for the postAuth trigger is way simpler than the one for the preAuth. Let's dive into it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports.postAuthTrigger = async (event) =&amp;gt; {
    await this.clearLoginAttempts(event)
    return event
}

exports.clearLoginAttempts = async (event) =&amp;gt; {
    const updateParams = {
        UserAttributes: [
            {
                Name: 'custom:login_attempts',
                Value: '0',
            },
        ],
        UserPoolId: event.userPoolId,
        Username: event.userName,
    }

    await cognitoService.adminUpdateUserAttributes(updateParams).promise()
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To reset the login attempts, we just need to update the same counter we used in the preAuth trigger, which is stored in the 'custom:login_attempts' attribute. We can do it by calling the &lt;em&gt;adminUpdateUserAttributes&lt;/em&gt; function from the Cognito API. &lt;/p&gt;

&lt;p&gt;One other important thing to mention is that we need to return the 'event' object the lambda receives, since Cognito expects it back to continue with the auth flow. After creating the lambdas, we need to set up Cognito accordingly, with the required attribute (login_attempts) and the trigger configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3zhbcuw2kfcswi8nnhl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3zhbcuw2kfcswi8nnhl.png" alt="Cognito console on AWS" width="800" height="554"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, you may need to create your user pool. To do so, sign in to the AWS Management Console and navigate to Amazon Cognito. Click "Manage User Pools" and then "Create a user pool". Along the user pool creation wizard you will notice an option to create custom attributes; here you create your 'login_attempts' attribute. Note that even though we set the name as 'login_attempts', Cognito will ask you to access it as 'custom:login_attempts'. That's the default for custom attributes on Cognito.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibplzeewma3iiltw6ba7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibplzeewma3iiltw6ba7.png" alt="Custom attribute on cognito wizard" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To finish the creation of the user pool, follow the remaining steps, review your configuration, and click "Create pool."&lt;/p&gt;

&lt;p&gt;Now you just need to attach the created Lambdas to your user pool. Open it and look for 'User pool properties'; there you'll find the Lambda trigger setup. Click 'Add Lambda trigger'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qwq50f5ub953uhth02y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qwq50f5ub953uhth02y.png" alt="Lambda trigger setup" width="800" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will notice a few different trigger types. We will use Authentication triggers. Select the Authentication option, choose the pre authentication trigger, and attach the preAuthTrigger Lambda you created. Then repeat the process to attach the postAuthTrigger Lambda to the post authentication trigger.&lt;/p&gt;

&lt;p&gt;And it's done! You now have a Cognito user pool set up to block users after &lt;em&gt;n&lt;/em&gt; unsuccessful login attempts! To test it out, you can integrate Cognito with a web or mobile app using the Amplify SDK. You can also use these same triggers to add other features, such as saving the user's last login or triggering other services after login.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cognito</category>
      <category>lambda</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Controlling user auth flow with Lambda &amp; Cognito</title>
      <dc:creator>Jodamco</dc:creator>
      <pubDate>Sat, 25 May 2024 20:05:47 +0000</pubDate>
      <link>https://dev.to/jodamco/controlling-user-auth-flow-with-lambda-cognito-28k9</link>
      <guid>https://dev.to/jodamco/controlling-user-auth-flow-with-lambda-cognito-28k9</guid>
      <description>&lt;p&gt;&lt;em&gt;Disclaimer: the hero image of this post was the result of the following prompt &lt;code&gt;AWS lambda and AWS cognito logos into a Renaissance paint. Use full logos and a less known painting&lt;/code&gt;. I think I still have much to learn into AI image prompts 😅😅&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Authentication is a common topic across many kinds of systems. There are different ways to handle it, and my preferred ones make use of managed services. I found AWS Cognito a really great solution for authentication, especially if you later connect the authenticated app with other hosted services. Cognito provides built-in ways to manage and cross-validate users against services, and recently I've been using its triggers to build even more complex auth features.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cognito triggers
&lt;/h3&gt;

&lt;p&gt;Cognito user pools have a feature named 'Lambda triggers' which lets you use previously created Lambdas to perform custom actions during four types of flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign up&lt;/li&gt;
&lt;li&gt;Authentication&lt;/li&gt;
&lt;li&gt;Custom authentication (such as CAPTCHA or security questions)&lt;/li&gt;
&lt;li&gt;Messaging&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each of these flows has different triggers that execute Lambda code between specific steps of the flow. Sign up, for instance, has a &lt;code&gt;Pre sign-up trigger&lt;/code&gt;, a &lt;code&gt;Post confirmation trigger&lt;/code&gt; and a &lt;code&gt;Migrate user trigger&lt;/code&gt; that can each be attached to a Lambda function. &lt;/p&gt;

&lt;p&gt;To test the capabilities of Lambda triggers, we will build a system that blocks login after 5 consecutive failed attempts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Coding the lambdas
&lt;/h3&gt;

&lt;p&gt;We're going to need two Lambdas to control the flow. One of them takes care of updating the user data so we can count how many times the user tried to log in; it will also block the user if the number of attempts exceeds the maximum. The second one resets the counter, so in the future the user will again have the maximum number of attempts available.&lt;/p&gt;

&lt;p&gt;The first Lambda trigger looks like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports.preAuthTrigger = async (event) =&amp;gt; {
    if (!(await this.isUserEnabled(event))) throw new Error('Usuário Bloqueado')

    const attempts = await this.getLoginAttempts(event)
    if (attempts &amp;gt; 4) {
        await this.disableUser(event)
        throw new Error('Usuário Bloqueado')
    }

    await this.updateLoginAttempts(event, attempts)
    return event
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our first step is to check whether the user is already blocked by the number of attempts. We can do it with a separate function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exports.isUserEnabled = async (event) =&amp;gt; {
    const getParams = {
        UserPoolId: event.userPoolId,
        Username: event.userName,
    }
    const userData = await cognitoService.adminGetUser(getParams).promise()
    return userData.Enabled
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this we access the user's properties in the Cognito user pool and check the &lt;code&gt;Enabled&lt;/code&gt; property, which dictates whether the user can use their &lt;code&gt;username&lt;/code&gt; and &lt;code&gt;password&lt;/code&gt; to log in. &lt;strong&gt;A disabled user can't log into a Cognito pool&lt;/strong&gt;, and that's exactly what we want here.&lt;/p&gt;

&lt;p&gt;For the second step, we need to check whether the number of attempts is greater than the maximum permitted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exports.getLoginAttempts = async (event) =&amp;gt; {
    const getParams = {
        UserPoolId: event.userPoolId,
        Username: event.userName,
    }
    const userData = await cognitoService.adminGetUser(getParams).promise()
    const attribute = userData.UserAttributes.find(
        (att) =&amp;gt; att.Name === 'custom:login_attempts'
    )
    if (attribute !== undefined &amp;amp;&amp;amp; attribute !== null)
        return parseInt(attribute.Value)
    else return 0
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is a very similar process to the previous function, but now we're looking for a custom attribute named &lt;code&gt;custom:login_attempts&lt;/code&gt; that we will create in our user pool in the next steps. If the user has more than 5 attempts (we start counting at 0), then we should block them. Piece of cake:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exports.disableUser = async (event) =&amp;gt; {
    await cognitoService
        .adminDisableUser({
            UserPoolId: event.userPoolId,
            Username: event.userName,
        })
        .promise()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also throw an Error to stop executing the Lambda, since this makes the login process fail, as we want. Now that we are able to block the user, we just need to update the number of attempts if the user isn't blocked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exports.updateLoginAttempts = async (event, attempts) =&amp;gt; {
    const updateParams = {
        UserAttributes: [
            {
                Name: 'custom:login_attempts',
                Value: (attempts + 1).toString(),
            },
        ],
        UserPoolId: event.userPoolId,
        Username: event.userName,
    }

    await cognitoService.adminUpdateUserAttributes(updateParams).promise()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This last function completes the first Lambda trigger: we can now perform all the actions from our main handler. The final code, with all functions, looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports.preAuthTrigger = async (event) =&amp;gt; {
    if (!(await this.isUserEnabled(event))) throw new Error('Usuário Bloqueado')

    const attempts = await this.getLoginAttempts(event)
    if (attempts &amp;gt; 4) {
        await this.disableUser(event)
        throw new Error('Usuário Bloqueado')
    }

    await this.updateLoginAttempts(event, attempts)
    return event
}

exports.isUserEnabled = async (event) =&amp;gt; {
    const getParams = {
        UserPoolId: event.userPoolId,
        Username: event.userName,
    }
    const userData = await cognitoService.adminGetUser(getParams).promise()
    return userData.Enabled
}

exports.getLoginAttempts = async (event) =&amp;gt; {
    const getParams = {
        UserPoolId: event.userPoolId,
        Username: event.userName,
    }
    const userData = await cognitoService.adminGetUser(getParams).promise()
    const attribute = userData.UserAttributes.find(
        (att) =&amp;gt; att.Name === 'custom:login_attempts'
    )
    if (attribute !== undefined &amp;amp;&amp;amp; attribute !== null)
        return parseInt(attribute.Value)
    else return 0
}


exports.disableUser = async (event) =&amp;gt; {
    await cognitoService
        .adminDisableUser({
            UserPoolId: event.userPoolId,
            Username: event.userName,
        })
        .promise()
}


exports.updateLoginAttempts = async (event, attempts) =&amp;gt; {
    const updateParams = {
        UserAttributes: [
            {
                Name: 'custom:login_attempts',
                Value: (attempts + 1).toString(),
            },
        ],
        UserPoolId: event.userPoolId,
        Username: event.userName,
    }

    await cognitoService.adminUpdateUserAttributes(updateParams).promise()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
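

&lt;p&gt;Before wiring the triggers to Cognito, the counting logic can be sanity-checked locally. This is a hypothetical harness, not the deployed handler: the aws-sdk calls are swapped for an in-memory user record just to show the lockout flow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// In-memory stand-in for the Cognito user record
const user = { enabled: true, attempts: 0 }

const preAuth = async (event) =&amp;gt; {
    if (!user.enabled) throw new Error('User blocked')
    if (user.attempts &amp;gt; 4) {
        user.enabled = false
        throw new Error('User blocked')
    }
    user.attempts += 1
    return event
}

const demo = async () =&amp;gt; {
    for (let i = 1; i &amp;lt;= 7; i++) {
        try {
            await preAuth({ userName: 'jane' })
            console.log(`attempt ${i}: allowed (stored count: ${user.attempts})`)
        } catch (error) {
            console.log(`attempt ${i}: ${error.message}`)
        }
    }
}

demo()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Attempts 1 through 5 go through while the counter grows; attempt 6 trips the &lt;code&gt;attempts &amp;gt; 4&lt;/code&gt; check and disables the user, and attempt 7 is rejected right away, mirroring what the deployed trigger does.&lt;/p&gt;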



&lt;p&gt;In my next post we will write the code for the PostAuth Lambda trigger and see how we can set up Cognito to use both Lambdas!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cognito</category>
      <category>lambda</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Angular code obfuscation made easy</title>
      <dc:creator>Jodamco</dc:creator>
      <pubDate>Fri, 24 May 2024 16:56:17 +0000</pubDate>
      <link>https://dev.to/jodamco/angular-code-obfuscation-made-easy-4gjm</link>
      <guid>https://dev.to/jodamco/angular-code-obfuscation-made-easy-4gjm</guid>
      <description>&lt;p&gt;If you ever had to code a real-life project, the concern for security was there—or at least should have been. As technologies advance, we can code amazing, robust, high-performance systems within short time schedules, but that also means that malicious people and techniques become more powerful and tricky to overcome. That's why nowadays securing all common breaches is a must when developing systems.&lt;/p&gt;

&lt;p&gt;Angular handles a lot of security out of the box: it has its own variable protection system and sanitization to prevent malicious code from running in your app. Another feature is code minification.&lt;/p&gt;

&lt;h3&gt;
  
  
  Minification vs. Obfuscation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Code minification&lt;/strong&gt; is a technique that reduces the size of source code by removing unnecessary characters like whitespace and comments, improving the load performance of the source code. This process is common in web development for JavaScript, CSS, and HTML files and, somehow, adds a layer of security by obfuscating the code. Minified code is extremely hard to read, and that's why it is considered some sort of obfuscation. However, tools can de-minify code, making it readable and then reverse-engineerable. This is where obfuscation is useful.&lt;/p&gt;

&lt;p&gt;Complementary to &lt;strong&gt;minification&lt;/strong&gt;, &lt;strong&gt;code obfuscation&lt;/strong&gt; is a technique used to make source code difficult to understand and reverse-engineer. This is often used to protect intellectual property, prevent tampering, and deter reverse engineering by making it challenging for attackers to understand the code's logic and identify potential vulnerabilities. It transforms readable code into a more complex and obscure version without altering its functionality. Code obfuscation tools can also add dead code to mislead attackers and make it even more difficult to understand the software codebase.&lt;/p&gt;
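
&lt;p&gt;To make the difference concrete, here is a hand-written illustration (not actual tool output) of the same function in readable, minified, and obfuscated form. All three behave identically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Readable original
function totalPrice(items, taxRate) {
    const subtotal = items.reduce((sum, item) =&amp;gt; sum + item.price, 0)
    return subtotal * (1 + taxRate)
}

// Minified: whitespace and names shrunk, but the logic is still visible
function t(i,r){return i.reduce((s,x)=&amp;gt;s+x.price,0)*(1+r)}

// Obfuscated: string literals hoisted into a lookup table, hex-style names
const _0x1a = ['reduce', 'price']
function _0x2b(_0x3c, _0x4d) {
    return _0x3c[_0x1a[0]]((_0x5e, _0x6f) =&amp;gt; _0x5e + _0x6f[_0x1a[1]], 0) * (1 + _0x4d)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A de-minifier can restore the second version to something readable, but the third hides even the method names behind the lookup table; real obfuscators go much further than this sketch.&lt;/p&gt;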

&lt;p&gt;Well, if you have use for it, let's obfuscate our Angular app.&lt;/p&gt;

&lt;h3&gt;
  
  
  Webpack Obfuscator
&lt;/h3&gt;

&lt;p&gt;Angular uses Webpack during its bundle phase and has its own default setup to pack the modules you develop. We are going to take advantage of this and customize the way Webpack will bundle your Angular app. First, install these packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i javascript-obfuscator webpack-obfuscator --save-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;javascript-obfuscator&lt;/code&gt; is&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;a powerful free obfuscator for JavaScript, containing a variety of features which provide protection for your source code.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;while &lt;code&gt;webpack-obfuscator&lt;/code&gt; makes use of it as a plugin to provide functionality for Webpack. You can find the JavaScript obfuscator code &lt;a href="https://www.npmjs.com/package/javascript-obfuscator"&gt;here&lt;/a&gt; and Webpack obfuscator plugin &lt;a href="https://www.npmjs.com/package/webpack-obfuscator"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After that, create a &lt;code&gt;custom-webpack.config.js&lt;/code&gt; file that will contain the custom configurations we want to apply during our bundle process. Here's a simple one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var JavaScriptObfuscator = require("webpack-obfuscator");

module.exports = {
  module: {},
  plugins: [
    new JavaScriptObfuscator(
      {
        debugProtection: true,
      },
      ["vendor.js"]
    ),
  ],
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are many different config options you can provide for the &lt;code&gt;webpack-obfuscator&lt;/code&gt; plugin to fine-tune the output of the obfuscation. This is the simplest one that adds &lt;code&gt;debugProtection&lt;/code&gt; to the code, making it difficult to use the console to track down variables and functions of the app.&lt;/p&gt;
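
&lt;p&gt;If you want to go further, here is a slightly fuller config sketch (option names taken from the javascript-obfuscator docs; treat the values as a starting point, since heavier options also cost runtime performance):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var JavaScriptObfuscator = require("webpack-obfuscator");

module.exports = {
  module: {},
  plugins: [
    new JavaScriptObfuscator(
      {
        compact: true,              // strip whitespace from the output
        debugProtection: true,      // hinder usage of the DevTools console
        stringArray: true,          // hoist string literals into a lookup table
        stringArrayThreshold: 0.75, // obfuscate 75% of the string literals
        selfDefending: true,        // output resists beautification
      },
      ["vendor.js"]                 // excluded files, as before
    ),
  ],
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;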

&lt;p&gt;So far, we've set up our Webpack config. Now we need to use it. We will need one more dependency:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i @angular-builders/custom-webpack --save-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will help us integrate the custom Webpack builder with Angular so we can still use the Angular build structure. After installing the package, we only need to change the &lt;code&gt;angular.json&lt;/code&gt; file. Search for the &lt;code&gt;build&lt;/code&gt; property and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;... 
"builder": "@angular-builders/custom-webpack:browser",
"customWebpackConfig": {
    "path": "./custom-webpack.config.js",
    "replaceDuplicatePlugins": true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By replacing the &lt;code&gt;builder&lt;/code&gt; from &lt;code&gt;@angular-devkit/build-angular:browser&lt;/code&gt; to &lt;code&gt;@angular-builders/custom-webpack:browser&lt;/code&gt;, we will still be able to build for the browser but now can inject our custom Webpack configurations. The &lt;code&gt;customWebpackConfig&lt;/code&gt; property sets the reference for the file so Angular can use it.&lt;/p&gt;

&lt;p&gt;If everything is properly set, your build command should run normally and &lt;strong&gt;the result will be an obfuscated Angular app!&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Drawbacks
&lt;/h3&gt;

&lt;p&gt;Be aware, though, that this approach has a drawback on the bundle size. Code obfuscation makes it much more difficult to reverse-engineer the code, but the way it declares the variables uses more characters, leading to an increase in the size of the bundle—almost going in the opposite direction of code minification.&lt;/p&gt;

&lt;p&gt;That's it. Be sure to use it with purpose and understand how to tackle the drawbacks of the technique!&lt;/p&gt;

</description>
      <category>angular</category>
      <category>security</category>
      <category>webpack</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Coding the design: Angular &amp; user decision making flow</title>
      <dc:creator>Jodamco</dc:creator>
      <pubDate>Mon, 20 May 2024 20:23:55 +0000</pubDate>
      <link>https://dev.to/jodamco/coding-the-design-angular-user-decision-making-flow-2l1p</link>
      <guid>https://dev.to/jodamco/coding-the-design-angular-user-decision-making-flow-2l1p</guid>
      <description>&lt;p&gt;I really like design, both visual and usefull. I am a human being driven by beauty and I think that combining it with our daily lives and tasks is what differentiates greatness and uniqueness from pure raw purpose. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;There's also beauty in seamless&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The effort to make something so useful and so integrated into one's life that it goes unnoticed, even when life can't be imagined without it. To achieve this outcome, countless designers spend their time studying both product and user to simplify decision-making processes and bring balance to complex yet simple user flows.&lt;/p&gt;

&lt;p&gt;That said, if you are not a designer this is mostly not yours to worry about, but we often face situations where we are the ones deciding the user flow in front-end applications and taking care of the decision-making process. That's not easy, but there's theory behind it, and that's what I want to tell you about.&lt;/p&gt;

&lt;h2&gt;
  
  
  Seven Stages of Action
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Don_Norman" rel="noopener noreferrer"&gt;Donald Norman&lt;/a&gt; showed us in the past decades that the ones decision making process has 7 steps he called 'Seven stages of action'. Those are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Forming the goal (What do I want?)&lt;/li&gt;
&lt;li&gt;Forming the intention (How can I get it?)&lt;/li&gt;
&lt;li&gt;Specifying an action (If I do 'x' then I can get it)&lt;/li&gt;
&lt;li&gt;Executing the action (Do 'x')&lt;/li&gt;
&lt;li&gt;Perceiving the state of the world (What happened after my actions?)&lt;/li&gt;
&lt;li&gt;Interpreting the state of the world (What's the meaning of what happened?)&lt;/li&gt;
&lt;li&gt;Evaluating the outcome (Is it what I wanted?)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These steps also come with the cycle of action:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxi12x34jb3xcplmuseb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxi12x34jb3xcplmuseb.png" alt="Cycle of action by David Norman"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Making it simple: whenever users want to do something, they'll follow the same cycle of&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;evaluate the world&lt;/li&gt;
&lt;li&gt;decide what is wanted and &lt;/li&gt;
&lt;li&gt;act on it to achieve what is wanted&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The best applications we use on a daily basis have mastered techniques to diminish the cognitive load of identifying the current state and deciding what to do to achieve an outcome. This is a job in itself and requires experience and research, but I came up with a simple rule of thumb to achieve great results even in small systems&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Let your user know what is going on&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Quite dumb, right? Let me demonstrate what I mean, so let's write a few lines of code&lt;/p&gt;

&lt;h2&gt;
  
  
  Coding user interaction
&lt;/h2&gt;

&lt;p&gt;Regardless of the technology, we all run some sort of state management to control user interaction, display of data and feature workflow. State is about what's going on now: the present of the application, the data it has and what's being displayed on the screen. If we want to &lt;strong&gt;let the user know what is going on&lt;/strong&gt; then we need to be clear about the component state. Usually, a simple component with data will have between 3 and 5 states:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;idle or initial&lt;/li&gt;
&lt;li&gt;empty&lt;/li&gt;
&lt;li&gt;loading&lt;/li&gt;
&lt;li&gt;error &lt;/li&gt;
&lt;li&gt;data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's code each one of them. Consider a simple Angular component&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Component({
    selector: 'app-dummy-component',
    standalone: true,
    template: `&amp;lt;ul&amp;gt;
        &amp;lt;li *ngFor="let item of list"&amp;gt;
            {{item.name}}
        &amp;lt;/li&amp;gt;
    &amp;lt;/ul&amp;gt;

    &amp;lt;button (click)="loadList()"&amp;gt;
        Load Items
    &amp;lt;/button&amp;gt;`,
})
export class DummyComponent {
    public list: any[] = []
    constructor() {}
    public loadList(){ ... }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This would be the &lt;strong&gt;idle/initial&lt;/strong&gt; state, and it also happens to be the &lt;strong&gt;data&lt;/strong&gt; state, since whenever data is available it will appear. Displaying a list of items can be as simple as this, but it is good practice to consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what happens if the list is empty?&lt;/li&gt;
&lt;li&gt;what happens if I am not able to load the list?&lt;/li&gt;
&lt;li&gt;how can the user know if the load is still going on?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These questions try to fill the gaps in the user's decision-making process: whenever the cycle of action starts, the component has to follow up with new states, giving feedback on the actions taken by the user. Let's add some states to our component&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Component({
    selector: 'app-dummy-component',
    standalone: true,
    template: `&amp;lt;ul *ngIf="list.length &amp;gt; 0; else listPlaceholder&amp;gt; 
                       &amp;lt;li *ngFor="let item of list"&amp;gt;
                           {{item.name}}
                       &amp;lt;/li&amp;gt;
                   &amp;lt;/ul&amp;gt;

                   &amp;lt;ngTemplate #listPlaceholder&amp;gt;
                       &amp;lt;h4&amp;gt; You have no items to display &amp;lt;/h4&amp;gt;
                   &amp;lt;/ngTemplate&amp;gt;

                   &amp;lt;button (click)="loadList()" &amp;gt;
                       &amp;lt;p *ngIf="isLoading; else loadPlaceholder&amp;gt;
                           Load Items
                       &amp;lt;/p&amp;gt;
                       &amp;lt;ngTemplate #loadPlaceholder&amp;gt;
                           ... Loading items ...
                       &amp;lt;/ngTemplate&amp;gt;
                   &amp;lt;/button&amp;gt;`,
})
export class DummyComponent {
    public list: any[] = []
    public isLoading: boolean = false
    constructor() {}
    public loadList(){ 
        if(this.isLoading) return
        this.isLoading = true
        ... // load the list
        this.isLoading = false
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Our updated component is now capable of giving feedback on two new occasions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Empty state&lt;/strong&gt;: now we are able to let the user know when the list is empty. This brings clarity to what is being displayed, since we can now differentiate an empty list from a failure of the system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Loading state&lt;/strong&gt;: displaying any kind of loader gives instant feedback to users regarding their actions. Whenever the user acts on a screen, something must change, and with the loading feedback the user will be able to perceive a change and trigger a new cycle of action.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's add some error handling to our simple component list:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Component({
    selector: 'app-dummy-component',
    standalone: true,
    template: `&amp;lt;ul *ngIf="!error &amp;amp;&amp;amp; list.length &amp;gt; 0; else listPlaceholder&amp;gt; 
                       &amp;lt;li *ngFor="let item of list"&amp;gt;
                           {{item.name}}
                       &amp;lt;/li&amp;gt;
                   &amp;lt;/ul&amp;gt;

                   &amp;lt;ngTemplate #listPlaceholder&amp;gt;
                       &amp;lt;h4&amp;gt; You have no items to display &amp;lt;/h4&amp;gt;
                   &amp;lt;/ngTemplate&amp;gt;

                   &amp;lt;h4 *ngIf="error"&amp;gt;{{error}}&amp;lt;/h4&amp;gt;

                   &amp;lt;button (click)="loadList()" &amp;gt;
                       &amp;lt;p *ngIf="isLoading; else loadPlaceholder&amp;gt;
                           Load Items
                       &amp;lt;/p&amp;gt;
                       &amp;lt;ngTemplate #loadPlaceholder&amp;gt;
                           ... Loading items ...
                       &amp;lt;/ngTemplate&amp;gt;
                   &amp;lt;/button&amp;gt;`,
})
export class DummyComponent {
    public list: any[] = []
    public isLoading: boolean = false
    public error: String|undefined = undefined
    constructor() {}
    public loadList(){
        try{
            if(this.isLoading) return
            this.isLoading = true
            ... // load the list
            this.isLoading = false
        }catch(error){
            this.error = 'Error while loading the list'
        } 
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we are also able to handle the &lt;strong&gt;error&lt;/strong&gt; state on our component. &lt;/p&gt;

&lt;p&gt;By caring about a component's state and cycle of action we increase its complexity, since we need new variables and conditions to handle everything properly; but by doing so we also get a more stable and robust component that handles different situations accordingly. The component will also be able to fully respond to the user's interactions, providing feedback in a fluid cycle of action. &lt;/p&gt;

&lt;p&gt;That's mostly it. Keep in mind that not all components will have all states, though. Don't go for the number of states but for clarity, and always try to &lt;strong&gt;let your user know what is going on&lt;/strong&gt;. The more you care about this in your smaller components, the better your whole app will cover the gaps in the decision-making process and become seamless in your users' lives.&lt;/p&gt;

</description>
      <category>design</category>
      <category>ux</category>
      <category>angular</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Validating the right way: API gateway with JsonSchema</title>
      <dc:creator>Jodamco</dc:creator>
      <pubDate>Sun, 19 May 2024 19:04:34 +0000</pubDate>
      <link>https://dev.to/jodamco/validating-the-right-way-api-gateway-with-jsonschema-5f7</link>
      <guid>https://dev.to/jodamco/validating-the-right-way-api-gateway-with-jsonschema-5f7</guid>
      <description>&lt;p&gt;We all know input validation, right? Tons of ways on doing it and it saves us a lot of time preventing troubles of many types. There are different types of validation and for different use cases we may use different approachs. We may validade inputs with some requirements, custom functions and, whenever using #Angular we are able to validate things through some high level Validation function from Reactive Forms module. &lt;/p&gt;

&lt;p&gt;Well, before starting with backend work I always caught myself thinking about how to validate the body of a REST request. Don't get me wrong, I can for sure guarantee that things will be sent fine, but I always wondered if the backend function receiving the data would have to parse it back and validate it the same way I do when the front end receives data. Some time passed, I started doing backend work and had to come up with a solution, and since we used API Gateways, the answer was right there. &lt;/p&gt;

&lt;p&gt;When I was getting things done and understanding things properly with Serverless and API Gateway, I used to validate inputs like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

module.exports.myAPIPostFn = async (event) =&amp;gt; {
    const { prop1, prop2, prop3 } = event.body
    if(notNull(prop1) || notNull(prop2) || notNull(prop3))
        throw new Error('Error [400]: invalid body')

    ... // code continues...
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It doesn't sit right, you see? &lt;strong&gt;By doing that, I feel like I'm making the function responsible for its input integrity, while it should receive things right in the first place&lt;/strong&gt;. Of course, if I wanted to be sure no side effects would occur I could try to use some classes and models, but that also didn't fit the way things were being done. &lt;/p&gt;

&lt;p&gt;If the caller was the one responsible, then I should find something to validate things on its side. That's when I found input validation with JSON Schema. As stated on its &lt;a href="https://json-schema.org/" rel="noopener noreferrer"&gt;official website&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;JSON Schema is the vocabulary that enables JSON data consistency, validity, and interoperability at scale&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;See? Just what I was looking for. It has a set of rules that one can use in a very declarative way to define what a JSON document should look like. A JSON Schema looks like this&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "type": "object",
    "required": [
        "name",
        "surName",
        "age",
        "email",
        "phone",
        "isMarried"
    ],
    "properties": {
        "name": { "type": "string" },
        "surName": { "type": "string" },
        "age": { "type": "integer" },
        "email": { "type": "string" },
        "phone": { "type": "string" },
        "isMarried": { "type": "boolean" }
    },
    "additionalProperties": false
}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this schema &lt;strong&gt;I am able to define the object's properties by name and type and also prevent additional props from coming with it&lt;/strong&gt;. The best part, when using Serverless with API Gateway, is that the integration is as simple as including a line of code (actually 3, if you count line breaks):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

newClient:
    handler: src/functions/client/handler.newClient
    events:
        - http:
              path: client
              method: put
              integration: LAMBDA
              authorizer: ${self:custom.authorizerConfig}
              request:
                  schemas:
                      application/json: ${file(schemas/new-client.json)}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When using Serverless you can provide several configs for your event type, and with the HTTP event type you are also able to define a schema for the request. &lt;strong&gt;When the schema is defined, Serverless will set up the API so that it uses the provided schema to validate the body of each incoming request&lt;/strong&gt;. This ensures that when the Lambda function executes, &lt;code&gt;event.body&lt;/code&gt; will have all the desired properties and, if it doesn't, API Gateway will gracefully respond with &lt;code&gt;Invalid request body&lt;/code&gt; without ever calling the Lambda.&lt;/p&gt;

&lt;p&gt;In the end I got quite happy with this approach since it brought me some advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cleaner code (we all love it)&lt;/li&gt;
&lt;li&gt;Better control over the contract between frontend and backend without changing the way the backend works (AKA not implementing models)&lt;/li&gt;
&lt;li&gt;One more security gap closed, since now I am able to block some malicious inputs&lt;/li&gt;
&lt;li&gt;Cost saving: I no longer have to execute Lambda code just to notice that inputs are wrong. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ajeyuja2firgdx60e12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ajeyuja2firgdx60e12.png" alt="Looney Toones "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it for now. Let me know in the comment section if you would have done it differently. I am also looking for a similar tool to validate data coming from &lt;strong&gt;path and query parameters&lt;/strong&gt;; if you happen to know one, I would be glad to hear!&lt;/p&gt;

</description>
      <category>apigateway</category>
      <category>jsonschema</category>
      <category>javascript</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
