<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: dotnetmauiman</title>
    <description>The latest articles on DEV Community by dotnetmauiman (@dotnetmauiman).</description>
    <link>https://dev.to/dotnetmauiman</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F800441%2Fe17ac902-6937-4c66-a8d0-45b3347f16b6.jpg</url>
      <title>DEV Community: dotnetmauiman</title>
      <link>https://dev.to/dotnetmauiman</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dotnetmauiman"/>
    <language>en</language>
    <item>
      <title>Digital Compositing on Mobile</title>
      <dc:creator>dotnetmauiman</dc:creator>
      <pubDate>Tue, 06 Sep 2022 01:18:45 +0000</pubDate>
      <link>https://dev.to/dotnetmauiman/digital-compositing-on-mobile-219h</link>
      <guid>https://dev.to/dotnetmauiman/digital-compositing-on-mobile-219h</guid>
      <description>&lt;p&gt;Node-based Compositing is the process of combining multiple seemingly simple nodes to render and achieve a desired result. This process has wide applications in the graphics software industry, especially in procedural image generation, motion graphics, animation, 3D modeling, and visual effects VFX.&lt;/p&gt;

&lt;p&gt;The paradigm of a node-based tool involves linking basic media objects in a procedural map, or node graph, and intuitively laying out each step in a sequential progression of inputs and outputs. Any parameter from an earlier step can be modified to change the final outcome, with the results instantly visible, and each node, being procedural, can easily be reused, saving time and effort.&lt;/p&gt;

&lt;p&gt;To visualize what it looks like, the user interface of a node-compositing tool is shown below. Videos or images are first read in and then processed by a sequence of operations before being merged into one final outcome. The sequence of operations, the node graph (a Directed Acyclic Graph, or DAG), is represented as a flowchart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Read media  Read media
 |           |     
 V           V     
 Premult     Color Correction
 |           |
 |           V
 |           Transform
 |           |
 |           V
 |           Gaussian Blur
 |           |
 V           V
     Merge
       |
       V
     Viewer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As the number of nodes increases, the node graph, together with a large monitor, provides an overview of what is happening while also enabling one to focus on a particular step (node) and make changes to that step to achieve the end result.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Small-Screen Problem on Mobile
&lt;/h2&gt;

&lt;p&gt;On a mobile device, a pure node-based app is not common. Resource constraints are one limiting issue. Putting that aside, since mobile hardware has come a long way, with modern chips capable of advanced image and video processing, the other limiting issue is the small screen available for displaying a node-based user interface.&lt;/p&gt;

&lt;p&gt;As a start, arranging the node properties screens, the node graph, the dope sheet, and the curve editor together on one small screen presents a problem. Compositing a node graph with a flowchart-like user interface is also more difficult, especially when trying to connect nodes while frequently panning the small screen. This is made worse when you need to connect to a distant node with multiple inputs, or when you need to link node properties, which may involve the timeline, for tracking, animating objects, or rotoscoping.&lt;/p&gt;
&lt;h2&gt;
  
  
  Main Idea
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;Node Pipeline&lt;/strong&gt; is proposed to represent the node graph on a mobile device instead of using a flowchart-like user interface.&lt;/p&gt;
&lt;h2&gt;
  
  
  Node Pipeline for Node Graph
&lt;/h2&gt;

&lt;p&gt;In most digital compositing software, a node graph is commonly represented as a flowchart of a Directed Acyclic Graph (DAG). This is shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Read media  Read media
 |           |     
 V           V     
 Premult     Color Correction
 |           |
 |           V
 |           Transform
 |           |
 |           V
 |           Gaussian Blur
 |           |
 V           V
     Merge
       |
       V
     Viewer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On a small-screen mobile device, the DAG can be decomposed into multiple series of nodes, as shown below.&lt;/p&gt;

&lt;p&gt;For example, the node graph above can be represented as a simple list below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Read media
 |                
 V               
 Premult     


 Read media
 |              
 V             
 Color Correction
 |           
 V
 Transform
 |           
 V          
 Gaussian Blur      


 Merge
 |
 V
 Viewer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the break in the first series after the 'Premult' node and the second break after the 'Gaussian Blur' node. From here on, we will refer to the above arrangement as a &lt;strong&gt;pipeline&lt;/strong&gt; (or &lt;strong&gt;Node Pipeline&lt;/strong&gt;) with breaks.&lt;/p&gt;

&lt;p&gt;This arrangement has the obvious advantage of being easily represented on a mobile device with a simple list, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Read media         &amp;gt;
2. Premult            &amp;gt;

3. Read media         &amp;gt;
4. Color Correction   &amp;gt;
5. Transform          &amp;gt;
6. Gaussian Blur      &amp;gt;

7. Merge 2,6          &amp;gt;
8. Viewer             &amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The lines joining the nodes can be further represented by listing the input nodes used, much like a spreadsheet formula (e.g., =SUM(A1,A2)). In the above, the 'Premult' node uses node 1 (Read media) as its input, while the 'Merge' node uses nodes 2 and 6. We can also represent this information on the pipeline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Read media       (input none)  &amp;gt;
2. Premult          (input 1)     &amp;gt;

3. Read media       (input none)  &amp;gt;
4. Color Correction (input 3)     &amp;gt;
5. Transform        (input 4)     &amp;gt;
6. Gaussian Blur    (input 5)     &amp;gt;

7. Merge            (input 2,6)   &amp;gt;
8. Viewer           (input 7)     &amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simplifying it further:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Read media       (none)  &amp;gt;
2. Premult          (1)     &amp;gt;

3. Read media       (none)  &amp;gt;
4. Color Correction (3)     &amp;gt;
5. Transform        (4)     &amp;gt;
6. Gaussian Blur    (5)     &amp;gt;

7. Merge            (2,6)   &amp;gt;
8. Viewer           (7)     &amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Implications
&lt;/h2&gt;

&lt;p&gt;With the above, let's think a little further about the implications of using a node pipeline as the user interface for managing a node graph on a small-screen mobile device.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A node pipeline requires a shift in thinking when you use it to manage the node graph. The parallel branches of the node graph above become three sequences (or series) of nodes in the pipeline. The lines joining nodes in a node graph are now represented by the target node referencing its input nodes. For example, node 2 references node 1 (as its input), and node 7 references nodes 2 and 6.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It should be noted that, mathematically, a node graph, which is a Directed Acyclic Graph, can be decomposed into a node pipeline (the series of nodes above) without any loss of information. This means a pipeline user interface can be as flexible as a flowchart-like node graph.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A pipeline makes it easy to navigate the node graph on a small screen as you can quickly scroll up and down the list. Tapping on any item in the list (a node) can further bring up a screen for changing node properties. This user interface should be familiar to many mobile users as it is similar to the user interface for managing mobile phone 'Settings'.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The cognitive overhead of frequently panning a flowchart-like node graph on a small screen is significantly reduced. This improves the usability and ease of managing a node graph on a small screen.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The referencing model for linking nodes on a pipeline is similar to using a spreadsheet: 'Merge' below is like a formula applied to cells 2 and 6.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;7. Merge (2,6) &amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
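The decomposition is mechanical enough to sketch in plain Swift. In the sketch below, PipelineNode and series(of:) are illustrative names rather than the Nodef API: a node continues the current series only when its sole input is the node immediately before it; any other input pattern starts a new series (a break).

```swift
// Minimal sketch: a node pipeline as a flat list with input references.
// PipelineNode and series(of:) are illustrative, not the Nodef API.
struct PipelineNode {
    let id: Int            // 1-based position in the pipeline
    let name: String
    let inputs: [Int]      // ids of input nodes, like spreadsheet cell refs
}

// The example graph from this article, flattened into a pipeline.
let pipeline: [PipelineNode] = [
    PipelineNode(id: 1, name: "Read media",       inputs: []),
    PipelineNode(id: 2, name: "Premult",          inputs: [1]),
    PipelineNode(id: 3, name: "Read media",       inputs: []),
    PipelineNode(id: 4, name: "Color Correction", inputs: [3]),
    PipelineNode(id: 5, name: "Transform",        inputs: [4]),
    PipelineNode(id: 6, name: "Gaussian Blur",    inputs: [5]),
    PipelineNode(id: 7, name: "Merge",            inputs: [2, 6]),
    PipelineNode(id: 8, name: "Viewer",           inputs: [7]),
]

// Split the flat list into series: a node extends the current series
// only when its sole input is the node directly above it.
func series(of pipeline: [PipelineNode]) -> [[String]] {
    var result: [[String]] = []
    for node in pipeline {
        if node.inputs == [node.id - 1], !result.isEmpty {
            result[result.count - 1].append(node.name)  // continue the chain
        } else {
            result.append([node.name])                  // break: new series
        }
    }
    return result
}
```

Applied to the pipeline above, this recovers the three series: (Read media, Premult), (Read media, Color Correction, Transform, Gaussian Blur), and (Merge, Viewer).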



&lt;p&gt;The pipeline approach leads to other interesting productivity gains in compositing on a mobile device. Please check out our open-source &lt;a href="https://github.com/Misfits-Rebels-Outcasts/Nodef"&gt;Nodef&lt;/a&gt; repo on GitHub or the &lt;a href="https://apps.apple.com/us/app/nodef-photo-filters-effects/id1640788489"&gt;Nodef app&lt;/a&gt; on the App Store for the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Auto Chaining &amp;amp; Reverse Compositing&lt;/li&gt;
&lt;li&gt;Viewer Cycling&lt;/li&gt;
&lt;li&gt;Directed Acyclic Graph (DAG) Generation/Import&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In those resources, we address how nodes are joined through Auto Chaining and Reverse Compositing, ponder further on linking node properties, and look at setting the Viewer to a selected intermediate node on the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aTaeP4lx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2pi2xznz3gc8ki7gg1xr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aTaeP4lx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2pi2xznz3gc8ki7gg1xr.png" alt="Image description" width="880" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>mobile</category>
      <category>opensource</category>
      <category>ux</category>
      <category>swift</category>
    </item>
    <item>
      <title>From Chaining/Blending to Compositing Core Image CIFilter</title>
      <dc:creator>dotnetmauiman</dc:creator>
      <pubDate>Tue, 06 Sep 2022 01:07:41 +0000</pubDate>
      <link>https://dev.to/dotnetmauiman/from-chainingblending-to-compositing-core-image-cifilter-44lb</link>
      <guid>https://dev.to/dotnetmauiman/from-chainingblending-to-compositing-core-image-cifilter-44lb</guid>
      <description>&lt;p&gt;Core Image is a powerful iOS framework that makes hardware-accelerated image manipulation easy. Oftentimes, as developers, we use it to add graphical effects to images in our app. The typical process involves choosing a right CIFilter, setting parameters, and applying the filter to an image.&lt;/p&gt;

&lt;p&gt;Sometimes the process may involve chaining several filters to get the desired result. For example, we can use a CIColorControls filter to first adjust the saturation of an image and then apply a CIVignette filter to add a dark fading border around the edges.&lt;/p&gt;
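In plain Core Image, such a chain is simply a matter of feeding one filter's outputImage into the next. The sketch below is a minimal illustration using the standard Core Image filter names and input keys; the parameter values are arbitrary and error handling is reduced to optionals.

```swift
import CoreImage
import UIKit

// Chain CIColorControls (saturation boost) into CIVignette on a UIImage.
func saturatedVignette(_ input: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: input),
          let color = CIFilter(name: "CIColorControls"),
          let vignette = CIFilter(name: "CIVignette") else { return nil }

    color.setValue(ciImage, forKey: kCIInputImageKey)
    color.setValue(1.5, forKey: kCIInputSaturationKey)   // boost saturation

    // The output of the first filter becomes the input of the second.
    vignette.setValue(color.outputImage, forKey: kCIInputImageKey)
    vignette.setValue(1.0, forKey: kCIInputIntensityKey)
    vignette.setValue(200.0, forKey: kCIInputRadiusKey)  // dark fading border

    guard let output = vignette.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```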

&lt;h2&gt;
  
  
  Background for a Wrapper Library
&lt;/h2&gt;

&lt;p&gt;Core Image's CIFilter by itself is already a full-fledged library that we can use independently. Furthermore, we often find ourselves referring to the open-source Filterpedia project by Simon Gladman for implementation details. With these, we should already have everything we need. So why bother with another library?&lt;/p&gt;

&lt;p&gt;There may be times when we need to go beyond chaining a few CIFilters. We may need to chain filters, blend the output with another image, and then apply more filters to achieve the desired result. In other words, we may need to apply some kind of node graph to get the effect we want. As developers, we find ourselves repeating this programmatic chaining and blending process.&lt;/p&gt;

&lt;p&gt;For example, the CIEdgeWork filter produces a stylized black-and-white rendition of an image that looks similar to a woodblock cutout. The output of this filter, however, requires a background image to be visualized. This calls for a compositing filter, CISourceAtopCompositing, to place the output of CIEdgeWork over a constant color background (CIConstantColorGenerator).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xkL97cLK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aj990p12rir49n1ndcm5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xkL97cLK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aj990p12rir49n1ndcm5.jpg" alt="CIFilter" width="880" height="1153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Beyond this, we may want to twirl the output and then apply an Addition composite/blend with another image. We often need to test this out in code.&lt;/p&gt;
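As a rough sketch of that experiment in plain Core Image (the filter names are the standard ones; the twirl center, radii, and the black background color are arbitrary choices for illustration):

```swift
import CoreImage

// Place CIEdgeWork output over a constant-color background, twirl it,
// then add-composite the result with another image.
func edgeWorkComposite(_ source: CIImage, addWith other: CIImage) -> CIImage? {
    guard let edges = CIFilter(name: "CIEdgeWork"),
          let background = CIFilter(name: "CIConstantColorGenerator"),
          let atop = CIFilter(name: "CISourceAtopCompositing"),
          let twirl = CIFilter(name: "CITwirlDistortion"),
          let add = CIFilter(name: "CIAdditionCompositing") else { return nil }

    edges.setValue(source, forKey: kCIInputImageKey)
    background.setValue(CIColor.black, forKey: kCIInputColorKey)

    // Composite the edge-work strokes over the constant color.
    atop.setValue(edges.outputImage, forKey: kCIInputImageKey)
    atop.setValue(background.outputImage?.cropped(to: source.extent),
                  forKey: kCIInputBackgroundImageKey)

    // Twirl the composite around its center.
    twirl.setValue(atop.outputImage, forKey: kCIInputImageKey)
    twirl.setValue(CIVector(x: source.extent.midX, y: source.extent.midY),
                   forKey: kCIInputCenterKey)

    // Addition blend with a second image.
    add.setValue(twirl.outputImage, forKey: kCIInputImageKey)
    add.setValue(other, forKey: kCIInputBackgroundImageKey)
    return add.outputImage
}
```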

&lt;h2&gt;
  
  
  Nodef library
&lt;/h2&gt;

&lt;p&gt;The Nodef library is a very simple wrapper for applying a node graph (of CIFilters) to an image. It is the same library used in the open-source Nodef app that reimagines node-based compositing on a mobile device with an innovative Node Pipeline. The app enables us to perform much of the compositing behavior we desire without changing a single line of code.&lt;/p&gt;

&lt;p&gt;Besides node-based compositing, the library also provides default values for each of the different filters and implements the 'Codable' protocol for saving and loading a node graph as JSON. Using the JSON file, we can composite creatively on a mobile device, save the composite to a file, and then load it on a desktop computer with an application or command-line tool.&lt;/p&gt;
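The save/load idea can be sketched with a plain Codable type. The FilterNode and NodeGraph structs below are illustrative stand-ins, not the library's actual types (the real JSON shape appears later in this post):

```swift
import Foundation

// Illustrative node description, not the library's actual type.
struct FilterNode: Codable, Equatable {
    var type: String
    var alias: String = ""
    var inputImageAlias: String = ""
    var backgroundImageAlias: String = ""
}

struct NodeGraph: Codable, Equatable {
    var filterList: [FilterNode]
}

// Save a node graph as JSON, then load it back.
let graph = NodeGraph(filterList: [
    FilterNode(type: "CISepiaTone"),
    FilterNode(type: "CIZoomBlur"),
])

let data = try JSONEncoder().encode(graph)
let restored = try JSONDecoder().decode(NodeGraph.self, from: data)
assert(restored == graph)  // lossless round trip
```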

&lt;p&gt;Happy Compositing on Mobile!&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a CIColorMonochrome filter
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func createOneFilter(_ inputImage: UIImage) -&amp;gt; UIImage {

    let filters = FiltersX()
    filters.add(filterHolder: filters.getFilterWithHolder("Color Monochrome"))
    return filters.applyFilters(image: inputImage)

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f_V9V5Iv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ya9298vo67ur2h4v198.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f_V9V5Iv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ya9298vo67ur2h4v198.jpg" alt="chainfilters" width="880" height="1153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Chain a CISepiaTone filter and CIZoomBlur filter
&lt;/h2&gt;

&lt;p&gt;CISepiaTone takes the original image as its input image, and CIZoomBlur takes the output of CISepiaTone as its input image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func chainFilters(_ inputImage: UIImage) -&amp;gt; UIImage {

    let filters = FiltersX()
    filters.add(filterHolder: filters.getFilterWithHolder("Sepia Tone"))
    filters.add(filterHolder: filters.getFilterWithHolder("Zoom Blur"))
    return filters.applyFilters(image: inputImage)

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Apply a CIDotScreen filter, then blend it with the original image using CISubtractBlendMode
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func blendFilters(_ inputImage: UIImage) -&amp;gt; UIImage {

    let filters = FiltersX()

    filters.add(filterHolder: filters.getFilterWithHolder("Dot Screen"))
    filters.add(filterHolder: filters.getFilterWithHolder("Subtract Blend Mode"))

    return filters.applyFilters(image: inputImage)

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CISubtractBlendMode takes the output of CIDotScreen as the inputImage and the original image as the backgroundImage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9Lu-srcy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/erb5rui5f8cl8kn0g0mu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9Lu-srcy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/erb5rui5f8cl8kn0g0mu.jpg" alt="Blending" width="880" height="1153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a CICheckerboardGenerator filter
&lt;/h2&gt;

&lt;p&gt;CICheckerboardGenerator requires no input image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func generatorFilters(_ inputImage: UIImage) -&amp;gt; UIImage {

    let filters = FiltersX()

    filters.size=CGSize(width:inputImage.size.width, height:inputImage.size.height)
    filters.add(filterHolder: filters.getFilterWithHolder("Checkerboard Generator"))

    return filters.applyFilters(image: inputImage)

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Changing CIFilter properties through the wrapper class
&lt;/h2&gt;

&lt;p&gt;Changing the width of the CICheckerboardGenerator.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func filterProperties(_ inputImage: UIImage) -&amp;gt; UIImage {

    let filters = FiltersX()
    filters.size=CGSize(width:inputImage.size.width, height:inputImage.size.height)

    let fxHolder=filters.getFilterWithHolder("Checkerboard Generator")
    (fxHolder.filter as! CheckerboardGeneratorFX).width = 500
    filters.add(filterHolder: fxHolder)

    return filters.applyFilters(image: inputImage)

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pg60tPAf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1u2zv0vicitf4u9v88mj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pg60tPAf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1u2zv0vicitf4u9v88mj.jpg" alt="Checkerboard" width="880" height="1153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Compositing CIFilter
&lt;/h2&gt;

&lt;p&gt;Apply a CIMultiplyBlendMode to the CIColorMonochrome version of the original image and a CICheckerboardGenerator output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func compositingFilters(_ inputImage: UIImage) -&amp;gt; UIImage {

    let filters = FiltersX()

    filters.size=CGSize(width:inputImage.size.width, height:inputImage.size.height)

    filters.add(filterHolder: filters.getFilterWithHolder("Color Monochrome")) //Node 1
    filters.add(filterHolder: filters.getFilterWithHolder("Checkerboard Generator")) //Node 2

    let fxHolder=filters.getFilterWithHolder("Multiply Blend Mode")
    (fxHolder.filter as! MultiplyBlendModeFX).inputImageAlias = "2"
    (fxHolder.filter as! MultiplyBlendModeFX).backgroundImageAlias = "1"
    filters.add(filterHolder: fxHolder)

    return filters.applyFilters(image: inputImage)

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8RZa4U88--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kdqztyl20bdjbnpvudj5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8RZa4U88--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kdqztyl20bdjbnpvudj5.jpg" alt="Compositing" width="880" height="1153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Node Graph Compositing CIFilters
&lt;/h2&gt;

&lt;p&gt;First, chain the original image with the following filters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Original Image -&amp;gt; CILineScreen -&amp;gt; CIColorMonochrome
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, generate a checkerboard and then apply a triangle tile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CheckboardGenerator -&amp;gt; CITriangleTile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, multiply blend the output of the color monochrome with the output of the triangle tile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CIMultiplyBlendMode on CIColorMonochrome and CITriangleTile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kksJ0Yz---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/su5sv2rdxd2dge8yk20y.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kksJ0Yz---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/su5sv2rdxd2dge8yk20y.jpg" alt="Image description" width="880" height="1153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Swift code for the node graph above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func nodeGraphFilters(_ inputImage: UIImage) -&amp;gt; UIImage {

    let filters = FiltersX()

    filters.size=CGSize(width:inputImage.size.width, height:inputImage.size.height)

    filters.add(filterHolder: filters.getFilterWithHolder("Line Screen")) //Node 1
    filters.add(filterHolder: filters.getFilterWithHolder("Color Monochrome")) //Node 2
    filters.add(filterHolder: filters.getFilterWithHolder("Checkerboard Generator")) //Node 3
    filters.add(filterHolder: filters.getFilterWithHolder("Triangle Tile")) //Node 4

    let fxHolder=filters.getFilterWithHolder("Multiply Blend Mode")
    (fxHolder.filter as! MultiplyBlendModeFX).inputImageAlias = "4"
    (fxHolder.filter as! MultiplyBlendModeFX).backgroundImageAlias = "2"
    filters.add(filterHolder: fxHolder)

    return filters.applyFilters(image: inputImage)

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Saving the Node Graph as a JSON string
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    let filters = FiltersX()
    filters.add(filterHolder: filters.getFilterWithHolder("Color Controls"))
    filters.add(filterHolder: filters.getFilterWithHolder("Sepia Tone"))
    filters.add(filterHolder: filters.getFilterWithHolder("Zoom Blur"))
    pageSettings.filters = filters

    let encoder=JSONEncoder()
    encoder.outputFormatting = .prettyPrinted

    let pageSettingsData = (try? encoder.encode(pageSettings))!
    let pageSettingsDataStr = String(data: pageSettingsData, encoding: .utf8)!

    var jsonObject: [String: String] = [String: String]()
    var savedJSONStr = ""
    jsonObject["page_settings"]=pageSettingsDataStr


    if let jsonData = try? encoder.encode(jsonObject) {
        if let jsonString = String(data: jsonData, encoding: .utf8) {

            var jsonLabel: [String: String] = [String: String]()
            jsonLabel["nodef"]=jsonString

            if let jsonLabelData = try? encoder.encode(jsonLabel) {
                if let jsonLabelString = String(data: jsonLabelData, encoding: .utf8) {
                    savedJSONStr=jsonLabelString
                }
            }
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Loading the JSON String for initializing the node graph
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    if let data = savedJSONStr.data(using: .utf8) {
                let labelDictionary : [String: Any] = (try? JSONSerialization.jsonObject(with: data, options: []) as? [String: Any])!
                let labelStr = labelDictionary["nodef"] as? String

                if let attributesData = labelStr!.data(using: .utf8) {
                    let attributesDictionary : [String: Any] = (try? JSONSerialization.jsonObject(with: attributesData, options: []) as? [String: Any])!
                    let loadedPageSettingsStr = attributesDictionary["page_settings"] as? String

                    print(loadedPageSettingsStr as Any)
                }
    }    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;will give you&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{  
   filters : {    
    filterList : [      
    {
      type : CIColorControls,      
      alias : "",        
      inputImageAlias : "",        
      backgroundImageAlias : "",        
      brightness : 0,        
      saturation : 1,        
      contrast : 1        
    },      
    {
      type : CISepiaTone,        
      alias : "",        
      inputImageAlias : "",        
      backgroundImageAlias : "",        
      intensity : 1      
    },      
    {        
     type : CIZoomBlur,      
     alias : "",        
     inputImageAlias : "",        
     backgroundImageAlias : "",        
     centerX : 0,        
     centerY : 0,        
     amount : 20        
    }    
    ],    
    size : [0,0]  
    }
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Compiling the Source
&lt;/h2&gt;

&lt;p&gt;Prerequisites&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Xcode 13&lt;/li&gt;
&lt;li&gt;iOS 15&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Build&lt;/p&gt;

&lt;p&gt;Download the &lt;a href="https://github.com/Misfits-Rebels-Outcasts/Nodef/tree/main/code/Nodef%20Filters%20Library"&gt;Source Code&lt;/a&gt;&lt;br&gt;
Launch Xcode and load Nodef.xcodeproj&lt;br&gt;
Build and run on the iPhone Simulator or a device&lt;/p&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;p&gt;Download &lt;a href="https://apps.apple.com/us/app/nodef-photo-filters-effects/id1640788489"&gt;Nodef &lt;/a&gt;on the App Store&lt;/p&gt;

</description>
      <category>swift</category>
      <category>ios</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Develop a WYSIWYG drawing app with SwiftUI</title>
      <dc:creator>dotnetmauiman</dc:creator>
      <pubDate>Tue, 26 Apr 2022 01:27:07 +0000</pubDate>
      <link>https://dev.to/dotnetmauiman/develop-a-wysiwyg-drawing-app-with-swiftui-5chn</link>
      <guid>https://dev.to/dotnetmauiman/develop-a-wysiwyg-drawing-app-with-swiftui-5chn</guid>
      <description>&lt;p&gt;This article explores the use of SwiftUI for developing a What-You-See-Is-What-You-Get (WYSIWYG) vector drawing app. A WYSIWYG vector drawing app can be thought of as one that renders different objects such as a rectangle, an ellipse, a text, or other shapes on a canvas; the moving of objects around a canvas by dragging; and changing object properties by tapping on objects. This is illustrated in the screenshot of an iOS app below. Specifically, this article explores the use of a SwiftUI View as the "Drawing Canvas" instead of using a Core Graphics Canvas or a SwiftUI Canvas.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UPvtSJi9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jwf55x8587hz1lq9k7le.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UPvtSJi9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jwf55x8587hz1lq9k7le.png" alt="Image description" width="880" height="1173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use SwiftUI View as the drawing canvas?
&lt;/h2&gt;

&lt;p&gt;When developing a vector drawing app in the Apple ecosystem, the options that come immediately to mind are Core Graphics and SwiftUI Canvas. Both are extremely fast, easy to use, and provide a canvas for us to draw on, so both are good choices for the canvas of a drawing app. However, when the drawing app requires WYSIWYG behavior, a Swift developer realizes that handling object interactions such as drag, move, and resize requires the use of Apple's gestures and events.&lt;/p&gt;

&lt;p&gt;This makes it hard for a Swift developer to ignore the use of a SwiftUI View as the "Drawing Canvas", because all of Apple's gestures and events are directly supported by a SwiftUI View. If one takes a step back and thinks about it, a SwiftUI View is designed by Apple for user interface (UI) development, and such a View already naturally supports all the behavior a WYSIWYG app requires: rendering views and objects, and supporting gestures and events. Using a SwiftUI View as the "Drawing Canvas" also does not prevent us from using Core Graphics, SwiftUI Canvas, or even Metal to render any underlying object that requires special treatment, as all three can easily be represented as a SwiftUI View.&lt;/p&gt;
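For example, making a shape draggable requires nothing more than attaching a DragGesture to its view. The sketch below is a minimal illustration; the view and state names are made up for this example.

```swift
import SwiftUI

// A draggable rectangle: the SwiftUI view itself handles the move interaction.
struct DraggableShape: View {
    @State private var location = CGPoint(x: 100, y: 100)

    var body: some View {
        Rectangle()
            .fill(Color.blue)
            .frame(width: 100, height: 100)
            .position(location)
            .gesture(
                DragGesture()
                    .onChanged { value in
                        location = value.location  // follow the finger
                    }
            )
    }
}
```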

&lt;h2&gt;
  
  
  The outcome of the Exploration
&lt;/h2&gt;

&lt;p&gt;Open Source&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Misfits-Rebels-Outcasts/SwiftUI-WYSIWYG-Draw"&gt;SwiftUI-WYSIWYG-Draw&lt;/a&gt; - GitHub (MIT License)&lt;/p&gt;

&lt;p&gt;App&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.barcoderesource.com/barcodelabelappleapp.shtml"&gt;Barcode &amp;amp; Label&lt;/a&gt; - a label design (WYSIWYG), mailing address, envelope, and barcode printing app available on the Apple App Store&lt;/p&gt;

&lt;p&gt;The following section highlights the advantages, disadvantages, and challenges of using SwiftUI, with a SwiftUI View as the "Drawing Canvas", for developing a WYSIWYG app. This design is probably not suitable for everyone, but some of the points highlighted may be useful considerations for a WYSIWYG drawing app developer.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Design
&lt;/h2&gt;

&lt;p&gt;As mentioned above, we will be using a SwiftUI View as the "Drawing Canvas" to draw shapes, text, or images. But how do we go about doing this? The following is as simple as it gets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;         var body: some View {        
            ZStack {
                ForEach(shapes.shapeList){
                    shape in
                    ShapeViewX()
                    }            
            }
        } 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fundamental concept is simply to draw a list of shapes (each represented as a SwiftUI View) on a ZStack inside a SwiftUI View. The shape list and shape classes are defined below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;         class ShapesX{

            @Published var shapeList = [ShapeX]()

        }

        class ShapeX {

            @Published var location: CGPoint = CGPoint(x: 50, y: 50)
            @Published var size: CGSize = CGSize(width: 100, height: 100)
            @Published var canvasSize: CGSize = CGSize(width: 500, height: 500)

        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note - The code above is simplified to convey the concepts more easily. For example, the "ObservableObject" conformances are omitted.&lt;/p&gt;
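&lt;p&gt;For reference, a slightly fuller (but still minimal) sketch of these model classes might look like the following. This is an illustrative assumption, not the exact code of the project: the ObservableObject conformance lets a view observe each shape, the Identifiable conformance is what lets ForEach iterate over the list, and the base view() function is the one a shape subclass overrides to draw itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import SwiftUI

class ShapesX: ObservableObject {

    //Publishing the list lets SwiftUI refresh the canvas when shapes are added or removed
    @Published var shapeList = [ShapeX]()

}

class ShapeX: ObservableObject, Identifiable {

    //Identifiable lets ForEach distinguish each shape
    let id = UUID()

    @Published var location: CGPoint = CGPoint(x: 50, y: 50)
    @Published var size: CGSize = CGSize(width: 100, height: 100)
    @Published var canvasSize: CGSize = CGSize(width: 500, height: 500)

    //Subclasses override this to draw themselves; the base class draws nothing
    func view() -&amp;gt; AnyView {
        AnyView(EmptyView())
    }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;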

&lt;p&gt;ShapesX and ShapeX can be considered the model objects of our vector drawings. We render them in a SwiftUI View by using a SwiftUI View to represent each shape; ShapeViewX below is used for this purpose. It is interesting to note that ShapeViewX could also be implemented with a SwiftUI Canvas, with Core Graphics, or with Metal. For our scenario, we will use SwiftUI shapes such as Rectangle and Ellipse, or wrap a SwiftUI Text in a SwiftUI View, to represent objects on the canvas.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;         struct ShapeViewX: View {

            @ObservedObject var shape: ShapeX

            //Handle the interactions such as Move and Resize
            //Draw a Bounding Box   

            //Draw the Shape        
            var body: some View {
              self.shape.view() 
              //depending of the type shape is, we can draw a rectangle or ellipse etc. 
            }
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ShapeViewX is used to render different types of shapes such as a rectangle or an ellipse. The view function for drawing a Rectangle is illustrated below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class RectangleX: ShapeX {

            @Published var strokeWidth:Double = 1
            @Published var strokeColor = Color.black
            @Published var fillColor = Color.white

            override func view() -&amp;gt; AnyView {
                AnyView(
                    Rectangle()
                        .stroke(strokeColor, lineWidth: strokeWidth)
                        .background(Rectangle().fill(fillColor))
                        .frame(width: self.size.width, height: self.size.height)
                        .position(self.location)
                        )

                }
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Besides drawing shapes, ShapeViewX can be used to easily manage all gestures and events in a single place. This is illustrated below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gestures
&lt;/h2&gt;

&lt;p&gt;The Pan (drag to move) gesture for ShapeViewX can be as simple as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;struct ShapeViewX: View {

            @ObservedObject var shape: ShapeX
            @State private var location: CGPoint = CGPoint(x: 50, y: 50)

            var drag: some Gesture {
              DragGesture()
                 .onChanged { value in
                     self.shape.location = value.location
                 }
            }

            var body: some View {
                ZStack
                {      
                       self.shape.view()

                   Rectangle()
                    .fill(Color.white)
                    .opacity(0.001)
                    .frame(width: self.shape.size.width, height: self.shape.size.height)
                    .position(self.shape.location)
                    .gesture(
                            drag
                    )
           }  
            }
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you test the code above, you will notice a small jump when the drag starts: the shape's center snaps to the touch location. The following article explains this problem well and provides a solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://sarunw.com/posts/move-view-around-with-drag-gesture-in-swiftui"&gt;https://sarunw.com/posts/move-view-around-with-drag-gesture-in-swiftui&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Printing Problem
&lt;/h2&gt;

&lt;p&gt;Using a SwiftUI View as the WYSIWYG canvas is great for handling events and gestures. But for many vector drawing apps, the ability to export, print, and/or generate a PDF output is also important. If we need to generate a PDF, we can implement something like the following:&lt;/p&gt;

&lt;h2&gt;
  
  
  A Manual Solution
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;         let pageRect = CGRect(x:0, y:0, width: 8.5*72, height: 11.0*72)

         let renderer = UIGraphicsPDFRenderer(bounds:pageRect , format: format)    

         let pdf = renderer.pdfData { (context) in
            context.beginPage()
            let attributes = [
              NSFontAttributeName : UIFont.boldSystemFont(ofSize: 150)
            ]
            let text = "Hello!" as NSString
            text.draw(in: CGRect(x: 0, y: 0, width: 500, height: 200), withAttributes: attributes)
          }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above, we draw a piece of text ("Hello!") into the PDF. We can imagine having to translate each of our shapes and text objects into PDF with code similar to the above. This is useful when fine-grained control of every single rendering detail is required. It can, however, be tedious.&lt;/p&gt;

&lt;h2&gt;
  
  
  Convert a SwiftUI View to PDF
&lt;/h2&gt;

&lt;p&gt;Alternatively, we can convert a SwiftUI View to PDF. This is a quick and easy way if we do not want to manually translate each shape into PDF.&lt;/p&gt;

&lt;p&gt;See the following for more information:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/60753436/convert-swiftui-view-to-pdf-on-ios?msclkid=ead66797becb11ec9c47d91ad8634470"&gt;https://stackoverflow.com/questions/60753436/convert-swiftui-view-to-pdf-on-ios?msclkid=ead66797becb11ec9c47d91ad8634470&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code below illustrates how to convert a SwiftUI View and its child views to PDF by using UIGraphicsPDFRenderer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func exportToPDF() {

            let documentDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
            let outputFileURL = documentDirectory.appendingPathComponent("SwiftUI.pdf")

            let width: CGFloat = 8.5 * 72.0
            let height: CGFloat = 11.0 * 72.0
            let printout = SwiftUICanvasView()

            let pdfVC = UIHostingController(rootView: printout)
            pdfVC.view.frame = CGRect(x: 0, y: 0, width: width, height: height)

            //Render the view behind all other views
            let rootVC = UIApplication.shared.windows.first?.rootViewController
            rootVC?.addChild(pdfVC)
            rootVC?.view.insertSubview(pdfVC.view, at: 0)

            //Render the PDF
            let pdfRenderer = UIGraphicsPDFRenderer(bounds: CGRect(x: 0, y: 0, width: 8.5 * 72.0, height: height))

            do {
                try pdfRenderer.writePDF(to: outputFileURL, withActions: { (context) in
                    context.beginPage()
                    pdfVC.view.layer.render(in: context.cgContext)
                })

                self.exportURL = outputFileURL
                self.showExportSheet = true

            }catch {
                self.showError = true
                print("Could not create PDF file: \(error)")
            }

            pdfVC.removeFromParent()
            pdfVC.view.removeFromSuperview()
        }               
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Problem with converting a SwiftUI View to PDF
&lt;/h2&gt;

&lt;p&gt;If you look at the conversation thread more closely, you will notice the following:&lt;/p&gt;

&lt;p&gt;"When I tried to generate a PDF file using the the solutions in other answers I only got a blurred PDF and the quality was far from good." pawello2222&lt;/p&gt;

&lt;p&gt;When a SwiftUI View is converted into a PDF, it is converted at 72 Dots-Per-Inch (DPI). If your drawing contains high-resolution objects such as images, they will become blurry in the PDF. A low-resolution output for a WYSIWYG graphics app is not ideal. If we cannot accept a lower resolution output, does it mean we are back to the manual method of converting our SwiftUI View to a PDF?&lt;/p&gt;

&lt;p&gt;"I ended up generating the SwiftUI view in a much bigger frame and scaled down the context to the appropriate size." pawello2222&lt;/p&gt;

&lt;h2&gt;
  
  
  The Possible Solution
&lt;/h2&gt;

&lt;p&gt;One solution could be to render the "Canvas" (SwiftUI View) at a required DPI (e.g. 300 DPI) and then scale it down with "scaleEffect" for viewing on the app. For example, we can render an 8.5 x 11.0 (inches) canvas at 300 DPI. To view it on the app, we can scale it down by a factor of 72.0/300.0 (72 DPI is the DPI used by an iOS app) as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            SwiftUICanvasView() //at 300 DPI
            .frame(width: 8.5*300, height: 11.0*300, alignment: .center)
            .scaleEffect(72.0/300.0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we render SwiftUICanvasView (at 300 DPI) to a PDF, the PDF and its objects will no longer be blurry.&lt;/p&gt;
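&lt;p&gt;Putting this together with the earlier exportToPDF listing: the hosting view is now laid out at the full 300 DPI size, and the PDF context (whose page is in 72-points-per-inch units) is scaled down before the layer is rendered. The following is a minimal sketch under those assumptions, reusing the outputFileURL and SwiftUICanvasView names from above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let dpi: CGFloat = 300.0
let scale = 72.0 / dpi

//Host the canvas at the full 300 DPI pixel size
let pdfVC = UIHostingController(rootView: SwiftUICanvasView())
pdfVC.view.frame = CGRect(x: 0, y: 0, width: 8.5 * dpi, height: 11.0 * dpi)

//The PDF page itself stays at 8.5 x 11 inches (72 points per inch)
let pageRect = CGRect(x: 0, y: 0, width: 8.5 * 72.0, height: 11.0 * 72.0)
let pdfRenderer = UIGraphicsPDFRenderer(bounds: pageRect)

try pdfRenderer.writePDF(to: outputFileURL, withActions: { (context) in
    context.beginPage()
    //Scale the 300 DPI view down into the 72-point page coordinates
    context.cgContext.scaleBy(x: scale, y: scale)
    pdfVC.view.layer.render(in: context.cgContext)
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;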

&lt;h2&gt;
  
  
  The Additional Complexity
&lt;/h2&gt;

&lt;p&gt;The design of rendering a canvas at a higher DPI and then scaling it down for viewing will undoubtedly increase the app's resource requirements. There is also additional development complexity. For example, text that used to be rendered at a font size of 16 (at 72 DPI) will require a font size of 16 * 300.0/72.0 when rendered on a 300 DPI canvas. This translation of font sizes will need to be managed. The code below shows one way of doing so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class TextX: ShapeX {

    //@Published var fontSize:CGFloat = 16.0
    @Published var fontSize:CGFloat = 16.0*300.0/72.0

}        
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What have we done?
&lt;/h2&gt;

&lt;p&gt;In summary, we detailed the use of a SwiftUI View as a drawing canvas for rendering shapes and text and for handling gestures and events. We drew the SwiftUI View at 300 DPI and used "scaleEffect" to scale it down for viewing in the app. We then rendered the SwiftUI View (at 300 DPI) into a PDF, whose page is laid out at 72 points per inch. We also highlighted the additional complexity of doing so.&lt;/p&gt;

&lt;p&gt;Is the above design good for everyone developing a WYSIWYG app? Definitely not! But I hope at least some parts are useful to anyone considering the development of a WYSIWYG app with SwiftUI. For us, we have used it to develop a WYSIWYG label design and barcode printing app.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;sunt lacrimae rerum et mentem mortalia tangunt - aeneas&lt;/em&gt;&lt;/p&gt;

</description>
      <category>swift</category>
      <category>swiftui</category>
      <category>opensource</category>
      <category>wysiwyg</category>
    </item>
    <item>
      <title>.NET MAUI with .NET CLI on Windows 11 with Android Emulator</title>
      <dc:creator>dotnetmauiman</dc:creator>
      <pubDate>Fri, 21 Jan 2022 23:25:39 +0000</pubDate>
      <link>https://dev.to/dotnetmauiman/net-maui-with-net-cli-on-windows-11-with-android-emulator-3hnb</link>
      <guid>https://dev.to/dotnetmauiman/net-maui-with-net-cli-on-windows-11-with-android-emulator-3hnb</guid>
      <description>&lt;p&gt;This article describes how to develop a .NET MAUI app with the .NET CLI (Command Line Interface).&lt;/p&gt;

&lt;p&gt;.NET Multi-platform App UI (.NET MAUI) is a cross-platform framework for creating native mobile and desktop apps with C# and XAML that can run on Android, iOS, macOS, and Windows from a single shared code-base.&lt;/p&gt;

&lt;p&gt;1 Install .NET SDK.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dotnet.microsoft.com/en-us/download"&gt;https://dotnet.microsoft.com/en-us/download&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2 Install .NET MAUI workload with the dotnet CLI. A .NET "workload" is a collection of packs. Launch a command prompt and enter the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;dotnet workload install maui&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Fp15TcJu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pisb67apx2wty0qyysyt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Fp15TcJu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pisb67apx2wty0qyysyt.png" alt="dotnet maui workload install" width="880" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check that the MAUI workload installs successfully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;dotnet workload list&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RCeCKOAM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6lwb1paimiq738ok2ezc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RCeCKOAM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6lwb1paimiq738ok2ezc.png" alt="maui workload list" width="880" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3 Verify and install missing components with the maui-check command-line utility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;dotnet tool install -g redth.net.MAUI.check&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;maui-check&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;4 Create a new folder and a new MAUI app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;mkdir MyMauiApp&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;cd MyMauiApp&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;dotnet new maui&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;5 The "dotnet new maui" command generates a template for a .NET MAUI app with all the necessary project, C#, and XAML files. You can use any editor such as Notepad or Visual Studio Code to edit the files.&lt;/p&gt;

&lt;p&gt;If you are using Visual Studio Code, enter the following command in the Command Prompt to view the source code generated in the "MyMauiApp" folder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;code .&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;6 Start an Android Emulator with the following command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"C:\Program Files (x86)\Android\android-sdk\emulator\emulator.EXE" -no-boot-anim -avd pixel_5_-&lt;em&gt;api_30 -prop monodroid.avdname=pixel_5&lt;/em&gt;-_api_30&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;7 Run the MAUI app in the Android Emulator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;dotnet build -t:Run -f net6.0-android&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5aY7elPJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lqxp2i7j4zlv7chdkdyu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5aY7elPJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lqxp2i7j4zlv7chdkdyu.png" alt="maui android" width="802" height="1786"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>maui</category>
      <category>csharp</category>
      <category>programming</category>
    </item>
    <item>
      <title>Getting started with .NET MAUI (iOS) on an M1 Mac</title>
      <dc:creator>dotnetmauiman</dc:creator>
      <pubDate>Fri, 21 Jan 2022 05:12:51 +0000</pubDate>
      <link>https://dev.to/dotnetmauiman/getting-started-with-net-maui-ios-on-an-m1-mac-439p</link>
      <guid>https://dev.to/dotnetmauiman/getting-started-with-net-maui-ios-on-an-m1-mac-439p</guid>
      <description>&lt;p&gt;The easiest way of getting started with .NET MAUI development (iOS) on an M1 Mac.&lt;/p&gt;

&lt;p&gt;.NET Multi-platform App UI (.NET MAUI) is a cross-platform framework for creating native mobile and desktop apps with C# and XAML that can run on Android, iOS, macOS, and Windows from a single shared code-base.&lt;/p&gt;

&lt;p&gt;1 Install macOS Arm64 .NET SDK on M1 Mac.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dotnet.microsoft.com/en-us/download"&gt;https://dotnet.microsoft.com/en-us/download&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qQWkmdby--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h18haqx8k2nxq8vkb8k6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qQWkmdby--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h18haqx8k2nxq8vkb8k6.png" alt="Install .NET SDK" width="880" height="653"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, you can install the Visual Studio 2022 for Mac preview.&lt;/p&gt;

&lt;p&gt;2 Install .NET MAUI workload with the dotnet CLI. A .NET "workload" is a collection of packs. Launch "Terminal" and enter the following:&lt;/p&gt;

&lt;p&gt;dotnet workload install maui&lt;/p&gt;

&lt;p&gt;Note - You may need "sudo" in front of the command to enable the necessary security privileges.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a2jNBwcP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hipgaxflubuddn7kdedf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a2jNBwcP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hipgaxflubuddn7kdedf.png" alt="MAUI Workload" width="880" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check that the MAUI workload is installed successfully.&lt;/p&gt;

&lt;p&gt;dotnet workload list&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yJSl-5AN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1yhr62j76sh1d93ghv4b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yJSl-5AN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1yhr62j76sh1d93ghv4b.png" alt="dotnet workload list" width="880" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3 Create a new folder and a new MAUI app.&lt;/p&gt;

&lt;p&gt;mkdir MyMauiApp&lt;/p&gt;

&lt;p&gt;cd MyMauiApp&lt;/p&gt;

&lt;p&gt;dotnet new maui&lt;/p&gt;

&lt;p&gt;4 Run the MAUI app in an iOS Simulator.&lt;/p&gt;

&lt;p&gt;dotnet build -t:Run -f net6.0-ios&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hiMC1vK0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9qxn9n6a3x0ufddhfjb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hiMC1vK0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9qxn9n6a3x0ufddhfjb.png" alt="MAUI on iOS" width="880" height="1294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5 Install Visual Studio Code for Mac.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://code.visualstudio.com/Download"&gt;https://code.visualstudio.com/Download&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6 Launch Visual Studio Code and open the "MyMauiApp" folder to view the source code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S8ynTJsY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pngbcsprlqd0kici2c2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S8ynTJsY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pngbcsprlqd0kici2c2j.png" alt=".NET MAUI VSCODE" width="880" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>maui</category>
      <category>programming</category>
      <category>xamarin</category>
    </item>
  </channel>
</rss>
