<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Axel</title>
    <description>The latest articles on DEV Community by Axel (@7axel).</description>
    <link>https://dev.to/7axel</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1663074%2F9e83f651-9b27-4030-8a97-a04d592a2a1a.jpg</url>
      <title>DEV Community: Axel</title>
      <link>https://dev.to/7axel</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/7axel"/>
    <language>en</language>
    <item>
      <title>🧠 CNN from Scratch</title>
      <dc:creator>Axel</dc:creator>
      <pubDate>Fri, 08 Aug 2025 14:38:37 +0000</pubDate>
      <link>https://dev.to/7axel/cnn-from-scratch-49ji</link>
      <guid>https://dev.to/7axel/cnn-from-scratch-49ji</guid>
      <description>&lt;p&gt;This project is a simple &lt;strong&gt;Convolutional Neural Network (CNN)&lt;/strong&gt; implemented &lt;strong&gt;entirely from scratch&lt;/strong&gt; using only low-level libraries like NumPy, PIL, and SciPy—&lt;strong&gt;no deep learning frameworks&lt;/strong&gt; (e.g., TensorFlow or PyTorch) are used. It includes image preprocessing, convolution and pooling operations, ReLU and softmax activations, forward/backward propagation, and a fully connected classifier.&lt;/p&gt;

&lt;h2&gt;
  
  
  📦 Releases
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;th&gt;Latest&lt;/th&gt;
&lt;th&gt;Stable&lt;/th&gt;
&lt;th&gt;Test a trained model&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/77AXEL/CNN-FS/releases/tag/v0.1.1" rel="noopener noreferrer"&gt;0.1.1&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;&lt;a href="https://cnnfsmodel.pythonanywhere.com/cnnfs/v0.1.0/predict" rel="noopener noreferrer"&gt;Test&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/77AXEL/CNN-FS/releases/tag/v0.1.0" rel="noopener noreferrer"&gt;0.1.0&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;&lt;a href="https://cnnfsmodel.pythonanywhere.com/cnnfs/v0.1.0/predict" rel="noopener noreferrer"&gt;Test&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🚀 Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Manual image preprocessing (RGB separation, resizing, normalization)&lt;/li&gt;
&lt;li&gt;Handcrafted convolution and max-pooling operations&lt;/li&gt;
&lt;li&gt;Fully connected layers (L1, L2, and output)&lt;/li&gt;
&lt;li&gt;Softmax + Cross-Entropy Loss&lt;/li&gt;
&lt;li&gt;Mini-batch gradient descent with backpropagation&lt;/li&gt;
&lt;li&gt;Model saving/loading using &lt;code&gt;pickle&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Class prediction on new images&lt;/li&gt;
&lt;li&gt;Realtime training visualization using &lt;code&gt;matplotlib&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🖼 Dataset Structure
&lt;/h2&gt;

&lt;p&gt;Make sure your dataset folder is structured like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data/
├── class1/
│   ├── image1.png
│   ├── image2.png
├── class2/
│   ├── image1.png
│   ├── image2.png
├── class../
│   ├── ..
..
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each subfolder represents a class (e.g., &lt;code&gt;cat&lt;/code&gt;, &lt;code&gt;dog&lt;/code&gt;), and contains sample images.&lt;/p&gt;
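Because each subfolder doubles as a class label, the class list can be derived straight from the folder layout. A minimal sketch of that idea (the actual loader in CNN-FS may differ; `discover_classes` is a hypothetical helper name):

```python
import os

def discover_classes(dataset_path):
    """Return sorted class names, one per subfolder of the dataset root."""
    return sorted(
        entry for entry in os.listdir(dataset_path)
        if os.path.isdir(os.path.join(dataset_path, entry))
    )
```

The index of each name in the sorted list can then serve as the integer target label for training.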

&lt;blockquote&gt;
&lt;p&gt;To help you get started, we’ve included a &lt;a href="https://github.com/77AXEL/CNN-FS/tree/main/data" rel="noopener noreferrer"&gt;starter &lt;code&gt;data&lt;/code&gt; folder&lt;/a&gt; with example class directories.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  🧪 How It Works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Image Preprocessing&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Each image is resized to a fixed size and normalized.&lt;/li&gt;
&lt;li&gt;Filters (e.g., sharpening, edge detection) are applied using 2D convolution.&lt;/li&gt;
&lt;li&gt;ReLU activation and 2×2 max-pooling reduce spatial dimensions.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feature Vector&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Flattened pooled feature maps are fed into fully connected layers.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feedforward + Softmax&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Dense layers compute activations, followed by a softmax for classification.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backpropagation&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Gradients are computed layer by layer.&lt;/li&gt;
&lt;li&gt;Weights and biases are updated using basic gradient descent.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
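The steps above can be sketched in a few lines of NumPy/SciPy. This is an illustrative single-channel, single-filter forward pass, not the exact CNN-FS code; `forward`, `W`, and `b` are hypothetical names for one dense layer's weights and biases:

```python
import numpy as np
from scipy.signal import convolve2d

def relu(x):
    return np.maximum(0.0, x)

def max_pool2x2(x):
    """2x2 max-pooling; crops odd rows/cols so the reshape trick works."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())  # shift logits for numerical stability
    return e / e.sum()

def forward(image, kernel, W, b):
    """One conv -> ReLU -> pool -> flatten -> dense -> softmax pass."""
    feat = max_pool2x2(relu(convolve2d(image, kernel, mode="valid")))
    return softmax(W @ feat.ravel() + b)
```

Backpropagation then walks these operations in reverse, which is what `train_model` automates across the whole dataset.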




&lt;h2&gt;
  
  
  🛠 Setup
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;git+https://github.com/77AXEL/CNN-FS.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  ✅ Training
&lt;/h2&gt;

&lt;p&gt;Update and run the training block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;cnnfs.model&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CNN&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CNN&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;image_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;batch_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;h1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;h2&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;learning_rate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.001&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;epochs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;dataset_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# Your dataset folder path
&lt;/span&gt;    &lt;span class="n"&gt;max_image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# If not specified, the model will load all images for each class
&lt;/span&gt;    &lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]],&lt;/span&gt;
        &lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]],&lt;/span&gt;
        &lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# If not specified, the model will use its own default filters
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load_dataset&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;# Processes all images for each class to prepare them for later use in training
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;train_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;visualize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Starts model training based on the classes in your dataset with optional visualization support
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save_model&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;# Stores the trained model's weights and biases in a model.bin file
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🔍 Predicting New Images
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model.bin&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Load the trained model
&lt;/span&gt;&lt;span class="n"&gt;prediction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test_images/mycat.png&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Applies the trained model to classify the input image
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Predicted class:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prediction&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  💡 Example Filters Used
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ [0, -1,  0],   Sharpen
  [-1, 5, -1],
  [0, -1,  0] ]

[ [1,  0, -1],   Edge detection
  [1,  0, -1],
  [1,  0, -1] ]

[[-1, -1, -1],   Laplacian
 [-1,  8, -1],
 [-1, -1, -1] ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
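To see one of these filters in action, convolve it with a small image patch. A sketch using `scipy.signal.convolve2d` (SciPy is already one of the project's dependencies); convolving the sharpen kernel with a single bright pixel simply reproduces the kernel:

```python
import numpy as np
from scipy.signal import convolve2d

sharpen = np.array([[0, -1,  0],
                    [-1, 5, -1],
                    [0, -1,  0]])

# A flat 5x5 patch with one bright pixel in the centre.
patch = np.zeros((5, 5))
patch[2, 2] = 1.0

out = convolve2d(patch, sharpen, mode="valid")  # 3x3 response
```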






&lt;h2&gt;
  
  
  📊 Performance
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value (example)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Accuracy&lt;/td&gt;
&lt;td&gt;~90% (binary classification)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Epochs&lt;/td&gt;
&lt;td&gt;10–50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dataset&lt;/td&gt;
&lt;td&gt;Custom / ~8000 imgs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Note: a larger dataset and more training epochs typically lead to higher accuracy.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🧠 Concepts Demonstrated
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;CNNs without frameworks&lt;/li&gt;
&lt;li&gt;Data vectorization&lt;/li&gt;
&lt;li&gt;Forward and backward propagation&lt;/li&gt;
&lt;li&gt;Optimization from scratch&lt;/li&gt;
&lt;li&gt;One-hot encoding for multi-class classification&lt;/li&gt;
&lt;/ul&gt;
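As an example of the last concept above, one-hot encoding maps each integer class label to a basis vector so the softmax output and cross-entropy loss line up per class. A minimal NumPy sketch (`one_hot` is an illustrative name, not necessarily the function CNN-FS uses internally):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Map integer class labels to one-hot row vectors via identity-matrix indexing."""
    return np.eye(num_classes)[labels]
```

For instance, `one_hot([0, 2, 1], 3)` yields one row per label, with a 1 in the labelled column.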




&lt;h2&gt;
  
  
  📦 Dependencies
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://numpy.org" rel="noopener noreferrer"&gt;NumPy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pypi.org/project/pillow/" rel="noopener noreferrer"&gt;Pillow&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://scipy.org" rel="noopener noreferrer"&gt;SciPy&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📜 License
&lt;/h2&gt;

&lt;p&gt;MIT License — feel free to use, modify, and share.&lt;/p&gt;




&lt;h2&gt;
  
  
  🤝 Contributing
&lt;/h2&gt;

&lt;p&gt;PRs are welcome! You can help:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add evaluation functions&lt;/li&gt;
&lt;li&gt;Improve filter design&lt;/li&gt;
&lt;li&gt;Extend to grayscale input, or process multi-channel images separately&lt;/li&gt;
&lt;li&gt;Parallelize dataset loading&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;. Github repo: &lt;a href="https://github.com/77AXEL/CNN-FS" rel="noopener noreferrer"&gt;https://github.com/77AXEL/CNN-FS&lt;/a&gt;&lt;br&gt;
. Github page: &lt;a href="https://77axel.github.io/CNN-FS" rel="noopener noreferrer"&gt;https://77axel.github.io/CNN-FS&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Web2APK</title>
      <dc:creator>Axel</dc:creator>
      <pubDate>Thu, 27 Jun 2024 15:05:40 +0000</pubDate>
      <link>https://dev.to/7axel/web2apk-m4c</link>
      <guid>https://dev.to/7axel/web2apk-m4c</guid>
      <description>&lt;h4&gt;
  
  
  Presentation:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Our GitHub repository hosts a tool that automates the conversion of HTML, CSS, and JavaScript front-end projects into Android applications. It streamlines the process, letting developers port their web projects to Android without extensive manual effort and improving cross-platform development efficiency.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Installation:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;If Git is not installed, you can obtain the tool by clicking the &lt;a href="https://github.com/77AXEL/Web2APK/archive/refs/heads/main.zip" rel="noopener noreferrer"&gt;Download&lt;/a&gt; button.&lt;/li&gt;
&lt;li&gt;If Git is already installed, you can use this command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/77AXEL/Web2APK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Use
&lt;/h4&gt;

&lt;p&gt;To use the tool, follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1) Develop a front-end project similar to this example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnncxcj6htg2jpffegxzf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnncxcj6htg2jpffegxzf.png" alt=" " width="124" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2) Compress the project folder into a ZIP file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqnho5ysra9114samlqz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqnho5ysra9114samlqz.png" alt=" " width="777" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3) Navigate to the Web2APK directory and run this command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python web2apk.py -zip path_to_your_zip_file -icon path_to_your_desired_icon -name your_desired_app_name 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once you run this command, the tool will start compiling and building the APK file. After compiling, you will get output like this:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ap5eukwn3t21f86fo85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ap5eukwn3t21f86fo85.png" alt=" " width="220" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finally, you will find the compiled APK in the dist directory:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjmio3idhoilhwtt26zq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjmio3idhoilhwtt26zq.png" alt=" " width="207" height="108"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Note:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Using the WebP image format for the app icon is recommended.&lt;/li&gt;
&lt;li&gt;If you encounter any problem with the tool, check the &lt;code&gt;build.log&lt;/code&gt; and &lt;code&gt;sign.log&lt;/code&gt; files located in the &lt;code&gt;log&lt;/code&gt; folder.&lt;/li&gt;
&lt;li&gt;This tool requires the Java JDK and Android SDK to be installed, with the &lt;code&gt;JAVA_HOME&lt;/code&gt; and &lt;code&gt;ANDROID_HOME&lt;/code&gt; environment variables set.&lt;/li&gt;
&lt;li&gt;If you don't have them installed yet, use these links:
&lt;a href="https://www.oracle.com/java/technologies/javase/jdk17-archive-downloads.html" rel="noopener noreferrer"&gt;Java JDK&lt;/a&gt;
&lt;a href="https://developer.android.com/studio?gad_source=1&amp;amp;gclid=CjwKCAjw1emzBhB8EiwAHwZZxaDZomNDa979EuJ6E2Xjgrp4o-NiDyc36wXADYMinU0JmuodKHYPsBoCC40QAvD_BwE&amp;amp;gclsrc=aw.ds&amp;amp;hl=fr" rel="noopener noreferrer"&gt;Android SDK&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Platforms
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;Supported platforms: &lt;strong&gt;&lt;code&gt;Windows&lt;/code&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;code&gt;macOS&lt;/code&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;code&gt;Ubuntu/Debian/Kali/Parrot/Arch Linux&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;If you like this project, star or sponsor our repo on GitHub: &lt;a href="https://github.com/77AXEL/Web2APK" rel="noopener noreferrer"&gt;Web2APK&lt;/a&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.shields.io%2Fbadge%2FAuthor-A.X.E.L-red%3Fstyle%3Dflat-square%3B" width="98" height="20"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.shields.io%2Fbadge%2FOpen%2520Source-Yes-red%3Fstyle%3Dflat-square%3B" width="110" height="20"&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>android</category>
      <category>webtoapp</category>
      <category>html</category>
      <category>python</category>
    </item>
    <item>
      <title>NetDisco</title>
      <dc:creator>Axel</dc:creator>
      <pubDate>Fri, 21 Jun 2024 15:53:22 +0000</pubDate>
      <link>https://dev.to/7axel/netdisco-1ipo</link>
      <guid>https://dev.to/7axel/netdisco-1ipo</guid>
      <description>&lt;h2&gt;
  
  
  Presentation:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A versatile app for both Windows and Linux. It efficiently scans the LAN, displaying detailed information, such as IP addresses, MAC addresses, and host names, for every device connected to the same Wi-Fi network as the host PC. Running identically across operating systems, it provides valuable insight into network connectivity for both home and professional use.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Use:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;When you run the app, it scans your local LAN, systematically searching for connected devices. The interface shows progress as each device is identified, listing its IP address, MAC address, and host name. This real-time view covers every device on the same Wi-Fi network as your PC, helping you stay informed about your network's status and security.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Captures&lt;/em&gt;&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;
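NetDisco's internals aren't shown here, but the first step of any such scanner, enumerating every candidate address on the local subnet before probing each one, can be sketched with Python's standard library (`candidate_hosts` and the example subnet are illustrative, not NetDisco's actual code):

```python
import ipaddress

def candidate_hosts(cidr="192.168.1.0/24"):
    """List every usable host address on the given subnet (network and broadcast excluded)."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]
```

A real scanner would then probe each address (e.g. via ARP or ping) and resolve host names with something like `socket.gethostbyaddr`.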

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0z9l4955uq59aqsjsrm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0z9l4955uq59aqsjsrm.png" alt=" " width="602" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To initiate another scan, press the &lt;strong&gt;Update&lt;/strong&gt; button:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7wfpe45l2iiigyzkd3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7wfpe45l2iiigyzkd3y.png" alt=" " width="602" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Download:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt; You can download the app by pressing &lt;a href="https://github.com/77AXEL/NetDisco/raw/main/NetDisco.exe?download=" rel="noopener noreferrer"&gt;Download&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;On Windows, you can launch the executable directly. However, on Linux-based systems, you'll need a compatibility layer such as Wine to run the application:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wine NetDisco.exe
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup allows the app to function seamlessly across different operating environments, ensuring compatibility and accessibility for users on varying platforms.&lt;/p&gt;

&lt;h2&gt;
  
  
  URLs:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You can also view the project on GitHub: &lt;a href="https://github.com/77AXEL/NetDisco/" rel="noopener noreferrer"&gt;https://github.com/77AXEL/NetDisco/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>networking</category>
      <category>windows</category>
      <category>linux</category>
      <category>python</category>
    </item>
  </channel>
</rss>
