<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aditya Pratap Singh</title>
    <description>The latest articles on DEV Community by Aditya Pratap Singh (@devmrfitz).</description>
    <link>https://dev.to/devmrfitz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F745358%2F30d3b143-cc58-4e30-9287-2cf5d18e2b0b.jpeg</url>
      <title>DEV Community: Aditya Pratap Singh</title>
      <link>https://dev.to/devmrfitz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/devmrfitz"/>
    <language>en</language>
    <item>
      <title>Accelerating Media Processing with Media Master, using Courier and Azure Functions</title>
      <dc:creator>Aditya Pratap Singh</dc:creator>
      <pubDate>Fri, 02 Dec 2022 04:29:09 +0000</pubDate>
      <link>https://dev.to/devmrfitz/accelerating-media-processing-with-media-master-using-courier-and-azure-functions-1f3l</link>
      <guid>https://dev.to/devmrfitz/accelerating-media-processing-with-media-master-using-courier-and-azure-functions-1f3l</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/TcRUcUhXtXM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Currently, to process images or videos, servers must process the media while it is being uploaded, which takes a lot of processing power, memory, and time. Media Master instead processes videos and images after they are uploaded to a temporary folder, then transfers the processed files to permanent storage.&lt;/p&gt;

&lt;p&gt;Any website that accepts media content (photos, videos, etc.) performs some form of background processing on that media. If this processing runs in the main thread of the web server, it can lead to horrible response times and increases susceptibility to a Denial of Service attack. Operations like compression, validation, re-encoding and watermarking are often essential to the services of many popular web apps. Even more important is the guarantee of being notified whenever something goes wrong in crucial operations like these.&lt;/p&gt;

&lt;p&gt;I was motivated to create this program after learning about the flow of video processing on YouTube. So, what exactly does it do? When a user uploads a video, the clip is kept temporarily for processing, and an acknowledgement is sent to the user right away. The user is not obliged to be online during the processing. The video is processed in stages and layers, and the relevant outcome is released publicly upon completion of each phase. For example, if a video has been processed and finalised at 144p, it will be immediately accessible for streaming at that resolution without waiting for the 240p or 480p render versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Tools used&lt;/u&gt;&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Azure Functions: Azure Functions is the ideal tool for asynchronous background tasks like ours. It is a serverless solution that lets you write less code and maintain less infrastructure. Azure Functions provides "compute on-demand" in two ways:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;It lets you implement your system's logic as readily available blocks of code called "functions". Different functions can run whenever required, in response to critical events.&lt;/li&gt;
&lt;li&gt;As requests increase, Azure Functions meets the demand with as many resources and function instances as necessary, but only while needed. As requests fall, the extra resources and application instances drop off automatically.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Courier: Courier provides an amazing suite of notification integrations. It is an API and web studio that lets development teams manage all product-triggered communications (email, chat, in-app, SMS, push, etc.) in one place. This is how Courier works:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Application events are sent to Courier via its API or SDK.&lt;/li&gt;
&lt;li&gt;Courier receives and processes each event, which carries information about the notification's content and recipient.&lt;/li&gt;
&lt;li&gt;Courier renders the notification template and sends it through the appropriate provider (over 60 providers are supported across all channels).&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Roadmap
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Storing the files&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Detecting uploads to the storage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Converting media files&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Processing the files&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sending feedback to users on successful or failed uploads&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Instructions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Part 1: Storing Files
&lt;/h3&gt;

&lt;p&gt;The first step was to store files for processing. I found Azure Blob Storage to be an ideal choice, as it scales flexibly for high-performance computing and is well secured through Azure Active Directory authentication, role-based access control (RBAC), and encryption at rest.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy47nbgu0scin79br1id8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy47nbgu0scin79br1id8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I imported BlobServiceClient and ContentSettings from azure.storage.blob to work with Azure Storage resources and blob containers. I set the container name (CONTAINER_NAME) to "devmrfitz", then called get_container_client to get a ContainerClient reference and upload the temporary file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import azure.functions as func

from azure.storage.blob import BlobServiceClient

import os 

AZURE_CONNECTION_STRING = os.getenv('AzureWebJobsStorage')

CONTAINER_NAME = "devmrfitz"

COURIER_API_KEY = os.getenv("COURIER_API_KEY") 

def main(myblob: func.blob.InputStream):

        blob_service_client: BlobServiceClient = BlobServiceClient.from_connection_string(AZURE_CONNECTION_STRING)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
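
&lt;p&gt;The snippet above stops at creating the client. As a rough sketch of how the container lookup and upload could continue (the helper names and the "processed/" naming convention here are my own, not necessarily what the repo uses):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def upload_to_container(connection_string, container_name, blob_name, data):
    """Upload bytes to an Azure Blob Storage container (sketch)."""
    # Imported lazily so the module can load even without azure-storage-blob installed.
    from azure.storage.blob import BlobServiceClient

    blob_service_client = BlobServiceClient.from_connection_string(connection_string)
    # get_container_client returns a ContainerClient for the named container.
    container_client = blob_service_client.get_container_client(container_name)
    container_client.upload_blob(name=blob_name, data=data, overwrite=True)

def destination_name(source_path):
    """Build the destination blob name from the temp file path (hypothetical convention)."""
    return "processed/" + source_path.rsplit("/", 1)[-1]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;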



&lt;h3&gt;
  
  
  Part 2: Detecting uploads on the storage
&lt;/h3&gt;

&lt;p&gt;To detect uploads to Azure Blob Storage, we used Azure Functions. Basically, it is a serverless compute service that runs event-triggered code, where the event, in our case, is an upload. It runs a script or piece of code in response to a variety of events. I imported azure.functions and used azure.functions.InputStream to detect the file upload.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0rz9q13hrvewutuks7p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0rz9q13hrvewutuks7p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
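
&lt;p&gt;Under the hood, a Python Azure Function is wired to blob uploads through its function.json binding. A sketch of what such a binding could look like for this project (the path uses the container name from above; the exact file in the repo may differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "myblob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "devmrfitz/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;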

&lt;h3&gt;
  
  
  Part 3: Converting Media files
&lt;/h3&gt;

&lt;p&gt;Now, the tedious task was processing the video files, which FFmpeg made extremely easy. FFmpeg uses demuxers to read input files and extract packets of encoded data from them. For video files, ffmpeg keeps streams synchronized by tracking the lowest timestamp on any active input stream. Under the hood, it uses the libavfilter library to apply filters to raw audio and video.&lt;/p&gt;
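
&lt;p&gt;To make the staged-resolution idea from earlier concrete, here is a small helper of my own (not from the repo) that builds one ffmpeg command per target height; scale=-2:H scales to height H while keeping the aspect ratio and an even width:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def rendition_commands(ffmpeg_path, video_path, heights=(144, 240, 480)):
    """Build one ffmpeg command per target height, lowest resolution first."""
    commands = []
    for height in sorted(heights):
        output_path = f"{video_path.rsplit('.', 1)[0]}_{height}p.mp4"
        commands.append([
            ffmpeg_path, "-i", video_path,
            # scale=-2:H keeps the aspect ratio and an even width (required by most codecs)
            "-vf", f"scale=-2:{height}",
            "-c:a", "copy",
            output_path,
        ])
    return commands
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Each command can then be handed to subprocess.run exactly like the calls in Part 4, publishing every rendition as soon as it finishes.&lt;/p&gt;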

&lt;h3&gt;
  
  
  Part 4: Processing the files
&lt;/h3&gt;

&lt;p&gt;To support inversion, resizing, watermarking, trimming and compression of videos and images, I used Pillow (PIL), the Python Imaging Library, which contains all the methods my use case needed.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Image inversion: &lt;code&gt;ImageOps.invert(image)&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Image compression: &lt;code&gt;image.save(destination_path, optimize=True, quality=quality)&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Image resizing: &lt;code&gt;image.resize((width, height))&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Image watermarking on an image: &lt;code&gt;image.paste(watermark, (0, 0), watermark)&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Text watermarking on an image: &lt;code&gt;image.paste(watermark, (px, py, px + wx, py + wy), watermark)&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Trimming videos: &lt;code&gt;subprocess.run([FFMPEG_PATH, '-accurate_seek', '-i', video_path, '-ss', start_time, '-to', end_time, '-c', 'copy', output_path], capture_output=True, text=True)&lt;/code&gt; (note: &lt;code&gt;-accurate_seek&lt;/code&gt; is an input option, so it goes before &lt;code&gt;-i&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Compressing videos: &lt;code&gt;subprocess.run([FFMPEG_PATH, '-i', video_path, '-c:v', 'libx265', '-crf', str(crf), '-c:a', 'copy', output_path], capture_output=True, text=True)&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
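
&lt;p&gt;Tying a few of these calls together, a minimal runnable sketch (assuming Pillow is installed; the wrapper function names are mine, not from the repo):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import io

from PIL import Image, ImageOps

def invert_image(image):
    """Return a colour-inverted copy (ImageOps.invert works on RGB images)."""
    return ImageOps.invert(image.convert("RGB"))

def resize_image(image, width, height):
    """Return a resized copy."""
    return image.resize((width, height))

def compress_image(image, quality=60):
    """Save as an optimized JPEG and return the compressed bytes."""
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", optimize=True, quality=quality)
    return buffer.getvalue()
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;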

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6yudz9cj5vis90d86i1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6yudz9cj5vis90d86i1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Part 5: Sending feedback
&lt;/h3&gt;

&lt;p&gt;There needs to be a feedback mechanism that informs the user as well as the server about a successful upload and the processing that follows it. To implement this, I used Courier, a multi-channel notification service that can notify users via email (my choice for this project), Discord, Slack, and more.&lt;/p&gt;

&lt;p&gt;Courier passes messages to Integrations via the &lt;a href="https://www.courier.com/docs/reference/send/message/" rel="noopener noreferrer"&gt;Send endpoint&lt;/a&gt;. We must send an Authorization header with each request. The Courier Send API also requires an event. The authorization token and event values are the "Auth Token" and "Notification ID" we see in the detail view of our “Test Appointment Reminder” event. Click the gear icon next to the Notification's name to reveal them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftdtrqxwkgfre9dhjs14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftdtrqxwkgfre9dhjs14.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These variables can finally be fed into &lt;a href="https://pypi.org/project/trycourier/" rel="noopener noreferrer"&gt;Courier's Python SDK&lt;/a&gt; to facilitate simple notification sending.&lt;/p&gt;

&lt;p&gt;Courier works by taking in an event as input via an API, which is in our case, a successful upload. The event is accompanied with the data required for the feedback and details of the recipient. It then generates a notification and sends it through the channel specified. Here, I have chosen emails to be the channel for receiving notification.&lt;/p&gt;
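
&lt;p&gt;The message structure the snippets below pass to Courier can be captured in a small helper. This builder is just my sketch of the payload shape, not part of Courier's SDK:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def build_email_message(recipient_email, title, body_template, data):
    """Assemble a Courier-style message dict routed to the email channel."""
    return {
        "to": {"email": recipient_email},
        "content": {"title": title, "body": body_template},
        # Values referenced as {{placeholders}} in the body template
        "data": data,
        "routing": {"method": "single", "channels": ["email"]},
    }
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;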

&lt;p&gt;Imported the Courier client from the trycourier package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from trycourier import Courier
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Added all the metadata:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
try:
    ...
except UnidentifiedImageError as e:
    # Send courier notification
    if "email" in metadata:
        courier_client = Courier(auth_token=COURIER_API_KEY)
        courier_client.send_message(
            message={
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Receiver’s email
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            "to": {
            "email": metadata["email"],

            },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Subject and body content
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            "content": {

            "title": "Media Master Warning! Unidentified Image detected",

            "body": "Hello {{emailPrefix}},\n\nAn unidentified file ({{name}}) was uploaded as an image and was not processed. The error generated was: \n\n\n {{error}} \n\nThanks,\nMedia Master",

            },

            "data": {

            "emailPrefix": metadata["email"].split("@")[0],

            "name": myblob.name,

            "error": str(e),

            },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Specific channel
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            "routing": {

            "method": "single",

            "channels": ["email"],

            },

        }

        )

    logging.warning(f"Unidentified image: {e}")

except UnidentifiedVideoError as e:

    # Send courier notification

    if "email" in metadata:

        courier_client = Courier(auth_token=COURIER_API_KEY)

        courier_client.send_message(

        message={

            "to": {

            "email": metadata["email"],

            },

            "content": {

            "title": "Media Master Warning! Unidentified Video detected",

            "body": "Hello {{emailPrefix}},\n\nAn unidentified file ({{name}}) was uploaded as a video and was not processed. The error generated was: \n\n\n {{error}} \n\nThanks,\nMedia Master",

            },

            "data": {

            "emailPrefix": metadata["email"].split("@")[0],

            "name": myblob.name,

            "error": str(e),                

            },

            "routing": {

            "method": "single",

            "channels": ["email"],

            },

        }

        )

    logging.warning(f"Unidentified video: {e}")        

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Our fast, lightweight and easy-to-use media processing service is ready to use and can help a lot of students and professionals in their day-to-day hustle.&lt;/p&gt;

&lt;p&gt;What new features and improvements can you think of for Media Master? Pull requests and forks are always welcome at the GitHub repo.&lt;/p&gt;

&lt;h2&gt;
  
  
  About the Author
&lt;/h2&gt;

&lt;p&gt;I'm Aditya Pratap Singh, a full stack developer and a junior at IIIT Delhi, India. I have worked with various languages and frameworks, all the way from JavaScript to C++. Hit me up &lt;a class="mentioned-user" href="https://dev.to/devmrfitz"&gt;@devmrfitz&lt;/a&gt; on any popular social platform (&lt;a href="https://linktr.ee/devmrfitz" rel="noopener noreferrer"&gt;https://linktr.ee/devmrfitz&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.courier.com/docs/guides/getting-started/send-message/" rel="noopener noreferrer"&gt;https://www.courier.com/docs/guides/getting-started/send-message/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/products/storage/blobs/" rel="noopener noreferrer"&gt;https://azure.microsoft.com/en-us/products/storage/blobs/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/developer/python/sdk/azure-sdk-overview" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/azure/developer/python/sdk/azure-sdk-overview&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-in/products/functions/" rel="noopener noreferrer"&gt;https://azure.microsoft.com/en-in/products/functions/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/devmrfitz/media-master" rel="noopener noreferrer"&gt;https://github.com/devmrfitz/media-master&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Background media processing with automated Courier notifications</title>
      <dc:creator>Aditya Pratap Singh</dc:creator>
      <pubDate>Fri, 14 Oct 2022 17:43:45 +0000</pubDate>
      <link>https://dev.to/devmrfitz/background-media-processing-with-automated-courier-notifications-2o1n</link>
      <guid>https://dev.to/devmrfitz/background-media-processing-with-automated-courier-notifications-2o1n</guid>
      <description>&lt;p&gt;Any website that accepts media content (photos, videos, etc) does some or the other form of background processing on said media. Operations like compression, validation, re-encoding and watermarking are often essential to services of many popular web apps. Even more important is the surety of being notified whenever something goes wrong in crucial operations like these.&lt;/p&gt;

&lt;p&gt;Hence, while participating in the Courier hackathon, I thought of building something that might help the developer community. While researching serverless functions, I realised that they are built to handle short, bursty workloads asynchronously. When I asked my experienced seniors about these kinds of workloads, they told me that managing media files in website backends is a huge task that could use the asynchronous nature of serverless functions.&lt;/p&gt;

&lt;p&gt;The first step is to decide the types of media we'll be operating on. Since I was building a simple PoC, I decided to stick with the standards: photos and videos. Next, we need to decide which operations our utility will support. Here are the ones I went with (feel free to choose your own):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Compression&lt;/li&gt;
&lt;li&gt;Watermarking&lt;/li&gt;
&lt;li&gt;Inversion&lt;/li&gt;
&lt;li&gt;Resizing&lt;/li&gt;
&lt;li&gt;Trimming&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, we'll build standalone functions to support these operations, as can be seen in &lt;code&gt;utils.py&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Finally, onto the most important thing. All code has edge cases it can fail on. We need to be certain that we'll be notified of every failure so we can take appropriate action if and when an exception occurs. Hence, we'll wrap the relevant parts of the code in a try/except block. &lt;/p&gt;
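
&lt;p&gt;The wrapping pattern looks roughly like this; &lt;code&gt;notify_admin&lt;/code&gt; stands in for whatever Courier call you wire up, and the names are illustrative rather than taken from &lt;code&gt;utils.py&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def process_with_alerts(operation, media_path, notify_admin):
    """Run one media operation; report any failure instead of crashing."""
    try:
        return operation(media_path)
    except Exception as error:
        # Surface the failure (e.g. via Courier) and signal it to the caller.
        notify_admin(f"Processing of {media_path} failed: {error}")
        return None
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;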

&lt;p&gt;This is where the beauty of Courier's seamless notification API comes in. It allows us to easily integrate the code to send a variety of notifications to admins (i.e. us) as well as users about errors in processing of files. It supports a variety of notification services, from emails to Slack to platform native notifications. &lt;/p&gt;

&lt;p&gt;Courier passes messages to Integrations via the &lt;a href="https://www.courier.com/docs/reference/send/message/"&gt;Send endpoint&lt;/a&gt;. We must send an Authorization header with each request. The Courier Send API also requires an event. The authorization token and event values are the "Auth Token" and "Notification ID" we see in the detail view of our “Test Appointment Reminder” event. Click the gear icon next to the Notification's name to reveal them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HE6Z1kyY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ftdtrqxwkgfre9dhjs14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HE6Z1kyY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ftdtrqxwkgfre9dhjs14.png" alt="Image description" width="880" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These variables can finally be fed into &lt;a href="https://pypi.org/project/trycourier/"&gt;Courier's Python SDK&lt;/a&gt; to facilitate simple notification sending.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>To GSoC and beyond...</title>
      <dc:creator>Aditya Pratap Singh</dc:creator>
      <pubDate>Mon, 26 Sep 2022 10:44:57 +0000</pubDate>
      <link>https://dev.to/devmrfitz/to-gsoc-and-beyond-4m34</link>
      <guid>https://dev.to/devmrfitz/to-gsoc-and-beyond-4m34</guid>
      <description>&lt;p&gt;I've been an avid open source contributor over the past year. It is a great experience finding interesting open source projects that I love and contributing to them in whatever small ways I can. Although development is satisfying in itself, open-source events like Hacktoberfest feel like a great opportunity to get recognised by the community for our contribution towards the codebase as well as other aspects of popular FOSS projects.&lt;/p&gt;

&lt;p&gt;One such major opportunity is the Google Summer of Code. GSoC is a dream for most students engaged in development, as it not only provides major motivation to commit ourselves to open source development, but is also one of the few internship-like experiences many students get. Like most students, I was very eager to get accepted as a GSoC contributor, not only this year, but last year as well. However, for multiple reasons, I was not selected then. So from early this year, I started looking into past GSoC organizations whose tech stacks and fields of work matched my interests and capabilities. I eventually narrowed down my search to two organizations - The Honeynet Project and CCExtractor. Both of these orgs are doing amazing work in their respective domains - The Honeynet Project has a variety of well-developed cybersecurity tools under its umbrella, while the prowess of CCExtractor is well known in its own right.&lt;/p&gt;

&lt;p&gt;I started contributing to both these organizations in February of this year. A major mistake I'd made when trying for GSoC last year was not engaging with the project's community and maintainers and, beyond solving issues on my own, not helping others solve theirs. This time, I made sure to be part of both projects' communities from the very start. I picked some issues from GitHub, worked on them, asked the community about the setbacks I faced and helped with some issues other people were facing. Actively engaging with the community made open source development a lot easier as well as less monotonous. I was able to solve more issues than I otherwise would have. I submitted proposals to both organizations. Then the waiting period began...&lt;/p&gt;

&lt;p&gt;The selections of GSoC were finally announced towards the end of May, and I anxiously checked the results. I was selected as a contributor with The Honeynet Project 🎉🎉🎉&lt;/p&gt;

&lt;p&gt;The first step after the excitement calmed down was to look over the proposal I'd submitted and add its sub-deadlines to my calendar. Then the coding finally began. Since this was my first time working on a long project with a globally distributed community, I ran into some issues with properly communicating my progress, as well as my next intended steps, to my GSoC mentor. However, after my first PR, my mentor, Matteo, promptly recognised the issues I was having and helped me address them. He told me where I was going wrong and how I could do better. My work got a lot smoother once I implemented his suggestions, and I made a total of 5 major successful PRs by my mid-evaluation deadline.&lt;/p&gt;

&lt;p&gt;Here's an overview of the features I had implemented by my mid-evaluation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Allowed bulk analysis of files as well as observables, leading to a more efficient workflow for IntelOwl users. &lt;a href="https://github.com/intelowlproject/IntelOwl/pull/1032"&gt;#1032&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improved the rendering of JSON job result data &lt;a href="https://github.com/intelowlproject/IntelOwl/pull/1051"&gt;#1051&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edited &lt;code&gt;FileInfo&lt;/code&gt; analyzer to add some more potentially useful hashes. &lt;a href="https://github.com/intelowlproject/IntelOwl/pull/1073"&gt;#1073&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implemented a feature that facilitated editing of plugin parameters directly from the GUI, as well as setting default parameters at the organization level, thus making IntelOwl more customizable and easier to use. &lt;a href="https://github.com/intelowlproject/IntelOwl/pull/1095"&gt;#1095&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Added an &lt;code&gt;extends&lt;/code&gt; field to config files. This allowed configs to extend from other similar configs, thus eliminating unnecessary code duplication. &lt;a href="https://github.com/intelowlproject/IntelOwl/pull/1119"&gt;#1119&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I still felt I was going slightly wrong somewhere, and my mid-evaluation report pointed me to the problem: I was again not communicating enough, the very same mistake I had made last year. Given the distributed nature of any popular FOSS project's community, it is essential that seemingly minor implementation decisions (like the positioning of a button) be discussed before they are actually coded. This helps the community's developers avoid writing, as well as peer-reviewing, unnecessary code.&lt;/p&gt;

&lt;p&gt;So, from the next PR onward, I started being much more involved in regular discussions about what I had done, and more importantly, what I was going to do. Instead of asking for the maintainers' peer review at the very end, I started pinging them whenever I felt a distinct enough sub-feature had been pushed. This allowed me to produce much more useful code in much less overall time.&lt;/p&gt;

&lt;p&gt;Finally we come to today's status. I'm almost done with all the issues I set out to tackle. I've merged one more major PR and there's another one still in the Pending for Review list. Hopefully, it will be merged soon too. The major features introduced in these PRs are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Added "Login with Google" support to IntelOwl to allow easier and secure user onboarding. &lt;a href="https://github.com/intelowlproject/IntelOwl/pull/1129"&gt;#1129&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;[The pending one] Allowed secure editing of plugin secrets from IntelOwl's GUI. Earlier, the secrets could only be edited by manually modifying a specific file on the server. Also, implemented organization-level secrets. &lt;a href="https://github.com/intelowlproject/IntelOwl/pull/1136"&gt;#1136&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The merging of this final PR will conclude my GSoC journey. I learnt a lot more than I initially set out to. The most valuable lessons are, in fact, not even technical in nature. I learnt how to work better with a team, especially one spread across various timezones.&lt;/p&gt;

&lt;p&gt;As an ending note, I would like to give a huge thanks to my mentor, Matteo Lodi. He was with me every step of the way, even when my work was not up to the mark. &lt;br&gt;
Thanks a lot Matteo :)&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>opensource</category>
      <category>beginners</category>
      <category>gsoc</category>
    </item>
  </channel>
</rss>
