<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ajit Singh</title>
    <description>The latest articles on DEV Community by Ajit Singh (@ajitsinghkaler).</description>
    <link>https://dev.to/ajitsinghkaler</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F298681%2Fcea07f4f-852d-44d2-84c5-b5806a5dd542.jpeg</url>
      <title>DEV Community: Ajit Singh</title>
      <link>https://dev.to/ajitsinghkaler</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ajitsinghkaler"/>
    <language>en</language>
    <item>
      <title>Understanding TUS and imgproxy in supabase/storage</title>
      <dc:creator>Ajit Singh</dc:creator>
      <pubDate>Wed, 15 Jan 2025 08:22:35 +0000</pubDate>
      <link>https://dev.to/ajitsinghkaler/understanding-tus-and-imgproxy-in-supabsestorage-2c00</link>
      <guid>https://dev.to/ajitsinghkaler/understanding-tus-and-imgproxy-in-supabsestorage-2c00</guid>
      <description>&lt;h1&gt;TUS and Imgproxy&lt;/h1&gt;

&lt;p&gt;Today we will look at how external components like TUS and Imgproxy are used in the Supabase Storage engine: TUS handles resumable uploads, and Imgproxy handles image transformations. Let's dive into the details.&lt;/p&gt;

&lt;h2&gt;1. TUS Protocol for Resumable Uploads&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; The TUS protocol is implemented to handle resumable uploads, particularly useful for large files or unreliable network connections. This ensures that uploads can be paused and resumed without losing progress.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key Use Cases:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Large Files:&lt;/strong&gt; When a client uploads large files through TUS, the upload is broken down into smaller, manageable chunks, allowing for resumable transfers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unreliable Networks:&lt;/strong&gt; Uploads survive connectivity issues, because a transfer can resume once the network becomes available again.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multipart Upload Abstraction:&lt;/strong&gt; For large files, TUS acts as a high-level protocol abstraction over S3's multipart uploads.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Lifecycle:&lt;/strong&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Initialization (TUS Client):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The client application uses a TUS client, such as &lt;code&gt;tus-js-client&lt;/code&gt;, which initiates an upload by sending a POST request to the &lt;code&gt;/upload/resumable&lt;/code&gt; endpoint.&lt;/li&gt;
&lt;li&gt;The POST request includes an &lt;code&gt;Upload-Length&lt;/code&gt; header specifying the file's total size, plus additional metadata about the file.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Request (&lt;code&gt;/upload/resumable&lt;/code&gt;, HTTP Layer):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Extraction:&lt;/strong&gt; The TUS middleware detects a new upload using the &lt;code&gt;Upload-Length&lt;/code&gt; and &lt;code&gt;Upload-Metadata&lt;/code&gt; headers in the request and calls the &lt;code&gt;namingFunction&lt;/code&gt;, which does the following:&lt;/li&gt;
&lt;li&gt;Parses the metadata from the header, including the &lt;code&gt;bucketName&lt;/code&gt; and &lt;code&gt;objectName&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Validates that the bucket and key are valid using &lt;code&gt;mustBeValidBucketName&lt;/code&gt; and &lt;code&gt;mustBeValidKey&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Creates a new &lt;code&gt;UploadId&lt;/code&gt; from the &lt;code&gt;tenantId&lt;/code&gt;, &lt;code&gt;bucketName&lt;/code&gt;, &lt;code&gt;objectName&lt;/code&gt;, and a &lt;code&gt;version&lt;/code&gt; (a UUID generated on every request).&lt;/li&gt;
&lt;li&gt;Encodes the new &lt;code&gt;UploadId&lt;/code&gt; as a base64url string and adds it to the response &lt;code&gt;Location&lt;/code&gt; header to form the unique upload URL.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging:&lt;/strong&gt; The TUS server logs when the upload begins, as well as the request headers and parameters using Pino.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security:&lt;/strong&gt; The request is authenticated, and the user's authorization to create the object is verified.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-Upload Checks (&lt;code&gt;onCreate&lt;/code&gt; hook):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;onCreate&lt;/code&gt; hook (in &lt;code&gt;src/http/routes/tus/lifecycle.ts&lt;/code&gt;) is called before the upload happens; it performs the following:&lt;/li&gt;
&lt;li&gt;Saves the TUS upload data to the file system or S3.&lt;/li&gt;
&lt;li&gt;Validates the content type, if available.&lt;/li&gt;
&lt;li&gt;Aborts the upload if the file size exceeds the configured maximum.&lt;/li&gt;
&lt;li&gt;Returns the headers for the response.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uploading Parts (PUT/PATCH &lt;code&gt;/upload/resumable/{uploadId}&lt;/code&gt;, TUS Layer):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Request Check:&lt;/strong&gt; Each individual PUT or PATCH request is checked against the &lt;code&gt;onIncomingRequest&lt;/code&gt; middleware, which validates if:&lt;/li&gt;
&lt;li&gt;The request is for a signed upload URL.&lt;/li&gt;
&lt;li&gt;The request has valid authentication.&lt;/li&gt;
&lt;li&gt;The file size is within the limit, validated via the &lt;code&gt;Upload-Offset&lt;/code&gt;, &lt;code&gt;Upload-Length&lt;/code&gt;, or &lt;code&gt;content-length&lt;/code&gt; headers against the global configuration or the tenant limit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Handling:&lt;/strong&gt; The &lt;code&gt;TUS&lt;/code&gt; middleware (in &lt;code&gt;@tus/server&lt;/code&gt;) handles each PUT/PATCH request to upload a part or a chunk of the file.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data Storage:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;If using &lt;code&gt;S3Store&lt;/code&gt;, parts are uploaded to the S3 compatible object storage.

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;UploadPartCommand&lt;/code&gt; is used to send upload part requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tracing:&lt;/strong&gt; Each of the calls in the &lt;code&gt;S3Store&lt;/code&gt; has a trace using &lt;code&gt;ClassInstrumentation&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;If using &lt;code&gt;FileStore&lt;/code&gt;, parts are stored locally in the server's file system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metrics:&lt;/strong&gt; The &lt;code&gt;S3UploadPart&lt;/code&gt; histogram is used to record the time taken to complete the upload process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metadata Update:&lt;/strong&gt; When the S3 adapter is used, &lt;code&gt;updateMultipartUploadProgress&lt;/code&gt; persists the state of the multipart upload in the database.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;uploadPartCopy&lt;/code&gt; is used for copy operations, and the metadata attributes verify that the file was copied correctly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Concurrency Control:&lt;/strong&gt; To avoid concurrency issues during multi-part uploads, a mutex is created by the &lt;code&gt;PgLocker&lt;/code&gt; class to handle the concurrent uploads and avoid race conditions.&lt;/li&gt;
&lt;li&gt;The lock is acquired via Postgres's &lt;code&gt;pg_try_advisory_xact_lock&lt;/code&gt;, so the backend blocks and waits if the same upload ID is already being processed.&lt;/li&gt;
&lt;li&gt;The advisory lock ensures that concurrent uploads to the same file are done one at a time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Handling:&lt;/strong&gt; If any errors occur during part uploads or on the mutex locking, appropriate errors are handled, and a 4xx response is sent back; the error is also logged.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Upload Completion (PUT &lt;code&gt;/upload/resumable/{uploadId}&lt;/code&gt;, TUS Layer):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Once all parts of the file have been uploaded, the client sends a final PUT request which triggers the &lt;code&gt;onUploadFinish&lt;/code&gt; hook.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Object Creation:&lt;/strong&gt; The &lt;code&gt;completeMultipartUpload&lt;/code&gt; from the storage backend is called to assemble the parts into the final object in the file system or S3.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metadata Storage:&lt;/strong&gt; The system saves the file metadata into the &lt;code&gt;storage.objects&lt;/code&gt; table.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tracing:&lt;/strong&gt; A new span called &lt;code&gt;Storage.completeUpload&lt;/code&gt; is created by the &lt;code&gt;ClassInstrumentation&lt;/code&gt; plugin.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Webhook Trigger:&lt;/strong&gt; The &lt;code&gt;ObjectCreated&lt;/code&gt; event is fired using the queue.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Handling:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;If there is any failure during any phase of the upload, the server will return an error to the client.&lt;/li&gt;
&lt;li&gt;If a fatal error happens, the upload will be cleaned up using the &lt;code&gt;ObjectAdminDelete&lt;/code&gt; event.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cleanup:&lt;/strong&gt; Once the upload is completed (either finished or aborted), temporary data or the upload folder will be cleaned up.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Asynchronous Nature:&lt;/strong&gt; The TUS protocol is asynchronous and allows the server to handle large uploads concurrently. When an S3 upload finishes, the system relies on a background task to clean up any remaining temporary files.&lt;/li&gt;

&lt;/ul&gt;
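
&lt;p&gt;To make the create-request flow concrete, here is a rough TypeScript sketch of a TUS &lt;code&gt;namingFunction&lt;/code&gt;: parse &lt;code&gt;Upload-Metadata&lt;/code&gt;, validate the bucket name, and emit a base64url upload ID for the &lt;code&gt;Location&lt;/code&gt; header. The field names and validation rule are simplified assumptions for illustration, not the exact supabase/storage implementation.&lt;/p&gt;

```typescript
// Sketch of a TUS namingFunction: parse Upload-Metadata, validate names,
// and produce a base64url upload ID (simplified; details are assumptions).
import { randomUUID } from "node:crypto";

// TUS Upload-Metadata is a comma-separated list of "key base64(value)" pairs.
function parseUploadMetadata(header: string): { [key: string]: string } {
  const out: { [key: string]: string } = {};
  for (const pair of header.split(",")) {
    const [key, b64] = pair.trim().split(" ");
    if (key) out[key] = b64 ? Buffer.from(b64, "base64").toString("utf8") : "";
  }
  return out;
}

function mustBeValidBucketName(name: string): void {
  if (!/^[a-z0-9][a-z0-9_-]*$/i.test(name)) throw new Error(`invalid bucket: ${name}`);
}

function namingFunction(tenantId: string, uploadMetadataHeader: string): string {
  const meta = parseUploadMetadata(uploadMetadataHeader);
  mustBeValidBucketName(meta.bucketName);
  // A fresh version (UUID) is generated for every create request.
  const id = [tenantId, meta.bucketName, meta.objectName, randomUUID()].join("/");
  return Buffer.from(id, "utf8").toString("base64url"); // goes into the Location header
}
```

&lt;p&gt;A client would see the resulting string in the &lt;code&gt;Location&lt;/code&gt; header and use it for all subsequent PATCH requests.&lt;/p&gt;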

&lt;h2&gt;2. Imgproxy for Image Transformations&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Imgproxy is an external service used for on-demand image processing and transformations. It allows the Supabase Storage system to serve resized, cropped, and optimized images without requiring pre-generated versions of each object.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key Use Cases:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic Image Resizing:&lt;/strong&gt; When an image is requested with specified width and height parameters, imgproxy applies these transformations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image Format Conversion:&lt;/strong&gt; Imgproxy is used to convert images to optimized formats such as WebP or AVIF.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Watermarks and Other Effects:&lt;/strong&gt; Can handle other image transformations such as watermarks.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Lifecycle:&lt;/strong&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Image Request (HTTP Layer):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;A client requests a particular image via the &lt;code&gt;/render/image&lt;/code&gt; or a signed URL from the &lt;code&gt;/render/sign&lt;/code&gt; endpoint with a variety of transformations specified via URL parameters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Request Parameters:&lt;/strong&gt; Query parameters such as &lt;code&gt;width&lt;/code&gt;, &lt;code&gt;height&lt;/code&gt;, and &lt;code&gt;resize&lt;/code&gt; are parsed from the URL string by the &lt;code&gt;ImageRenderer&lt;/code&gt; class.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private Object URL Creation (Storage Layer):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The system calls the S3 storage backend to generate a presigned URL that allows &lt;code&gt;imgproxy&lt;/code&gt; to read the object from the bucket.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tracing:&lt;/strong&gt; A new span named &lt;code&gt;S3Backend.privateAssetUrl&lt;/code&gt; is created using the &lt;code&gt;ClassInstrumentation&lt;/code&gt; plugin.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance:&lt;/strong&gt; If there is a local file, this path is returned instead of the S3 one.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Imgproxy Request (HTTP Layer):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The system constructs a URL to the &lt;code&gt;imgproxy&lt;/code&gt; service based on transformation parameters. The URL includes parameters to configure imgproxy to perform operations such as resizing, cropping, and format conversion, as well as the signed internal URL to fetch the original image data.&lt;/li&gt;
&lt;li&gt;The HTTP client &lt;code&gt;Axios&lt;/code&gt; is used to call &lt;code&gt;imgproxy&lt;/code&gt; with the constructed URL.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tracing:&lt;/strong&gt; A new span named &lt;code&gt;axios.get&lt;/code&gt; is created with all the requested parameters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Request Timeout:&lt;/strong&gt; The connection is configured to have a timeout to avoid long-running calls and also to make use of connection pooling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Handling:&lt;/strong&gt; If the response indicates an error, the response body is read and included in the error message.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image Transformation (Imgproxy Service):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Imgproxy, using the signed URL, fetches the object and applies the requested transformations, storing it in memory before streaming it back.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response Streaming (HTTP Layer):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;imgproxy&lt;/code&gt; response stream (with the transformed image) is streamed back to the client. If a response is not successful, an error is raised.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Caching:&lt;/strong&gt; The caching headers such as &lt;code&gt;cache-control&lt;/code&gt; and &lt;code&gt;etag&lt;/code&gt; are extracted from the original request or the backend.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tracing:&lt;/strong&gt; The &lt;code&gt;axios.get&lt;/code&gt; span will be closed, either with a status of OK or with an error.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Asynchronous Nature:&lt;/strong&gt; The integration with &lt;code&gt;imgproxy&lt;/code&gt; involves making an asynchronous network call, but the use of HTTP streams makes the process non-blocking.&lt;/li&gt;

&lt;/ul&gt;
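
&lt;p&gt;The URL construction in step 3 can be sketched as follows. The signing scheme (HMAC-SHA256 over salt plus path, base64url-encoded) follows imgproxy's documented URL format, but the option string and key handling here are simplified assumptions; real deployments read the key and salt from &lt;code&gt;IMGPROXY_KEY&lt;/code&gt; and &lt;code&gt;IMGPROXY_SALT&lt;/code&gt;.&lt;/p&gt;

```typescript
// Sketch of building a signed imgproxy URL for a resize operation.
// The source URL (e.g. the presigned S3 URL) is base64url-encoded into the path.
import { createHmac } from "node:crypto";

function imgproxyUrl(
  sourceUrl: string,
  width: number,
  height: number,
  hexKey: string,
  hexSalt: string
): string {
  const encodedSource = Buffer.from(sourceUrl, "utf8").toString("base64url");
  const path = `/rs:fill:${width}:${height}/${encodedSource}`;
  const hmac = createHmac("sha256", Buffer.from(hexKey, "hex"));
  hmac.update(Buffer.from(hexSalt, "hex")); // salt first, then the path
  hmac.update(path);
  const signature = hmac.digest("base64url");
  return `/${signature}${path}`;
}
```

&lt;p&gt;The server would append this path to the &lt;code&gt;imgproxy&lt;/code&gt; base URL and stream the response back to the client.&lt;/p&gt;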

&lt;h2&gt;Usage Details:&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;TUS Workflow:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A client sends a POST request to &lt;code&gt;/upload/resumable&lt;/code&gt; to initiate a new upload with the total file size.&lt;/li&gt;
&lt;li&gt;The server creates a TUS upload resource and responds with a unique upload ID (URL).&lt;/li&gt;
&lt;li&gt;The client makes multiple PUT or PATCH requests to the &lt;code&gt;Location&lt;/code&gt; URL returned in step 2, uploading the file in smaller parts.&lt;/li&gt;
&lt;li&gt;The client makes a final PUT request to the &lt;code&gt;Location&lt;/code&gt; URL; the server then stitches the parts into one final object in S3 or on disk.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Imgproxy Workflow:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A client requests an image through the &lt;code&gt;/render/image&lt;/code&gt; endpoint, specifying transformation parameters in the query string.&lt;/li&gt;
&lt;li&gt;The server generates a signed URL for access to the original image in object storage using &lt;code&gt;privateAssetUrl&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The server calls &lt;code&gt;imgproxy&lt;/code&gt; with the transformation options and the signed URL.&lt;/li&gt;
&lt;li&gt;Imgproxy processes the image, and the server returns the processed image via a stream to the client.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
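
&lt;p&gt;The TUS workflow above can be modeled as a simple offset plan: each PATCH carries an &lt;code&gt;Upload-Offset&lt;/code&gt; equal to the bytes already stored, and the upload completes when the offset reaches &lt;code&gt;Upload-Length&lt;/code&gt;. This is an illustrative model of the protocol, not the actual client code.&lt;/p&gt;

```typescript
// Illustrative model of the resumable-upload loop: each PATCH carries an
// Upload-Offset header, and the server advances the offset by the chunk size.
interface PatchRequest {
  uploadOffset: number; // value of the Upload-Offset header
  chunkBytes: number;   // bytes carried in this PATCH body
}

function planPatches(uploadLength: number, chunkSize: number): PatchRequest[] {
  const patches: PatchRequest[] = [];
  let offset = 0;
  while (offset !== uploadLength) {
    const remaining = uploadLength - offset;
    const chunkBytes = remaining > chunkSize ? chunkSize : remaining;
    patches.push({ uploadOffset: offset, chunkBytes });
    offset += chunkBytes;
  }
  return patches;
}
```

&lt;p&gt;Resuming after an interruption is the same calculation: the client asks the server for the current offset (a HEAD request in TUS) and continues the plan from there.&lt;/p&gt;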

&lt;h2&gt;Key Takeaways:&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;TUS for Resumable Uploads:&lt;/strong&gt; This feature enhances file upload resilience and efficiency. It provides support for uploading big files and for unreliable connections.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Imgproxy for Dynamic Transformations:&lt;/strong&gt; This service enables flexible image transformations without altering original files and without the need for pre-generated files, saving costs and reducing complexity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Independent Services:&lt;/strong&gt; Both TUS and Imgproxy are integrated as external services rather than core dependencies of the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Streaming Data:&lt;/strong&gt; Streams are used to process large files efficiently and to have non-blocking I/O operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tracing:&lt;/strong&gt; Both integrations are instrumented with OpenTelemetry and can be visualized with a compatible backend.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Tenants in supabase/storage</title>
      <dc:creator>Ajit Singh</dc:creator>
      <pubDate>Wed, 15 Jan 2025 08:21:58 +0000</pubDate>
      <link>https://dev.to/ajitsinghkaler/tracing-and-metrics-in-supabasestorage-5dbc</link>
      <guid>https://dev.to/ajitsinghkaler/tracing-and-metrics-in-supabasestorage-5dbc</guid>
      <description>&lt;h1&gt;Tenant Management in Supabase/Storage&lt;/h1&gt;

&lt;p&gt;Let's look at how tenant data is managed within the Supabase Storage repository, focusing on how it is kept separate and how tenants are set up in a multi-tenant environment.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Tenant Data Separation Lifecycle&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;This system implements a multi-tenant architecture, which means that a single instance of the application serves multiple tenants (organizations, projects) while ensuring data isolation and security. Here's the lifecycle of how tenant data is kept separate:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Tenant Creation (Admin API):&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Admin Request&lt;/strong&gt;: An administrator makes a request using the multi-tenant admin API to create a new tenant. This typically involves sending a POST request to &lt;code&gt;/admin/tenants/{tenantId}&lt;/code&gt;.

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Authentication:&lt;/strong&gt; The request requires a valid admin API Key, which is validated using the &lt;code&gt;apiKey&lt;/code&gt; plugin.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Tenant Data:&lt;/strong&gt; The request body usually includes data such as:

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;tenantId&lt;/code&gt;: A unique identifier for the tenant (often a UUID).&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;anonKey&lt;/code&gt;: The anonymous public key for the tenant.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;databaseUrl&lt;/code&gt;: A specific URL to be used for a particular tenant.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;jwtSecret&lt;/code&gt;: A tenant-specific JWT secret to generate and validate JWTs.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;serviceKey&lt;/code&gt;: A service key used for the tenant to bypass row-level security for database reads and writes.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;features&lt;/code&gt;: Features for the tenant such as &lt;code&gt;imageTransformation&lt;/code&gt; and &lt;code&gt;s3Protocol&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Database Entry:&lt;/strong&gt; The request handler calls the &lt;code&gt;multitenantKnex&lt;/code&gt; client and inserts a new record in the &lt;code&gt;tenants&lt;/code&gt; table, located in the multitenant database. Note that the sensitive values are encrypted before being persisted in the database.&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Tracing:&lt;/strong&gt; If tracing is enabled for the admin server, a trace will be created named &lt;code&gt;tenants.create&lt;/code&gt; using the &lt;code&gt;ClassInstrumentation&lt;/code&gt; plugin.&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;
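
&lt;p&gt;The note above about encrypting sensitive values before they are persisted can be sketched with AES-256-GCM. The token layout and key handling here are assumptions for illustration; the repository's actual encryption helper may differ.&lt;/p&gt;

```typescript
// Sketch: encrypt tenant secrets (jwtSecret, serviceKey, databaseUrl) before
// inserting them into the tenants table. AES-256-GCM; key management simplified.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

function encryptSecret(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const body = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // store iv, auth tag, and ciphertext together as one opaque string
  return [iv, tag, body].map((b) => b.toString("base64url")).join(".");
}

function decryptSecret(token: string, key: Buffer): string {
  const [iv, tag, body] = token.split(".").map((s) => Buffer.from(s, "base64url"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(body), decipher.final()]).toString("utf8");
}
```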

&lt;ol start="2"&gt;
&lt;li&gt; &lt;strong&gt;Tenant Request (HTTP Layer):&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;x-forwarded-host&lt;/code&gt; Header:&lt;/strong&gt; When a request enters the main Storage service, the &lt;code&gt;tenantId&lt;/code&gt; plugin checks for the &lt;code&gt;x-forwarded-host&lt;/code&gt; header, which indicates the tenant to which the request belongs. If the header is not provided, a 400 error is returned.

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;RegExp Extraction&lt;/strong&gt;: The &lt;code&gt;REQUEST_X_FORWARDED_HOST_REGEXP&lt;/code&gt; configuration is used to extract a valid tenant ID from the header; if the header doesn't match a regexp, a 400 error is thrown.

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Tenant Id&lt;/strong&gt;: The extracted tenant ID is stored in &lt;code&gt;request.tenantId&lt;/code&gt; for use in subsequent operations.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Context&lt;/strong&gt;: Every request has the &lt;code&gt;tenantId&lt;/code&gt; available in the request object that can be used to retrieve the data for a particular tenant.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Database Connection&lt;/strong&gt;: When a database connection is created, the multitenant database configuration is used by &lt;code&gt;multitenantKnex&lt;/code&gt;, which in turn allows for database connection pooling at the application server level.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tenant-specific database pool:&lt;/strong&gt; Each tenant gets its own database connection pool, and every request for that tenant reuses it for its database operations.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;
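
&lt;p&gt;A minimal sketch of the &lt;code&gt;tenantId&lt;/code&gt; extraction described above is shown below. The regexp is a hypothetical example of what &lt;code&gt;REQUEST_X_FORWARDED_HOST_REGEXP&lt;/code&gt; might look like, not the production value.&lt;/p&gt;

```typescript
// Sketch of the tenantId plugin: extract a tenant ID from x-forwarded-host
// using a configured regexp, returning 400 when the header is missing or bad.
// The hostname pattern below is an illustrative assumption.
const X_FORWARDED_HOST_REGEXP = /^([a-z0-9]+)\.storage\.example\.com$/;

function extractTenantId(xForwardedHost: string | undefined): string {
  if (!xForwardedHost) {
    throw Object.assign(new Error("x-forwarded-host header is required"), { statusCode: 400 });
  }
  const match = xForwardedHost.match(X_FORWARDED_HOST_REGEXP);
  if (!match) {
    throw Object.assign(new Error("invalid x-forwarded-host header"), { statusCode: 400 });
  }
  return match[1]; // stored on request.tenantId for later use
}
```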

&lt;ol start="3"&gt;
&lt;li&gt; &lt;strong&gt;Tenant Configuration Lookup (Internal Layer):&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;Cache&lt;/strong&gt;: The &lt;code&gt;getTenantConfig&lt;/code&gt; function first tries to read from a cache, identified using the tenantId from the request. If the entry is not found, it will continue to the next step.

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Database Fetch&lt;/strong&gt;: The system uses the tenant ID to query the &lt;code&gt;tenants&lt;/code&gt; table in the multi-tenant database and fetch the configuration for that tenant.

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;Mutual Exclusion&lt;/strong&gt;: To prevent multiple simultaneous lookups that could lead to thundering herd issues, a mutex is implemented using the &lt;code&gt;createMutexByKey&lt;/code&gt; function. This ensures that only one request can perform the lookup at a time, thereby reducing contention and improving efficiency.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Data Decryption:&lt;/strong&gt; The data, including the &lt;code&gt;databaseUrl&lt;/code&gt;, &lt;code&gt;serviceKey&lt;/code&gt;, and &lt;code&gt;jwtSecret&lt;/code&gt;, is decrypted using the configured encryption key, ensuring no sensitive data is readable in the database.&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;In-memory cache&lt;/strong&gt;: Once obtained, the tenant config is saved in the in-memory cache for fast access on subsequent requests for the same tenant.&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Tracing:&lt;/strong&gt; Every time the tenant is fetched, if tracing is enabled, a span named &lt;code&gt;tenant.fetch&lt;/code&gt; is created with the tenantId.&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Pooling:&lt;/strong&gt; The retrieved configuration is used to configure the database connection, or if set, the database connection pool.&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;
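
&lt;p&gt;The cache-plus-mutex lookup described above can be sketched as a promise-deduplication pattern: concurrent requests for the same tenant share one in-flight database fetch. This is a simplified stand-in for &lt;code&gt;getTenantConfig&lt;/code&gt; and &lt;code&gt;createMutexByKey&lt;/code&gt;, with the database query and decryption elided.&lt;/p&gt;

```typescript
// Sketch of getTenantConfig: in-memory cache plus a per-tenant in-flight map
// so concurrent requests trigger only one database fetch (thundering-herd guard).
const configCache = new Map();
const inFlight = new Map();

let dbFetches = 0;
async function fetchFromDb(tenantId: string) {
  dbFetches += 1; // stand-in for querying the multitenant tenants table
  return { tenantId, jwtSecret: "decrypted-secret" }; // decryption elided
}

function getTenantConfig(tenantId: string) {
  const cached = configCache.get(tenantId);
  if (cached) return Promise.resolve(cached);
  const pending = inFlight.get(tenantId);
  if (pending) return pending; // another request is already fetching this tenant
  const p = fetchFromDb(tenantId).then((config) => {
    configCache.set(tenantId, config); // cache for subsequent requests
    inFlight.delete(tenantId);
    return config;
  });
  inFlight.set(tenantId, p);
  return p;
}
```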

&lt;ol start="4"&gt;
&lt;li&gt; &lt;strong&gt;Resource Handling with Tenant Specific Context (Storage, Database Layer):&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Tenant Specific Connections&lt;/strong&gt;: The created &lt;code&gt;TenantConnection&lt;/code&gt; from &lt;code&gt;getPostgresConnection&lt;/code&gt; sets up the connection with the tenant data and also the custom settings, which allow using RLS with the &lt;code&gt;setScope&lt;/code&gt; method.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;RLS Authorization&lt;/strong&gt;: Once an object is created or read in the &lt;code&gt;storage&lt;/code&gt; layer, the underlying SQL calls will automatically respect RLS policies.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Scoped Operations:&lt;/strong&gt; The operations within &lt;code&gt;Storage&lt;/code&gt;, &lt;code&gt;ObjectStorage&lt;/code&gt;, and the internal &lt;code&gt;database&lt;/code&gt; operate within the tenant's context using the &lt;code&gt;tenantId&lt;/code&gt;, ensuring the correct credentials and settings are used for the given resources.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;ol start="5"&gt;
&lt;li&gt;
&lt;strong&gt;S3 Credentials Management&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3 Credentials&lt;/strong&gt;: Each tenant can also have a specific S3 key and secret, which is managed in the &lt;code&gt;tenants_s3_credentials&lt;/code&gt; table.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unique Access Key&lt;/strong&gt;: Each credential for the tenant is tied to a unique access key.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rotation&lt;/strong&gt;: These credentials can be rotated when creating new credentials for a particular tenant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access Restrictions&lt;/strong&gt;: The service key or anon key cannot be used to sign a request to a particular tenant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Caching&lt;/strong&gt;: The &lt;code&gt;getS3CredentialsByAccessKey&lt;/code&gt; caches all the S3 credentials in memory for a specific amount of time.&lt;/li&gt;
&lt;li&gt;If there is an update or deletion, the cache is automatically invalidated via a PubSub listener in &lt;code&gt;listenForTenantUpdate&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Scope&lt;/strong&gt;: The &lt;code&gt;getS3CredentialsByAccessKey&lt;/code&gt; will also return a &lt;code&gt;claims&lt;/code&gt; payload that can be used to create a scoped JWT token.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Key Elements for Data Isolation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Multi-Tenant Database:&lt;/strong&gt; All tenant-specific configuration information such as database URLs, JWT secrets, and service keys is managed in a centralized multi-tenant database, separated from the actual storage data.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Request Context:&lt;/strong&gt; The tenant ID and configuration are part of the request context, which is used for all subsequent operations.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;In-Memory Cache:&lt;/strong&gt; Frequent tenant configurations are cached in memory for quick retrieval.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Data Encryption:&lt;/strong&gt; Sensitive data, such as passwords and secrets, are encrypted at rest in the multi-tenant database.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How It Works in Practice:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tenant Setup:&lt;/strong&gt; An administrator creates a new tenant using the multi-tenant API, which stores the tenant configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Client Request:&lt;/strong&gt; A client attempts to access a resource, providing a JWT or presigned S3 token in the Authorization header, or using an upload signed URL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tenant Identification:&lt;/strong&gt; The system extracts the tenant ID from the request using the &lt;code&gt;x-forwarded-host&lt;/code&gt; header sent by the load balancer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configuration Lookup:&lt;/strong&gt; The &lt;code&gt;getTenantConfig&lt;/code&gt; function retrieves the specific configuration for the tenant, including the database URL, secrets, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Database Connection:&lt;/strong&gt; A database connection is made using the tenant-specific configuration, including tenant-specific database pool options if any.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;RLS Enforcement:&lt;/strong&gt; Queries against the storage database use the custom functions &lt;code&gt;auth.uid&lt;/code&gt; and &lt;code&gt;auth.role&lt;/code&gt; using the extracted JWT from the header.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Response Generation:&lt;/strong&gt; The system responds to the client with the requested data, ensuring data isolation and security at all stages.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
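
&lt;p&gt;Step 6 above relies on Postgres row-level security. As an illustration only (not a policy shipped verbatim by supabase/storage), an RLS policy on &lt;code&gt;storage.objects&lt;/code&gt; that restricts reads to the object's owner might look like:&lt;/p&gt;

```sql
-- Illustrative RLS policy: only the owner (auth.uid()) may read their objects.
create policy "owner can read"
on storage.objects for select
using (auth.uid() = owner);
```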

&lt;p&gt;This is how the multi-tenant application achieves strong data isolation.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deployment in supabase storage</title>
      <dc:creator>Ajit Singh</dc:creator>
      <pubDate>Wed, 15 Jan 2025 08:21:12 +0000</pubDate>
      <link>https://dev.to/ajitsinghkaler/deployement-in-supabase-storage-1mhf</link>
      <guid>https://dev.to/ajitsinghkaler/deployement-in-supabase-storage-1mhf</guid>
      <description>&lt;h1&gt;How deployment works in Supabase Storage&lt;/h1&gt;

&lt;p&gt;Let's break down the deployment aspects of the Supabase Storage repository, focusing on how the various services are wired together in the &lt;code&gt;docker-compose.yml&lt;/code&gt; files and how they interact. We'll try to cover the full operations side of the repository.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Deployment Overview&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;The repository provides several Docker Compose files to manage different deployment scenarios, primarily focusing on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker-compose.yml&lt;/code&gt;&lt;/strong&gt;: This file sets up a single-tenant environment with a PostgreSQL database, a connection pooler (pgBouncer), a MinIO S3-compatible storage, and the Storage API itself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;docker-compose-multi-tenant.yml&lt;/code&gt;&lt;/strong&gt;: This file sets up a multi-tenant environment, adding a multi-tenant database, Supavisor (a connection pooler and proxy for multi-tenant Postgres), MinIO, and the Storage API (configured for multi-tenancy).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;./.docker/docker-compose-infra.yml&lt;/code&gt;&lt;/strong&gt;: Defines the basic infrastructure components shared between the single-tenant and multi-tenant setups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;./.docker/docker-compose-monitoring.yml&lt;/code&gt;&lt;/strong&gt;: Defines the basic monitoring components shared between the single-tenant and multi-tenant setups.&lt;/li&gt;
&lt;/ul&gt;
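
&lt;p&gt;For orientation, a stripped-down sketch of the single-tenant &lt;code&gt;storage&lt;/code&gt; service is shown below, using only variables discussed in this article. The values are illustrative placeholders; the repository's &lt;code&gt;docker-compose.yml&lt;/code&gt; defines the full set of services and variables.&lt;/p&gt;

```yaml
# Minimal sketch of the storage service wiring (illustrative values only).
services:
  storage:
    image: supabase/storage-api:latest
    ports:
      - "5000:5000"
    depends_on:
      - tenant_db
      - pg_bouncer
      - minio_setup
    environment:
      SERVER_PORT: 5000
      AUTH_JWT_SECRET: change-me-jwt-secret
      DATABASE_URL: postgres://postgres:postgres@tenant_db:5432/postgres
      STORAGE_BACKEND: s3
      STORAGE_S3_ENDPOINT: http://minio:9000
      STORAGE_S3_FORCE_PATH_STYLE: "true"
```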

&lt;h3&gt;&lt;strong&gt;Component Interaction in &lt;code&gt;docker-compose.yml&lt;/code&gt; (Single-Tenant):&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Here's a breakdown of the services defined in &lt;code&gt;docker-compose.yml&lt;/code&gt; and their interactions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;storage&lt;/code&gt; (Supabase Storage API):&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image:&lt;/strong&gt; &lt;code&gt;supabase/storage-api:latest&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; This is the core service that provides the Storage API, handling object storage logic, auth, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ports:&lt;/strong&gt; Exposes port &lt;code&gt;5000&lt;/code&gt; for incoming HTTP requests.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Dependencies:&lt;/strong&gt; Depends on &lt;code&gt;tenant_db&lt;/code&gt;, &lt;code&gt;pg_bouncer&lt;/code&gt;, and &lt;code&gt;minio_setup&lt;/code&gt;. It requires the tenant database to be ready and migrated, and it also needs a bucket on the S3-compatible provider before starting; that is why &lt;code&gt;minio_setup&lt;/code&gt; is a dependency here.&lt;br&gt;
    &lt;br&gt;
    Environment Variables&lt;br&gt;
    &lt;/p&gt;
&lt;ul&gt;

        &lt;li&gt;
&lt;code&gt;SERVER_PORT: 5000&lt;/code&gt;: the exposed port on the container&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;AUTH_JWT_SECRET&lt;/code&gt;: JWT secret to validate requests&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;AUTH_JWT_ALGORITHM&lt;/code&gt;: Algorithm to use with the JWT library&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;DATABASE_URL&lt;/code&gt;: Connection URL to PostgreSQL&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;DATABASE_POOL_URL&lt;/code&gt;: Connection URL to &lt;code&gt;pgBouncer&lt;/code&gt; connection pooler&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;DB_INSTALL_ROLES: true&lt;/code&gt;: Indicates if roles need to be installed on the database (if it is set to &lt;code&gt;false&lt;/code&gt;, it needs to be managed outside of the application)&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;STORAGE_BACKEND: s3&lt;/code&gt;: which type of backend to use (&lt;code&gt;s3&lt;/code&gt; or &lt;code&gt;file&lt;/code&gt;)&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;STORAGE_S3_BUCKET&lt;/code&gt;: Name of the S3 bucket to use&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;STORAGE_S3_ENDPOINT&lt;/code&gt;: S3 endpoint to use&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;STORAGE_S3_FORCE_PATH_STYLE: true&lt;/code&gt;: If true, uses path-style S3 URLs (bucket name in the path) instead of virtual-hosted-style URLs (bucket name as a subdomain)&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;STORAGE_S3_REGION&lt;/code&gt;: S3 bucket region&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; and &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt;: MinIO credentials&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;UPLOAD_FILE_SIZE_LIMIT&lt;/code&gt;, &lt;code&gt;UPLOAD_FILE_SIZE_LIMIT_STANDARD&lt;/code&gt;: limits for file uploads&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;TUS_URL_PATH&lt;/code&gt;, &lt;code&gt;TUS_URL_EXPIRY_MS&lt;/code&gt;: TUS settings&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;IMAGE_TRANSFORMATION_ENABLED: "true"&lt;/code&gt;: Enables or disables image transformations&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;IMGPROXY_URL&lt;/code&gt;, &lt;code&gt;IMGPROXY_REQUEST_TIMEOUT&lt;/code&gt;: Configuration for the imgproxy URL and timeout&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;S3_PROTOCOL_ACCESS_KEY_ID&lt;/code&gt; and &lt;code&gt;S3_PROTOCOL_ACCESS_KEY_SECRET&lt;/code&gt;: If the access key or secret are set on the request, those values will be used instead of the static &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt;, &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt;
&lt;/li&gt;

        &lt;li&gt;
&lt;code&gt;S3_PROTOCOL_ALLOWS_SERVICE_KEY_AS_SECRET&lt;/code&gt;: if true, allows using service keys as secrets&lt;/li&gt;

    &lt;/ul&gt;

    &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Functionality:&lt;/strong&gt; This is the core of the storage engine; it uses various libraries from the repo, in particular:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The database classes such as &lt;code&gt;TenantConnection&lt;/code&gt; to connect to the database&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;Storage&lt;/code&gt; class to interact with the database and the underlying storage&lt;/li&gt;
&lt;li&gt;All the auth validations using &lt;code&gt;jwt&lt;/code&gt; and &lt;code&gt;apikey&lt;/code&gt; plugins&lt;/li&gt;
&lt;li&gt;Registers all the TUS, object, bucket, S3, and render HTTP endpoints&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;tenant_db&lt;/code&gt; (PostgreSQL Database):&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image:&lt;/strong&gt; &lt;code&gt;postgres:15&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Provides the PostgreSQL database that stores the buckets, objects, and file metadata&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ports:&lt;/strong&gt; Exposes port &lt;code&gt;5432&lt;/code&gt; for database access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Healthcheck:&lt;/strong&gt; Checks whether the service is healthy using &lt;code&gt;pg_isready&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;POSTGRES_DB&lt;/code&gt;, &lt;code&gt;POSTGRES_USER&lt;/code&gt;, &lt;code&gt;POSTGRES_PASSWORD&lt;/code&gt;: PostgreSQL credentials&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;pg_bouncer&lt;/code&gt; (pgBouncer Connection Pooler):&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image:&lt;/strong&gt; &lt;code&gt;bitnami/pgbouncer:latest&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Manages a pool of connections to the PostgreSQL database. This helps in preventing database connection overload when a large number of connections come at once and also allows for transaction-level pooling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ports:&lt;/strong&gt; Exposes port &lt;code&gt;6432&lt;/code&gt; for connections from the &lt;code&gt;storage&lt;/code&gt; service&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;POSTGRESQL_USERNAME&lt;/code&gt;, &lt;code&gt;POSTGRESQL_HOST&lt;/code&gt;, &lt;code&gt;POSTGRESQL_PASSWORD&lt;/code&gt;: PostgreSQL connection details&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PGBOUNCER_POOL_MODE&lt;/code&gt;: Sets the pool mode to transaction, which is recommended for serverless or edge environments&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PGBOUNCER_IGNORE_STARTUP_PARAMETERS&lt;/code&gt;: List of parameters to ignore on startup&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PGBOUNCER_STATS_USERS&lt;/code&gt;: Users that are able to query stats&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;minio&lt;/code&gt; (MinIO S3-Compatible Storage):&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image:&lt;/strong&gt; &lt;code&gt;minio/minio&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Provides a local S3-compatible object storage that the Storage API will use&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ports:&lt;/strong&gt; Exposes ports &lt;code&gt;9000&lt;/code&gt; (API) and &lt;code&gt;9001&lt;/code&gt; (console)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Healthcheck:&lt;/strong&gt; Checks whether the service is healthy via a TCP connection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;MINIO_ROOT_USER&lt;/code&gt;, &lt;code&gt;MINIO_ROOT_PASSWORD&lt;/code&gt;: MinIO credentials&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Command:&lt;/strong&gt; &lt;code&gt;server --console-address ":9001" /data&lt;/code&gt; starts the MinIO server, exposes the web console on port &lt;code&gt;9001&lt;/code&gt;, and stores files under &lt;code&gt;/data&lt;/code&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;ol start="5"&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;minio_setup&lt;/code&gt; (MinIO Setup):&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image:&lt;/strong&gt; &lt;code&gt;minio/mc&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Configures MinIO by setting up an alias and creating a bucket&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependencies:&lt;/strong&gt; Depends on &lt;code&gt;minio&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Entrypoint:&lt;/strong&gt; Runs the &lt;code&gt;mc&lt;/code&gt; command to create a bucket in MinIO&lt;/li&gt;
&lt;/ul&gt;
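&lt;p&gt;As a rough sketch, the setup container's job can be pictured like this (the alias name, bucket name, and credentials are placeholders, not the repository's actual values):&lt;/p&gt;

```yaml
# Illustrative sketch: register an alias for the MinIO server, then create the bucket.
minio_setup:
  image: minio/mc
  depends_on: [minio]
  entrypoint: >
    /bin/sh -c "
    /usr/bin/mc alias set local http://minio:9000 placeholder-user placeholder-password;
    /usr/bin/mc mb --ignore-existing local/supa-storage-bucket;
    "
```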

&lt;ol start="6"&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;imgproxy&lt;/code&gt;:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image:&lt;/strong&gt; &lt;code&gt;darthsim/imgproxy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Handles image transformation using the imgproxy project&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ports:&lt;/strong&gt; Exposes port &lt;code&gt;8080&lt;/code&gt; for incoming requests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Volumes:&lt;/strong&gt; The local folder &lt;code&gt;data&lt;/code&gt; is mapped to the Docker folder &lt;code&gt;/images/data&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;IMGPROXY_WRITE_TIMEOUT&lt;/code&gt; and &lt;code&gt;IMGPROXY_READ_TIMEOUT&lt;/code&gt;: Sets the read and write timeout for image operations&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;IMGPROXY_REQUESTS_QUEUE_SIZE&lt;/code&gt;: The maximum number of requests allowed to wait in the queue&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;IMGPROXY_LOCAL_FILESYSTEM_ROOT&lt;/code&gt;: The root folder where images are stored&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;IMGPROXY_USE_ETAG&lt;/code&gt;: Enable the usage of ETag when available&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;IMGPROXY_ENABLE_WEBP_DETECTION&lt;/code&gt;: Enable detection of WebP format images&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
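&lt;p&gt;Putting the imgproxy settings together, the service definition can be sketched roughly as follows (the timeout and queue values here are illustrative placeholders):&lt;/p&gt;

```yaml
# Illustrative imgproxy service sketch -- values are placeholders.
imgproxy:
  image: darthsim/imgproxy
  ports: ["8080:8080"]
  volumes:
    - ./data:/images/data
  environment:
    IMGPROXY_LOCAL_FILESYSTEM_ROOT: /images
    IMGPROXY_USE_ETAG: "true"
    IMGPROXY_ENABLE_WEBP_DETECTION: "true"
    IMGPROXY_READ_TIMEOUT: 20
    IMGPROXY_WRITE_TIMEOUT: 20
    IMGPROXY_REQUESTS_QUEUE_SIZE: 24
```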

&lt;h3&gt;
  
  
  &lt;strong&gt;Component Interaction in &lt;code&gt;docker-compose-multi-tenant.yml&lt;/code&gt; (Multi-Tenant):&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This configuration includes all of the above plus additional services for multi-tenancy:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;multitenant_db&lt;/code&gt; (PostgreSQL Database):&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image:&lt;/strong&gt; &lt;code&gt;postgres:15&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Manages the database schema for multiple tenants. It stores metadata and configuration information for each tenant&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ports:&lt;/strong&gt; Exposes port &lt;code&gt;5433&lt;/code&gt; for database access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Healthcheck:&lt;/strong&gt; Checks whether the service is healthy using &lt;code&gt;pg_isready&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;POSTGRES_DB&lt;/code&gt;, &lt;code&gt;POSTGRES_USER&lt;/code&gt;, &lt;code&gt;POSTGRES_PASSWORD&lt;/code&gt;: PostgreSQL credentials&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Configs:&lt;/strong&gt; Loads the SQL schema from &lt;code&gt;/docker-entrypoint-initdb.d/init.sql&lt;/code&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;supavisor&lt;/code&gt; (Supavisor Connection Pooler and Proxy):&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image:&lt;/strong&gt; &lt;code&gt;supabase/supavisor:latest&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Acts as a connection pooler and proxy for routing database connections in a multi-tenant setup. It will have a pool of connections for each tenant&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ports:&lt;/strong&gt; Exposes ports &lt;code&gt;4000&lt;/code&gt; (API), &lt;code&gt;5452&lt;/code&gt; (session), and &lt;code&gt;6543&lt;/code&gt; (transaction)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependencies:&lt;/strong&gt; &lt;code&gt;multitenant_db&lt;/code&gt;, &lt;code&gt;tenant_db&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Healthcheck:&lt;/strong&gt; Checks whether the service is healthy by pinging &lt;code&gt;http://localhost:4000/api/health&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;PORT&lt;/code&gt;, &lt;code&gt;PROXY_PORT_SESSION&lt;/code&gt;, &lt;code&gt;PROXY_PORT_TRANSACTION&lt;/code&gt;: Supavisor port configuration&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DATABASE_URL&lt;/code&gt;: Connection URL to the multi-tenant database&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CLUSTER_POSTGRES: "true"&lt;/code&gt;: Enables the clustered PostgreSQL configuration&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SECRET_KEY_BASE&lt;/code&gt;, &lt;code&gt;VAULT_ENC_KEY&lt;/code&gt;, &lt;code&gt;API_JWT_SECRET&lt;/code&gt;, &lt;code&gt;METRICS_JWT_SECRET&lt;/code&gt;, &lt;code&gt;REGION&lt;/code&gt;: Other environment variables needed by the application&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Command:&lt;/strong&gt; Migrates the Supavisor data structure with &lt;code&gt;/app/bin/migrate&lt;/code&gt; and then&lt;br&gt;
 starts the server with &lt;code&gt;/app/bin/server&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;supavisor_setup&lt;/code&gt; (Supavisor Setup):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image:&lt;/strong&gt; &lt;code&gt;supabase/supavisor:latest&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Sets up an initial tenant in the Supavisor database&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependencies:&lt;/strong&gt; Depends on &lt;code&gt;supavisor&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Command:&lt;/strong&gt; Creates the tenant inside the Supavisor database via an HTTP PUT request&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
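&lt;p&gt;The tenant-creation step boils down to an authenticated HTTP PUT against the Supavisor admin API. A hedged sketch of what such a setup service might look like (the tenant id, token variable, endpoint path, and connection details are placeholders, not the repository's actual values):&lt;/p&gt;

```yaml
# Illustrative sketch: create a tenant in Supavisor via its admin API.
supavisor_setup:
  image: supabase/supavisor:latest
  depends_on: [supavisor]
  command: >
    curl -s -X PUT http://supavisor:4000/api/tenants/placeholder-tenant
    -H "Authorization: Bearer $API_JWT"
    -H "Content-Type: application/json"
    -d '{"tenant": {"db_host": "tenant_db", "db_port": 5432}}'
```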

&lt;h3&gt;
  
  
  &lt;strong&gt;The Role of MinIO, pgBouncer, and Supavisor&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MinIO:&lt;/strong&gt; Acts as a local object storage system that is used by the Storage Service; it allows easy setup and use without the need for a cloud provider. The Storage Service uses a local bucket &lt;code&gt;supa-storage-bucket&lt;/code&gt; as a default to store files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pgBouncer:&lt;/strong&gt; Reduces the number of direct connections to the PostgreSQL database using connection pooling, which helps in preventing database overload and allows for maintaining a healthy database connection. It also enables transaction-level pooling. This is a single connection pooler, which is used only for the storage application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supavisor:&lt;/strong&gt; Supavisor is a multi-tenant connection pooler that connects to the multi-tenant database and acts as a proxy for all the tenant databases. It uses a database to handle connection pooling and manage tenants.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Monitoring&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The repository also contains additional configurations and tooling that are important for monitoring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;.docker/docker-compose-monitoring.yml&lt;/code&gt;:&lt;/strong&gt; This file sets up monitoring tools such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;pg_bouncer_exporter&lt;/code&gt;: Exports metrics from pgBouncer&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;postgres_exporter&lt;/code&gt;: Exports metrics from PostgreSQL&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;prometheus&lt;/code&gt;: Collects all the metrics from services like storage, PostgreSQL, and Supavisor&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;grafana&lt;/code&gt;: Used for visualizing collected data from Prometheus, and it has a set of dashboards that you can use for monitoring storage, PostgreSQL, and Supavisor&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;jaeger&lt;/code&gt;: This is a collector for traces that are created on the application&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;otel-collector&lt;/code&gt;: This is a collector for traces&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;.github/workflows/&lt;/code&gt;:&lt;/strong&gt; This directory includes GitHub Actions workflows to automate build, test, release, deployment, and documentation tasks&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;.releaserc&lt;/code&gt;:&lt;/strong&gt; This file defines release configurations for &lt;code&gt;semantic-release&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
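&lt;p&gt;For orientation, a Prometheus configuration for a stack like this typically lists each exporter as a scrape target. The job names and ports below are assumptions for illustration, not the repository's actual &lt;code&gt;prometheus.yml&lt;/code&gt;:&lt;/p&gt;

```yaml
# Illustrative Prometheus scrape config -- targets and ports are assumptions.
scrape_configs:
  - job_name: storage
    static_configs:
      - targets: ["storage:5000"]
  - job_name: pgbouncer
    static_configs:
      - targets: ["pg_bouncer_exporter:9127"]
  - job_name: postgres
    static_configs:
      - targets: ["postgres_exporter:9187"]
```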

&lt;h2&gt;
  
  
  &lt;strong&gt;Deployment Process Summary&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Local Development:&lt;/strong&gt; For local development, &lt;code&gt;docker-compose-infra.yml&lt;/code&gt; is used to create all the shared services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production Deployment:&lt;/strong&gt; In production, the &lt;code&gt;storage&lt;/code&gt; service should be deployed in a cloud environment using a database and an S3-compatible object store.

&lt;ul&gt;
&lt;li&gt;The Docker images used for the deployments are built using the workflow defined on &lt;code&gt;.github/workflows/release.yml&lt;/code&gt;, which are then published to Docker Hub and GHCR.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mirror.yml&lt;/code&gt; is used to mirror the newly published versions on different providers.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Tenant Configuration:&lt;/strong&gt; When setting up multi-tenancy:

&lt;ul&gt;
&lt;li&gt;Create a multitenant database.&lt;/li&gt;
&lt;li&gt;Set up a Supavisor instance.&lt;/li&gt;
&lt;li&gt;Use the admin API to create a new tenant specifying all configurations needed.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;infra&lt;/code&gt; scripts can be used to restart and deploy the basic infrastructure.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;docker-compose-monitoring.yml&lt;/code&gt; is used to create the monitoring services.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>How migrations work in supabase/storage</title>
      <dc:creator>Ajit Singh</dc:creator>
      <pubDate>Wed, 15 Jan 2025 08:18:19 +0000</pubDate>
      <link>https://dev.to/ajitsinghkaler/how-migrations-work-in-supabasestorage-4dpk</link>
      <guid>https://dev.to/ajitsinghkaler/how-migrations-work-in-supabasestorage-4dpk</guid>
      <description>&lt;h1&gt;
  
  
  Supabase Storage migrations
&lt;/h1&gt;

&lt;p&gt;Let's learn about database migrations in this Supabase Storage repository, covering both single-tenant and multi-tenant setups, and exploring the three different migration strategies: "on request," "progressive," and "full fleet."&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Single-Tenant Migrations&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In a single-tenant setup, we have one admin application interacting with a single database. Here's how migrations are handled:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Migration Files:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  Migrations are stored in the &lt;code&gt;migrations/tenant&lt;/code&gt; directory.

&lt;ul&gt;
&lt;li&gt;  Each migration file is a SQL file and has a numbered prefix, e.g., &lt;code&gt;0001-initialmigration.sql&lt;/code&gt;, &lt;code&gt;0002-storage-schema.sql&lt;/code&gt;, that indicates in which order they should be applied.

&lt;ul&gt;
&lt;li&gt;  They may contain operations to create tables, indices, change columns, or define functions, etc.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;&lt;code&gt;internal/database/migrations/types.ts&lt;/code&gt;:&lt;/strong&gt; This file defines a mapping of the different migration names to their associated ids.&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;
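&lt;p&gt;The numbered-prefix convention means apply order can be recovered by sorting on the numeric prefix. A minimal sketch of that idea (this is not the repository's actual loader):&lt;/p&gt;

```javascript
// Sort migration filenames by their numeric prefix so they apply in order,
// e.g. "0001-initialmigration.sql" before "0002-storage-schema.sql".
// Illustrative sketch, not the repository's actual loader.
function sortMigrations(files) {
  return [...files].sort(
    (a, b) => parseInt(a.split("-")[0], 10) - parseInt(b.split("-")[0], 10)
  );
}
```

&lt;p&gt;Sorting numerically, rather than lexicographically, keeps the order correct even if a prefix ever grows past four digits.&lt;/p&gt;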

&lt;ol start="2"&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;runMigrationsOnTenant&lt;/code&gt; Function (src/internal/database/migrations/migrate.ts):&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Connection:&lt;/strong&gt; This function creates a database connection by using the &lt;code&gt;databaseUrl&lt;/code&gt; and creates a &lt;code&gt;Client&lt;/code&gt; instance from &lt;code&gt;pg&lt;/code&gt; library.

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;connectAndMigrate&lt;/code&gt; Function:&lt;/strong&gt; This helper function is invoked with the database configuration and path for the migrations, and it takes care of applying the migrations to that given database.

&lt;ul&gt;
&lt;li&gt;  It reads all the files in a particular directory using &lt;code&gt;loadMigrationFilesCached&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  If the &lt;code&gt;migrations&lt;/code&gt; table is not present in the database it creates it.&lt;/li&gt;
&lt;li&gt;  It reads the applied migrations using the &lt;code&gt;migrations&lt;/code&gt; table.

&lt;ul&gt;
&lt;li&gt;  If there are applied migrations, it refreshes the migration position. This is for backward compatibility.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  It compares the applied migrations with the available migrations in the directory using &lt;code&gt;validateMigrationHashes&lt;/code&gt;; if there is a mismatch it throws an error and stops the migrations, unless the &lt;code&gt;dbRefreshMigrationHashesOnMismatch&lt;/code&gt; option is set to true, in which case the hashes in the &lt;code&gt;migrations&lt;/code&gt; table are updated instead.&lt;/li&gt;

&lt;li&gt;  It will filter migrations that are pending to be run using &lt;code&gt;filterMigrations&lt;/code&gt;.&lt;/li&gt;

&lt;li&gt;  For each pending migration, the system runs &lt;code&gt;runMigration&lt;/code&gt; with the SQL file that will perform the schema updates.&lt;/li&gt;

&lt;li&gt;  It sets the &lt;code&gt;search_path&lt;/code&gt; for every session, by using &lt;code&gt;SET search_path TO ...&lt;/code&gt;.&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;
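&lt;p&gt;The core of the "compare applied against available and run the remainder" step can be sketched as a simple filter. This illustrates the idea behind &lt;code&gt;filterMigrations&lt;/code&gt;, not its actual implementation:&lt;/p&gt;

```javascript
// Illustrative sketch of the "filter pending" step: keep only migrations whose
// id is greater than the highest id already recorded in the migrations table.
// Not the repository's actual filterMigrations implementation.
function filterPending(available, applied) {
  const lastApplied = applied.reduce((max, m) => (m.id > max ? m.id : max), 0);
  return available.filter((m) => m.id > lastApplied);
}
```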

&lt;ol start="3"&gt;
&lt;li&gt; &lt;strong&gt;How It's Triggered:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  When the main storage application starts up (&lt;code&gt;src/start/server.ts&lt;/code&gt;) the function &lt;code&gt;runMigrationsOnTenant&lt;/code&gt; is called using the &lt;code&gt;databaseURL&lt;/code&gt;.

&lt;ul&gt;
&lt;li&gt;  On every request the migrations are verified to be up to date; if they are not, &lt;code&gt;db-init&lt;/code&gt; triggers the migration method to bring the database up to date.

&lt;ul&gt;
&lt;li&gt;  This is only enabled when the application is not in multi-tenant mode.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Multi-Tenant Migrations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In a multi-tenant setup, we have multiple tenants sharing the same application instance, each with its own data but potentially sharing some database schemas. Here's how migrations are handled:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Multi-Tenant Database Migrations:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Files:&lt;/strong&gt;  These migrations are stored in the &lt;code&gt;migrations/multitenant&lt;/code&gt; directory and follow the same format as the single-tenant migrations.

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Purpose:&lt;/strong&gt; These migrations are used for database changes that affect the multi-tenant database schema.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Execution:&lt;/strong&gt; The &lt;code&gt;runMultitenantMigrations&lt;/code&gt; function is called when the server starts to apply changes to the database which is used to manage the different tenants (&lt;code&gt;docker-compose-multi-tenant.yml&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Connection&lt;/strong&gt;: The function uses the &lt;code&gt;multitenantKnex&lt;/code&gt; database client to connect to the multitenant database.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt; &lt;strong&gt;Tenant-Specific Migrations:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Tenant &lt;code&gt;migrations&lt;/code&gt; Table:&lt;/strong&gt; Each tenant has their own private database schema and the system tracks migration state in a &lt;code&gt;migrations&lt;/code&gt; table for every tenant.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;runMigrationsOnTenant&lt;/code&gt; Function:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  It uses the &lt;code&gt;databaseUrl&lt;/code&gt; from each tenant to connect to each database.&lt;/li&gt;
&lt;li&gt;  The process of applying these migrations is identical to the single-tenant case.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Tenant Tracking&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;  The &lt;code&gt;tenants&lt;/code&gt; table in the multi-tenant database, holds the &lt;code&gt;migrations_version&lt;/code&gt; and &lt;code&gt;migrations_status&lt;/code&gt; which indicates up to what migration has been applied to a particular tenant and if the migration was successful or not.&lt;/li&gt;
&lt;li&gt;  The system uses the &lt;code&gt;cursor_id&lt;/code&gt; to paginate through the available tenants.&lt;/li&gt;
&lt;li&gt;  The &lt;code&gt;listTenantsToMigrate&lt;/code&gt; function returns an async iterator that lists all the tenants where &lt;code&gt;migrations_status&lt;/code&gt; is not set to &lt;code&gt;"COMPLETED"&lt;/code&gt; or &lt;code&gt;migrations_version&lt;/code&gt; differs from the latest migration.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
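&lt;p&gt;The selection rule that &lt;code&gt;listTenantsToMigrate&lt;/code&gt; applies can be sketched as a predicate over the &lt;code&gt;tenants&lt;/code&gt; rows (an illustration of the rule only, not the repository's query):&lt;/p&gt;

```javascript
// Illustrative predicate behind listTenantsToMigrate: a tenant needs migrating
// when its status is not COMPLETED or its version lags the latest migration.
function needsMigration(tenant, latestVersion) {
  if (tenant.migrations_status !== "COMPLETED") return true;
  return tenant.migrations_version !== latestVersion;
}
```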

&lt;h2&gt;
  
  
  &lt;strong&gt;Migration Strategies (Multi-Tenant)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In a multi-tenant environment, the repository uses three different strategies to apply pending migrations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;MultitenantMigrationStrategy.ON_REQUEST&lt;/code&gt; (On Request):&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;How it works:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  Every request is checked by the &lt;code&gt;db&lt;/code&gt; plugin for pending migrations, using the tenant information present on the request.&lt;/li&gt;
&lt;li&gt;  The plugin uses the &lt;code&gt;hasMissingSyncMigration&lt;/code&gt; function, which checks whether the migrations have completed on the database before proceeding with the request.&lt;/li&gt;
&lt;li&gt;  If the migration status is not set to &lt;code&gt;COMPLETED&lt;/code&gt;, or &lt;code&gt;migrations_version&lt;/code&gt; is not the latest, all pending migrations are executed for that specific tenant. This check happens every time the application runs a database query for that tenant.&lt;/li&gt;
&lt;li&gt;  To avoid concurrent migrations on the same tenant, the migrations use &lt;code&gt;createMutexByKey&lt;/code&gt; so they execute sequentially and only once per tenant at a time.&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Use Case:&lt;/strong&gt; This is useful if you want every request to have all the migrations applied, and you want to prioritize safety instead of performance.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
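&lt;p&gt;The per-tenant serialization can be pictured as a promise chain keyed by tenant id. This is an illustrative stand-in for &lt;code&gt;createMutexByKey&lt;/code&gt;, not its actual implementation:&lt;/p&gt;

```javascript
// Illustrative stand-in for createMutexByKey: chain async work per tenant id
// so at most one migration runs per tenant at a time.
const chains = new Map();

function withTenantMutex(tenantId, task) {
  const previous = chains.get(tenantId) || Promise.resolve();
  const next = previous.then(task, task); // run after the previous task settles
  chains.set(tenantId, next);
  return next;
}
```

&lt;p&gt;Each tenant gets its own chain, so migrations for different tenants can still run concurrently; only work for the same tenant is serialized.&lt;/p&gt;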

&lt;ol start="2"&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;MultitenantMigrationStrategy.PROGRESSIVE&lt;/code&gt; (Progressive):&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;How it Works:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  The &lt;code&gt;progressiveMigrations&lt;/code&gt; class (in &lt;code&gt;src/internal/database/migrations/progressive.ts&lt;/code&gt;) runs in a loop at a configured interval, checking in the background for any tenants that need to be migrated.&lt;/li&gt;
&lt;li&gt;  When a new tenant is added, or a new migration becomes available, the tenant is added to a list that &lt;code&gt;createJobs&lt;/code&gt; uses to perform the migrations asynchronously.&lt;/li&gt;
&lt;li&gt;  If a request arrives for a tenant with pending migrations, the plugin triggers the migration immediately via &lt;code&gt;runMigrationsOnTenant&lt;/code&gt; and uses &lt;code&gt;updateTenantMigrationsState&lt;/code&gt; to mark the migrations as completed. In this case the migration blocks the request, so there is no version inconsistency. On each request the system checks whether a SYNC migration is pending; if so, the migration has to run on the main thread.&lt;/li&gt;
&lt;li&gt;  The list is capped by the &lt;code&gt;maxSize&lt;/code&gt; property; when the cap is reached, the migration jobs are created.&lt;/li&gt;
&lt;li&gt;  The background task takes the list of tenants and uses the &lt;code&gt;RunMigrationsOnTenants&lt;/code&gt; class to send a message to the queue for each tenant with pending migrations; queue workers then pick up those jobs and apply the required migrations.&lt;/li&gt;
&lt;li&gt;  The system sets the status to failed if a migration fails multiple times.&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Use Case:&lt;/strong&gt; Suitable for applications that want to apply migrations in the background.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
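&lt;p&gt;The bounded list that feeds &lt;code&gt;createJobs&lt;/code&gt; can be sketched like this (the names and shape below are assumptions for illustration, not the repository's API):&lt;/p&gt;

```javascript
// Illustrative sketch of the progressive scheduler's bounded tenant list:
// tenants queue up until maxSize is reached, then a batch of migration jobs
// is created. Names here are assumptions, not the repository's API.
function makeScheduler(maxSize, createJobs) {
  const pending = [];
  return {
    add(tenantId) {
      pending.push(tenantId);
      if (pending.length >= maxSize) {
        createJobs(pending.splice(0, pending.length)); // flush the full batch
      }
    },
    flush() {
      if (pending.length > 0) createJobs(pending.splice(0, pending.length));
    },
  };
}
```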

&lt;ol start="3"&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;MultitenantMigrationStrategy.FULL_FLEET&lt;/code&gt; (Full Fleet):&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;How it Works:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;runMigrationsOnAllTenants&lt;/code&gt; is called asynchronously when the application starts.&lt;/li&gt;
&lt;li&gt;  It uses advisory locks to prevent concurrent executions.&lt;/li&gt;
&lt;li&gt;  It iterates through the tenants that require a migration using &lt;code&gt;listTenantsToMigrate&lt;/code&gt; and sends a message per tenant to the &lt;code&gt;RunMigrationsOnTenants&lt;/code&gt; queue via &lt;code&gt;RunMigrationsOnTenants.batchSend&lt;/code&gt;, so each tenant migration is performed asynchronously by the queue workers.&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Use Case:&lt;/strong&gt; This method is useful if you have a large fleet of tenants, and you want the migrations to be handled asynchronously.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
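&lt;p&gt;The fan-out step can be pictured as chunking the tenant list into batches before enqueueing, in the spirit of &lt;code&gt;RunMigrationsOnTenants.batchSend&lt;/code&gt; (the helper below and its chunk size are hypothetical):&lt;/p&gt;

```javascript
// Hypothetical helper: split the tenant list into fixed-size chunks so queue
// workers can process them in parallel. The chunk size is an assumption.
function toBatches(tenantIds, batchSize) {
  const batches = [];
  for (let i = 0; tenantIds.length > i; i += batchSize) {
    batches.push(tenantIds.slice(i, i + batchSize));
  }
  return batches;
}
```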

&lt;p&gt;&lt;strong&gt;Code Flow&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Initial Setup:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  On the application startup, depending if it is a multi-tenant environment or not, the migration strategy will be determined.&lt;/li&gt;
&lt;li&gt;  Single-tenant migrations are run when the server boots up, using &lt;code&gt;runMigrationsOnTenant&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;  Multi-tenant migrations are run on the multi-tenant database using &lt;code&gt;runMultitenantMigrations&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  If the strategy is set to &lt;code&gt;PROGRESSIVE&lt;/code&gt; or &lt;code&gt;FULL_FLEET&lt;/code&gt;, an async migration process will be started in the background using &lt;code&gt;startAsyncMigrations&lt;/code&gt;, to continue to migrate all the tenants while the application is running.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Request Processing (Multi-Tenant):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  When a request is made, depending on the selected strategy, one of three possible approaches to trigger the migrations is used.&lt;/li&gt;
&lt;li&gt;  If migrations are run on request, then the request processing will be blocked while migrations are being executed.&lt;/li&gt;
&lt;li&gt;  If migrations are run in &lt;code&gt;PROGRESSIVE&lt;/code&gt; mode and there is a pending &lt;code&gt;---SYNC---&lt;/code&gt; migration, the requests are also blocked until the migrations are done; otherwise they are queued to run in the background.&lt;/li&gt;
&lt;li&gt;  The system uses the &lt;code&gt;tenantId&lt;/code&gt; to verify if migrations have been done in the current tenant.&lt;/li&gt;
&lt;li&gt;  If &lt;code&gt;FULL_FLEET&lt;/code&gt; is being used, a queue is used to migrate the tenants in the background.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Database Updates:&lt;/strong&gt; All changes are applied to the specific tenant's database, and the migration info in the &lt;code&gt;tenants&lt;/code&gt; table is updated.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Async Process&lt;/strong&gt;: In the progressive case, the background process will send messages to the queue so that worker instances can pick them up.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Single vs. Multi-Tenant:&lt;/strong&gt;  Single-tenant deployments use a straightforward approach, running migrations on server start. Multi-tenant deployments are more flexible, allowing migrations on each request, progressively, or in full fleet.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Flexibility and Control:&lt;/strong&gt; Three different types of migration strategies are offered to fit different environments.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Data Integrity:&lt;/strong&gt;  The use of version tracking helps ensure the integrity of the schema migrations.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Automated Migration:&lt;/strong&gt; This system uses a combination of row-level security and migrations to enforce the structure of the data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This should provide a solid understanding of how migrations are handled in single and multi-tenant modes.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Tracing and Metrics in supabase/storage</title>
      <dc:creator>Ajit Singh</dc:creator>
      <pubDate>Tue, 14 Jan 2025 07:59:35 +0000</pubDate>
      <link>https://dev.to/ajitsinghkaler/tracing-and-metrics-in-supabasestorage-1mk</link>
      <guid>https://dev.to/ajitsinghkaler/tracing-and-metrics-in-supabasestorage-1mk</guid>
      <description>&lt;h1&gt;
  
  
  Tracing and Metrics
&lt;/h1&gt;

&lt;p&gt;Let's learn more about how tracing and metrics are collected and used within the system. From a request perspective, we will look at how they are added and implemented.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tracing Lifecycle
&lt;/h2&gt;

&lt;p&gt;Tracing is used to observe the path a request takes through the system, providing a holistic view of the processing flow, including execution times, dependencies, and errors along the way. Here’s a detailed breakdown:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Request Ingress (HTTP Layer):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Instrumentation&lt;/strong&gt;: OpenTelemetry (OTEL) &lt;code&gt;HttpInstrumentation&lt;/code&gt; automatically creates a new root span when a request comes in. The span's name is &lt;code&gt;http.request&lt;/code&gt;. The span's context (traceId, spanId) can be accessed through the &lt;code&gt;trace.getActiveSpan()&lt;/code&gt; function.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attributes&lt;/strong&gt;: The instrumentation also sets relevant attributes to the span including the tenant id (if available), method, url, status code, route, headers, and more, based on &lt;code&gt;applyCustomAttributesOnSpan&lt;/code&gt;, &lt;code&gt;headersToSpanAttributes&lt;/code&gt; configuration in the &lt;code&gt;HttpInstrumentation&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging&lt;/strong&gt;: The &lt;code&gt;logRequest&lt;/code&gt; plugin is executed, capturing the request information in a log line.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context Propagation&lt;/strong&gt;: The span's context (including &lt;code&gt;traceId&lt;/code&gt; and &lt;code&gt;spanId&lt;/code&gt;) is attached to the current asynchronous execution context, ensuring consistent propagation to downstream operations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Authentication and Authorization (HTTP Layer):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Plugin&lt;/strong&gt;: The &lt;code&gt;jwt&lt;/code&gt; plugin is activated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Span Creation&lt;/strong&gt;: The &lt;code&gt;ClassInstrumentation&lt;/code&gt; plugin detects &lt;code&gt;jwt.verify&lt;/code&gt; calls, creates, and ends a new span named &lt;code&gt;jwt.verify&lt;/code&gt;, which represents authentication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Span Attributes&lt;/strong&gt;: The span has metadata such as the role that comes from the payload.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Database Operations (Internal Layer):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Plugin&lt;/strong&gt;: The &lt;code&gt;db&lt;/code&gt; plugin initializes the database connection pool and retrieves a connection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Span Creation&lt;/strong&gt;: The &lt;code&gt;ClassInstrumentation&lt;/code&gt; plugin detects &lt;code&gt;StorageKnexDB.runQuery&lt;/code&gt; calls and creates and ends a new span named &lt;code&gt;StorageKnexDB.runQuery.{{queryName}}&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Span Attributes&lt;/strong&gt;: The spans store the query name if present.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context Propagation&lt;/strong&gt;: The database query is instrumented using &lt;code&gt;@opentelemetry/instrumentation-knex&lt;/code&gt;, and any calls to the database or database pool will be automatically linked to the main trace context.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Business Logic (Storage Layer):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Span Creation&lt;/strong&gt;: When &lt;code&gt;Storage&lt;/code&gt;, &lt;code&gt;ObjectStorage&lt;/code&gt;, &lt;code&gt;Uploader&lt;/code&gt;, etc. methods are called, &lt;code&gt;ClassInstrumentation&lt;/code&gt; creates a span such as &lt;code&gt;Storage.createBucket&lt;/code&gt; or &lt;code&gt;Uploader.upload&lt;/code&gt;, using the &lt;code&gt;methodsToInstrument&lt;/code&gt; configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Span Attributes&lt;/strong&gt;: Span's name and attributes are configured by &lt;code&gt;setName&lt;/code&gt; and &lt;code&gt;setAttributes&lt;/code&gt; functions when available.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chaining&lt;/strong&gt;: If nested calls within the same trace are made, those spans will automatically be children of the parent operation; this also happens with database calls using the knex instrumentation.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Backend Interactions (Storage/Backend Layer)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Span Creation&lt;/strong&gt;: When a function like &lt;code&gt;S3Backend.getObject&lt;/code&gt; or &lt;code&gt;S3Backend.uploadObject&lt;/code&gt; is called, a new span with the name &lt;code&gt;S3Backend.getObject&lt;/code&gt; or &lt;code&gt;S3Backend.uploadObject&lt;/code&gt; is created by &lt;code&gt;ClassInstrumentation&lt;/code&gt;, respectively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Span Attributes&lt;/strong&gt;: Span contains attributes such as &lt;code&gt;operation: command.constructor.name&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context Propagation&lt;/strong&gt;: When an API call is made using &lt;code&gt;aws-sdk&lt;/code&gt;, the request is automatically added to the OTEL context to ensure all spans are connected; this happens because of &lt;code&gt;@opentelemetry/instrumentation-aws-sdk&lt;/code&gt; instrumentation.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Async Operations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Context Propagation&lt;/strong&gt;: If async operations are performed, and a new span is to be created, the context should be carried forward using the &lt;code&gt;trace.getTracer().startActiveSpan&lt;/code&gt; function to create and automatically activate the span, so that the new span will correctly be associated with the request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Span Attributes&lt;/strong&gt;: It might contain attributes depending on what method is calling it.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Response Phase (HTTP Layer):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Plugin&lt;/strong&gt;: &lt;code&gt;traceServerTime&lt;/code&gt; is used to measure the server time it took to complete the response, capturing time spent in queue, database, HTTP operations, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Span Attributes&lt;/strong&gt;: If the tracing mode is set to &lt;code&gt;debug&lt;/code&gt;, spans collected using the TraceCollector are serialized as JSON and added as a &lt;code&gt;stream&lt;/code&gt; attribute to the main HTTP span.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Span End&lt;/strong&gt;: All spans created using the active context, including the root &lt;code&gt;http.request&lt;/code&gt; span, are ended. This finalizes the span, collects metrics, and prepares it for export.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging&lt;/strong&gt;: The &lt;code&gt;logRequest&lt;/code&gt; plugin logs the request, including the status code and response time, and the &lt;code&gt;serverTimes&lt;/code&gt; that were captured with the &lt;code&gt;traceServerTime&lt;/code&gt; plugin.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Exporting Spans:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OTLP Exporter&lt;/strong&gt;: If configured, the OpenTelemetry &lt;code&gt;BatchSpanProcessor&lt;/code&gt; batches the created spans and sends them to the OTEL endpoint using the &lt;code&gt;OTLPTraceExporter&lt;/code&gt;.

&lt;ul&gt;
&lt;li&gt;This is done asynchronously in the background.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
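&lt;p&gt;The parent-child chaining described in the lifecycle above can be pictured with a toy tracer. This is a didactic sketch of context propagation only; the real system uses OpenTelemetry's &lt;code&gt;trace.getTracer().startActiveSpan&lt;/code&gt;, not this code.&lt;/p&gt;

```javascript
// Toy tracer illustrating how nested calls become child spans.
// The real implementation is OpenTelemetry; names mirror the article.
let activeSpan = null;

function startActiveSpan(name, fn) {
  const span = { name: name, parent: activeSpan, attributes: {} };
  const previous = activeSpan;
  activeSpan = span; // propagate context to nested calls
  try {
    return fn(span);
  } finally {
    activeSpan = previous; // span ends, parent context is restored
  }
}

// A request flows through the HTTP, storage, and database layers:
const trace = [];
startActiveSpan('http.request', function (root) {
  startActiveSpan('Storage.createBucket', function (op) {
    startActiveSpan('StorageKnexDB.runQuery.createBucket', function (db) {
      trace.push([db.name, db.parent.name, db.parent.parent.name]);
    });
  });
});
// trace[0] now holds the query span with its storage and HTTP ancestors
```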

&lt;h2&gt;
  
  
  Metrics Collection
&lt;/h2&gt;

&lt;p&gt;Metrics provide a numerical representation of system behavior, such as request rates, duration, etc. This system exposes metrics via a Prometheus endpoint, and here's how they are collected:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Request Level (HTTP Layer):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Counters&lt;/strong&gt;: The &lt;code&gt;fastify-metrics&lt;/code&gt; plugin collects HTTP request-related metrics such as request counts, duration, and error counts. It also stores the request data into the &lt;code&gt;storage_api_http_request_duration_seconds&lt;/code&gt; metrics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Labels&lt;/strong&gt;: Metrics collected by the &lt;code&gt;fastify-metrics&lt;/code&gt; plugin include &lt;code&gt;method&lt;/code&gt;, &lt;code&gt;route&lt;/code&gt;, and &lt;code&gt;status_code&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Label Aggregation&lt;/strong&gt;: All Prometheus labels for HTTP are stored in &lt;code&gt;fastify-metrics&lt;/code&gt; as tags.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Database Operations (Internal Layer):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Histograms&lt;/strong&gt;: The &lt;code&gt;DbQueryPerformance&lt;/code&gt; histogram records the time it takes for a database query to complete. It captures the time spent waiting for connection and the database query time as well as labels &lt;code&gt;region&lt;/code&gt; and the method name, stored as &lt;code&gt;name&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connections&lt;/strong&gt;: The metrics &lt;code&gt;DbActivePool&lt;/code&gt; and &lt;code&gt;DbActiveConnection&lt;/code&gt; track the pool connection counts and active connections.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;S3 Operations (Storage/Backend Layer):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Histograms&lt;/strong&gt;: The &lt;code&gt;S3UploadPart&lt;/code&gt; histogram records the time it takes to upload a part of a large file to the object storage service. It has one label, &lt;code&gt;region&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Uploads:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gauges&lt;/strong&gt;: &lt;code&gt;FileUploadStarted&lt;/code&gt; is incremented when the file upload process starts. &lt;code&gt;FileUploadedSuccess&lt;/code&gt; is incremented when an upload completes successfully.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Labels&lt;/strong&gt;: The upload-related metrics include labels for region and upload type (standard or multipart).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Queue Operations (Internal Layer):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Histograms&lt;/strong&gt;: &lt;code&gt;QueueJobSchedulingTime&lt;/code&gt; records the time it took to schedule the message to the queue, labeled with &lt;code&gt;name&lt;/code&gt;, which is usually the queue name and &lt;code&gt;region&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gauges&lt;/strong&gt;: &lt;code&gt;QueueJobScheduled&lt;/code&gt; for the messages scheduled to be processed by the queue, labeled with &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;region&lt;/code&gt;, &lt;code&gt;QueueJobCompleted&lt;/code&gt; for the number of completed messages, &lt;code&gt;QueueJobRetryFailed&lt;/code&gt; for the number of failed retries on each message, and &lt;code&gt;QueueJobError&lt;/code&gt;, which is the total count of errored messages. The labels used here are the queue names and region.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;HTTP Agent Metrics&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gauges&lt;/strong&gt;: &lt;code&gt;HttpPoolSocketsGauge&lt;/code&gt; for the number of active sockets, &lt;code&gt;HttpPoolFreeSocketsGauge&lt;/code&gt; for the number of free sockets, &lt;code&gt;HttpPoolPendingRequestsGauge&lt;/code&gt; for the pending requests, &lt;code&gt;HttpPoolErrorGauge&lt;/code&gt; for the errors, each one of them having &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;region&lt;/code&gt;, &lt;code&gt;protocol&lt;/code&gt;, and &lt;code&gt;type&lt;/code&gt; as labels.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Supavisor Metrics&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Custom Exporter&lt;/strong&gt;: Supavisor ships a custom Prometheus exporter that collects information about pool sizes, tenant status, connected clients, etc. These metrics are scraped by Prometheus as defined in its config file.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
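&lt;p&gt;To make the labeled histograms above concrete, here is a tiny in-memory sketch of one. It is not the Prometheus client API; it only shows how one time series is kept per unique label combination.&lt;/p&gt;

```javascript
// Minimal labeled-histogram sketch; the real code registers Prometheus
// metrics such as DbQueryPerformance (labels: region, name).
function createHistogram(name, labelNames) {
  const series = {}; // one list of observations per label combination
  function keyOf(labels) {
    return labelNames.map(function (l) { return labels[l]; }).join('|');
  }
  return {
    observe: function (labels, value) {
      const key = keyOf(labels);
      if (!series[key]) series[key] = [];
      series[key].push(value);
    },
    count: function (labels) {
      return (series[keyOf(labels)] || []).length;
    },
  };
}

// Record two query durations for the same region and query name:
const dbQueryPerformance = createHistogram('db_query_performance', ['region', 'name']);
dbQueryPerformance.observe({ region: 'us-east-1', name: 'createBucket' }, 0.012);
dbQueryPerformance.observe({ region: 'us-east-1', name: 'createBucket' }, 0.034);
```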

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;End-to-End Visibility&lt;/strong&gt;: Tracing provides a complete view of the request lifecycle, including HTTP, DB, and file I/O operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource-Specific Metrics&lt;/strong&gt;: Metrics provide an overview of request performance with different labels.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration with OpenTelemetry&lt;/strong&gt;: The use of OpenTelemetry allows traces to be sent to observability backends.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration with Prometheus&lt;/strong&gt;: The usage of Prometheus makes it easy to collect and visualize metrics.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>supabase</category>
    </item>
    <item>
      <title>Understanding Storage backends in supabase/storage</title>
      <dc:creator>Ajit Singh</dc:creator>
      <pubDate>Tue, 14 Jan 2025 07:24:00 +0000</pubDate>
      <link>https://dev.to/ajitsinghkaler/understanding-storage-backends-in-supabasestorage-4dff</link>
      <guid>https://dev.to/ajitsinghkaler/understanding-storage-backends-in-supabasestorage-4dff</guid>
      <description>&lt;h1&gt;
  
  
  Storage Backends
&lt;/h1&gt;

&lt;p&gt;A storage backend is an abstraction layer. Instead of directly interacting with specific storage systems (like a local file system, AWS S3, Google Cloud Storage, etc.), the repository interacts with a generic interface. This interface defines a set of operations (like getObject, putObject, deleteObject, etc.) that any conforming storage backend must implement. &lt;/p&gt;
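&lt;p&gt;The contract can be sketched as a common shape that every adapter satisfies. The method names follow the article, but the signatures here are simplified guesses; the real methods in supabase/storage take more parameters (version, headers, etc.).&lt;/p&gt;

```javascript
// Simplified sketch of the storage-backend contract, backed by memory.
// Any code written against this shape works with FileBackend or S3Backend.
function createMemoryBackend() {
  const objects = new Map(); // stand-in for disk or S3
  return {
    uploadObject: function (bucket, key, body) {
      objects.set(bucket + '/' + key, body);
      return { size: body.length };
    },
    getObject: function (bucket, key) {
      return objects.get(bucket + '/' + key);
    },
    deleteObject: function (bucket, key) {
      objects.delete(bucket + '/' + key);
    },
  };
}

const backend = createMemoryBackend();
backend.uploadObject('avatars', 'user-1.png', 'binary-data');
```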

&lt;p&gt;There are two storage backends used in this Supabase Storage repository: &lt;code&gt;FileBackend&lt;/code&gt; and &lt;code&gt;S3Backend&lt;/code&gt;. Let's learn more about them, how they interact with the file system and the database, and the key differences between them.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. &lt;code&gt;FileBackend&lt;/code&gt; Storage Adapter
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt; The &lt;code&gt;FileBackend&lt;/code&gt; adapter is designed to store and retrieve data using the local file system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Local File Storage:&lt;/strong&gt; Files are stored directly on the server's disk, organized within a directory structure based on the tenant, bucket, key (object path), and version information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metadata Storage:&lt;/strong&gt; File metadata is stored in the file system's extended attributes. Because extended attributes are part of the file itself, this avoids keeping a separate metadata file alongside every object.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Direct File Access:&lt;/strong&gt; Operations read and write to disk directly, without any intermediate layer or external service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Part Uploads:&lt;/strong&gt; Parts are written to a temporary directory and then concatenated into a single file when the upload completes.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Implementation Details:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Initialization:&lt;/strong&gt; The constructor fetches the storage path from the configuration or from the &lt;code&gt;FILE_STORAGE_BACKEND_PATH&lt;/code&gt; env variable.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;getObject&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;It constructs the file path using the bucket name, object key, and version.&lt;/li&gt;
&lt;li&gt;It reads the file from the disk using &lt;code&gt;fs.createReadStream&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;It retrieves metadata (content-type, cache-control) from the extended file attributes, using the &lt;code&gt;fs-xattr&lt;/code&gt; package to get those values.&lt;/li&gt;
&lt;li&gt;If the request is using range headers, it will stream data from disk at a specified range.&lt;/li&gt;
&lt;li&gt;Returns a stream along with the file metadata.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;uploadObject&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;The system will ensure a specific folder exists using &lt;code&gt;fs-extra.ensureFile&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Constructs the file path from given information.&lt;/li&gt;
&lt;li&gt;Writes the incoming stream to the file system using &lt;code&gt;fs.createWriteStream&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Sets metadata on the disk using the &lt;code&gt;xattr&lt;/code&gt; package.&lt;/li&gt;
&lt;li&gt;Returns the metadata by reading using the &lt;code&gt;headObject&lt;/code&gt; method.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;deleteObject&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Constructs the file path using the bucket name, object key, and version.&lt;/li&gt;
&lt;li&gt;Uses &lt;code&gt;fs.remove&lt;/code&gt; to delete the file from the file system.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;copyObject&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Constructs source and destination file paths.&lt;/li&gt;
&lt;li&gt;Uses &lt;code&gt;fs.copyFile&lt;/code&gt; to copy the file and ensures the folder where the files are being copied exists.&lt;/li&gt;
&lt;li&gt;It copies the metadata information from extended attributes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;deleteObjects&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Uses &lt;code&gt;fs.rm&lt;/code&gt; to recursively remove folders and files with the given prefixes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;headObject&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Constructs the file path based on the request.&lt;/li&gt;
&lt;li&gt;Uses &lt;code&gt;fs.stat&lt;/code&gt; to retrieve file system metadata.&lt;/li&gt;
&lt;li&gt;Reads file metadata using the &lt;code&gt;xattr&lt;/code&gt; package.&lt;/li&gt;
&lt;li&gt;Calculates the checksum using the &lt;code&gt;md5-file&lt;/code&gt; package.&lt;/li&gt;
&lt;li&gt;Returns relevant file metadata (size, content-type, cache-control, etag, last modified date).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;createMultiPartUpload&lt;/code&gt;:&lt;/strong&gt; Creates a folder where the file parts will be stored for multipart uploads, and a metadata file with the extra configurations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;uploadPart&lt;/code&gt;:&lt;/strong&gt; Creates a file part where the data is saved into, and sets the Etag metadata attribute.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;completeMultipartUpload&lt;/code&gt;:&lt;/strong&gt; Once all file parts are available, it concatenates all the files together using &lt;code&gt;multistream&lt;/code&gt; and calls the &lt;code&gt;uploadObject&lt;/code&gt; method to finalize the upload. It also removes all the temporary file parts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;abortMultipartUpload&lt;/code&gt;:&lt;/strong&gt; Removes the folder where the parts are stored.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;uploadPartCopy&lt;/code&gt;:&lt;/strong&gt; Copies parts from already uploaded files and stores them into a newly created temporary file that it later uses in &lt;code&gt;completeMultipartUpload&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;privateAssetUrl&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Returns a special local file path for internal processing purposes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependencies:&lt;/strong&gt; Uses packages like &lt;code&gt;fs-extra&lt;/code&gt;, &lt;code&gt;path&lt;/code&gt;, &lt;code&gt;md5-file&lt;/code&gt;, &lt;code&gt;fs-xattr&lt;/code&gt;, and &lt;code&gt;multistream&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Interaction with the Database:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;FileBackend&lt;/code&gt; adapter relies on the database for tenant and bucket information. All file operations are performed on disk, while the associated metadata is persisted through &lt;code&gt;StorageKnexDB&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity:&lt;/strong&gt; It's easy to set up and reason about; it's simple to debug since everything lives on the file system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No External Dependency:&lt;/strong&gt; It does not rely on an external service and can be started quickly on a local environment without the need for external dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lower Latency:&lt;/strong&gt; File reads and writes are performed locally, avoiding extra network calls.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Not Scalable:&lt;/strong&gt; It doesn't scale well beyond a single server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limited Reliability:&lt;/strong&gt; Data is at risk if the server is damaged.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inadequate for Production:&lt;/strong&gt; It is not suitable for production environments if they need reliability and high availability.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. &lt;code&gt;S3Backend&lt;/code&gt; Storage Adapter
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; The &lt;code&gt;S3Backend&lt;/code&gt; adapter interacts with an S3-compatible object storage service (like AWS S3, MinIO), abstracting away the specifics of the API. This is the preferred implementation if you want to achieve better scalability and high availability.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Remote Object Storage:&lt;/strong&gt; Uses an S3-compatible object store for the long-term storage of object data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS SDK:&lt;/strong&gt; It uses the AWS SDK for NodeJS to interact with S3 services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication &amp;amp; Authorization:&lt;/strong&gt; It uses AWS SDK's authentication mechanisms based on access keys, secret keys, session tokens, and policies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Part Uploads:&lt;/strong&gt; It supports multi-part uploads for large files using &lt;code&gt;uploadPart&lt;/code&gt; and &lt;code&gt;createMultiPartUpload&lt;/code&gt; operations from the AWS SDK, and also has support for copying parts using &lt;code&gt;uploadPartCopy&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Implementation Details:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Initialization:&lt;/strong&gt; The constructor gets configuration from the environment, such as storage bucket name, access key, secret key, region, and endpoint. The constructor also creates a custom HTTP agent and monitors it if tracing is enabled to collect HTTP socket usage information.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;getObject&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Constructs an AWS &lt;code&gt;GetObjectCommand&lt;/code&gt; using the provided bucket name and key (path).&lt;/li&gt;
&lt;li&gt;Uses the AWS SDK to send this command to the S3 endpoint.&lt;/li&gt;
&lt;li&gt;The response, including body and metadata, is returned. It also adds all metadata from the S3 response to the metadata object. If a range header is provided, it gets a partial response using ranged requests.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;uploadObject&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Constructs an AWS &lt;code&gt;PutObjectCommand&lt;/code&gt; with the request body and metadata.&lt;/li&gt;
&lt;li&gt;If tracing is enabled, it uses the &lt;code&gt;monitorStream&lt;/code&gt; function to keep track of upload speeds. If the file is larger than the allowed &lt;code&gt;uploadFileSizeLimit&lt;/code&gt;, the upload will be aborted.&lt;/li&gt;
&lt;li&gt;Returns the object metadata.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;deleteObject&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Constructs a &lt;code&gt;DeleteObjectCommand&lt;/code&gt; using bucket name and key.&lt;/li&gt;
&lt;li&gt;Sends the request to S3 to delete the object.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;copyObject&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Constructs a &lt;code&gt;CopyObjectCommand&lt;/code&gt; using bucket name, source key, and destination key.&lt;/li&gt;
&lt;li&gt;Uses the AWS SDK to send the copy command.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;deleteObjects&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Constructs a &lt;code&gt;DeleteObjectsCommand&lt;/code&gt; using bucket name and keys.&lt;/li&gt;
&lt;li&gt;Sends the request to S3 to delete multiple objects at once.&lt;/li&gt;
&lt;li&gt;Returns a list of deleted or failed delete operations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;headObject&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Constructs an AWS &lt;code&gt;HeadObjectCommand&lt;/code&gt; using the provided bucket name and key.&lt;/li&gt;
&lt;li&gt;Returns metadata information from the AWS S3 object.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;privateAssetUrl&lt;/code&gt; Method:&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Creates a private URL using the &lt;code&gt;getSignedUrl&lt;/code&gt; function.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;createMultiPartUpload&lt;/code&gt;:&lt;/strong&gt; Creates a multi-part upload using the &lt;code&gt;CreateMultipartUploadCommand&lt;/code&gt; and returns the &lt;code&gt;UploadId&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;uploadPart&lt;/code&gt;:&lt;/strong&gt; Creates a part of the upload using &lt;code&gt;UploadPartCommand&lt;/code&gt; and returns the &lt;code&gt;eTag&lt;/code&gt; of the upload.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;completeMultipartUpload&lt;/code&gt;:&lt;/strong&gt; Commits a multipart upload by using &lt;code&gt;CompleteMultipartUploadCommand&lt;/code&gt;, which expects the part information. If no parts are provided, it will fetch the list of parts before completing the upload.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;abortMultipartUpload&lt;/code&gt;:&lt;/strong&gt; Aborts an upload using &lt;code&gt;AbortMultipartUploadCommand&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;uploadPartCopy&lt;/code&gt;:&lt;/strong&gt; Copies a specific byte range of an existing object and uploads it as part of a multipart upload using &lt;code&gt;UploadPartCopyCommand&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependencies:&lt;/strong&gt; Depends on the &lt;code&gt;@aws-sdk/client-s3&lt;/code&gt;, &lt;code&gt;@aws-sdk/lib-storage&lt;/code&gt;, &lt;code&gt;@aws-sdk/s3-request-presigner&lt;/code&gt;, and &lt;code&gt;@smithy/node-http-handler&lt;/code&gt; packages for interacting with S3 services, and also &lt;code&gt;@internal/http&lt;/code&gt; and &lt;code&gt;@internal/streams&lt;/code&gt; for stream monitoring and handling.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Interaction with the Database:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Like the &lt;code&gt;FileBackend&lt;/code&gt;, the &lt;code&gt;S3Backend&lt;/code&gt; doesn't interact directly with the database; instead, it uses the &lt;code&gt;StorageKnexDB&lt;/code&gt; class to manage the database operations. It only saves metadata information about the object in the database, such as size, mime-type, date of upload, etc.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability and High Availability:&lt;/strong&gt; Leverages the scalability and reliability of the S3 service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Durability and Redundancy:&lt;/strong&gt; Data is stored across multiple data centers for redundancy and durability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better for Production:&lt;/strong&gt; Production applications should use an S3-compatible backend.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Complexity:&lt;/strong&gt; Requires configuration with an S3 provider.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latency:&lt;/strong&gt; Might be prone to higher latency as every request goes through the network.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
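&lt;p&gt;The multipart flow (create an upload, send parts, complete) can be simulated without the AWS SDK. This toy version only mirrors the shape of the steps; the real adapter issues &lt;code&gt;CreateMultipartUploadCommand&lt;/code&gt;, &lt;code&gt;UploadPartCommand&lt;/code&gt;, and &lt;code&gt;CompleteMultipartUploadCommand&lt;/code&gt;.&lt;/p&gt;

```javascript
// Toy in-memory simulation of the S3 multipart-upload lifecycle.
function createMultipartStore() {
  const uploads = {};
  let nextId = 1;
  return {
    createMultiPartUpload: function () {
      const uploadId = 'upload-' + nextId;
      nextId = nextId + 1;
      uploads[uploadId] = []; // parts accumulate here by part number
      return uploadId;
    },
    uploadPart: function (uploadId, partNumber, body) {
      uploads[uploadId][partNumber - 1] = body;
      return { eTag: 'etag-' + partNumber }; // S3 returns an ETag per part
    },
    completeMultipartUpload: function (uploadId) {
      // Concatenate parts in order, as S3 does when the upload commits.
      const object = uploads[uploadId].join('');
      delete uploads[uploadId];
      return object;
    },
  };
}

const store = createMultipartStore();
const id = store.createMultiPartUpload();
store.uploadPart(id, 1, 'hello ');
store.uploadPart(id, 2, 'world');
// completeMultipartUpload(id) joins the parts in part-number order
```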

&lt;h2&gt;
  
  
  Key Differences Summarized
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;&lt;code&gt;FileBackend&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;&lt;code&gt;S3Backend&lt;/code&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Storage Location&lt;/td&gt;
&lt;td&gt;Local file system&lt;/td&gt;
&lt;td&gt;Remote object storage (S3-compatible)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scalability&lt;/td&gt;
&lt;td&gt;Not scalable&lt;/td&gt;
&lt;td&gt;Highly scalable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reliability&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Complexity&lt;/td&gt;
&lt;td&gt;Simpler to set up&lt;/td&gt;
&lt;td&gt;Requires S3 API configuration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data Handling&lt;/td&gt;
&lt;td&gt;Direct file I/O&lt;/td&gt;
&lt;td&gt;API calls via AWS SDK&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Metadata&lt;/td&gt;
&lt;td&gt;Extended file attributes&lt;/td&gt;
&lt;td&gt;S3 metadata&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ideal Use Cases&lt;/td&gt;
&lt;td&gt;Development, testing, or local use when self-hosting&lt;/td&gt;
&lt;td&gt;Production, scalable, and reliable storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Authentication&lt;/td&gt;
&lt;td&gt;No authentication&lt;/td&gt;
&lt;td&gt;Supports AWS S3 authentication&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Instrumentation&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;Support for custom HTTP agent and tracing&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;FileBackend&lt;/code&gt; is suited to local development, self-hosting, and simple scenarios, while &lt;code&gt;S3Backend&lt;/code&gt; is for production, or wherever scalability, high availability, or data durability is required. Both adapters interact with the database in the same way, through &lt;code&gt;StorageKnexDB&lt;/code&gt;: the storage layer acts as an abstraction between the adapters and the data persistence layer, so the adapters focus purely on reading and writing data on the storage backend.&lt;/p&gt;
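&lt;p&gt;As a rough illustration of this adapter pattern (the names below are illustrative, not the real supabase/storage interfaces), the storage layer can code against a single interface while the concrete backends stay swappable:&lt;/p&gt;

```typescript
// Illustrative sketch of the backend-adapter pattern: the storage layer
// talks to one interface; FileBackend / S3Backend are swappable behind it.
// Return types are loosened to "any" to keep the sketch short.
interface StorageBackendAdapter {
  uploadObject(bucket: string, key: string, body: Buffer): any;
  getObject(bucket: string, key: string): any;
  deleteObject(bucket: string, key: string): any;
}

// An in-memory stand-in (useful for tests), playing the FileBackend role.
class MemoryBackend implements StorageBackendAdapter {
  private files = new Map();

  async uploadObject(bucket: string, key: string, body: Buffer) {
    this.files.set(`${bucket}/${key}`, body);
  }

  async getObject(bucket: string, key: string) {
    const data = this.files.get(`${bucket}/${key}`);
    if (!data) throw new Error('NoSuchKey');
    return data;
  }

  async deleteObject(bucket: string, key: string) {
    this.files.delete(`${bucket}/${key}`);
  }
}
```

&lt;p&gt;The real adapters also carry metadata (content type, cache control) and report backend-specific errors; the point here is only the shared interface behind which the backends are interchangeable.&lt;/p&gt;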

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>supabase</category>
    </item>
    <item>
      <title>Auth in Supabase storage</title>
      <dc:creator>Ajit Singh</dc:creator>
      <pubDate>Tue, 14 Jan 2025 06:25:52 +0000</pubDate>
      <link>https://dev.to/ajitsinghkaler/auth-in-supabase-storage-3ik8</link>
      <guid>https://dev.to/ajitsinghkaler/auth-in-supabase-storage-3ik8</guid>
      <description>&lt;h1&gt;
  
  
  Authentication and Authorization
&lt;/h1&gt;

&lt;p&gt;Now we will walk through the authentication and authorization lifecycle: how authentication and authorization work in the storage repository, which parts are involved, and how they fit together.&lt;/p&gt;

&lt;h2&gt;
  
  
  Authentication and Authorization Components
&lt;/h2&gt;

&lt;p&gt;The Supabase storage repository employs a combination of JWTs (JSON Web Tokens) and row-level security (RLS) policies to ensure that only authorized users and services can access resources. It uses JWT for authentication and RLS for authorization. Here's a step-by-step breakdown:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Authentication:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JWT Acquisition:&lt;/strong&gt; Clients obtain JWTs through the Supabase Auth service or through the storage service for a signed upload. These JWTs contain claims that specify the user's identity, roles, and permissions. With these JWTs, the client can access the storage service. Sometimes, when signed URLs are generated by the storage service, the JWT also contains the resource to which it has access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Request Headers:&lt;/strong&gt; Clients making requests to the Storage API include the JWT in the &lt;code&gt;Authorization&lt;/code&gt; header, typically using the &lt;code&gt;Bearer &amp;lt;token&amp;gt;&lt;/code&gt; format (or via query params for upload signed URLs).&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;JWT Extraction (HTTP Layer):&lt;/strong&gt;&lt;br&gt;
    * The &lt;code&gt;auth-jwt&lt;/code&gt; plugin extracts the JWT from the request's &lt;code&gt;Authorization&lt;/code&gt; header or query param.&lt;br&gt;
    * It removes the &lt;code&gt;Bearer&lt;/code&gt; prefix if present.&lt;br&gt;
    * The extracted JWT is stored in &lt;code&gt;request.jwt&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JWT Verification (Internal Layer):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;verifyJWT&lt;/code&gt; function from &lt;code&gt;src/internal/auth/jwt.ts&lt;/code&gt; is called.&lt;/li&gt;
&lt;li&gt;It uses the server's JWT secret or JWKS to verify the signature and validity of the JWT.&lt;/li&gt;
&lt;li&gt;If signature validation fails, a 401 error is returned.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Retrieving the Key for Signature Verification:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The function &lt;code&gt;getJWTVerificationKey&lt;/code&gt; determines which key to use for verification based on the &lt;code&gt;kid&lt;/code&gt; on the token or the static secret.&lt;/li&gt;
&lt;li&gt;If the alg header from the token is using RSA or ECC, the key will be determined using &lt;code&gt;kty&lt;/code&gt; and &lt;code&gt;kid&lt;/code&gt;, or the static secret will be returned.&lt;/li&gt;
&lt;li&gt;If there is an issue extracting the public key from the JWKS, the system falls back to the configured static key when the algorithm is HS256, and fails otherwise.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;JWT Payload Parsing:&lt;/strong&gt; If the JWT is valid, the payload is parsed and saved in &lt;code&gt;request.jwtPayload&lt;/code&gt;. The &lt;code&gt;role&lt;/code&gt; key is assigned to &lt;code&gt;request.jwtPayload.role&lt;/code&gt; if available.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Request Decoration:&lt;/strong&gt; If the JWT is valid, the system also sets &lt;code&gt;request.isAuthenticated&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt;.

&lt;ul&gt;
&lt;li&gt;If an invalid or no JWT is provided but the route has the &lt;code&gt;allowInvalidJwt&lt;/code&gt; option, &lt;code&gt;request.jwtPayload&lt;/code&gt; and &lt;code&gt;request.isAuthenticated&lt;/code&gt; are set to &lt;code&gt;anon&lt;/code&gt; and &lt;code&gt;false&lt;/code&gt;, respectively.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Authorization Through RLS:&lt;/strong&gt; The RLS will check the authorization on specific database queries as explained in the next section.&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;
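&lt;p&gt;To make the verification step concrete, here is a stripped-down HS256-only sketch. The real &lt;code&gt;verifyJWT&lt;/code&gt; in &lt;code&gt;src/internal/auth/jwt.ts&lt;/code&gt; also handles JWKS, &lt;code&gt;kid&lt;/code&gt; lookup, and RSA/ECC keys; this is an illustration of the idea, not its actual code:&lt;/p&gt;

```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// Sign claims as a compact JWT using HS256 with a shared secret.
function signHS256(claims: object, secret: string): string {
  const enc = (o: object) => Buffer.from(JSON.stringify(o)).toString('base64url');
  const body = `${enc({ alg: 'HS256', typ: 'JWT' })}.${enc(claims)}`;
  const sig = createHmac('sha256', secret).update(body).digest('base64url');
  return `${body}.${sig}`;
}

// Verify by recomputing the signature over header.payload; on failure the
// server would answer 401. Returns the decoded claims on success.
function verifyHS256(token: string, secret: string): any {
  const [header, payload, signature] = token.split('.');
  if (!header || !payload || !signature) throw new Error('malformed token');
  const expected = createHmac('sha256', secret)
    .update(`${header}.${payload}`)
    .digest('base64url');
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) {
    throw new Error('invalid signature');
  }
  return JSON.parse(Buffer.from(payload, 'base64url').toString());
}
```

&lt;p&gt;On success, the decoded claims are what ends up in &lt;code&gt;request.jwtPayload&lt;/code&gt; for the rest of the request lifecycle.&lt;/p&gt;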

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Authorization and Row Level Security (RLS):&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Database Queries:&lt;/strong&gt; When performing database operations (reading, creating, updating, deleting records), queries go through RLS policies.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RLS Policies:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;RLS policies are defined in the database using PostgreSQL's row-level security mechanism. They are installed when the &lt;code&gt;storage.install_roles&lt;/code&gt; flag is enabled; the roles you configure when running the migrations are created on the database as well.&lt;/li&gt;
&lt;li&gt;RLS policies define rules for which users can access which rows of the database. These policies are based on the context of the request, such as &lt;code&gt;auth.uid()&lt;/code&gt; (user ID) and &lt;code&gt;auth.role()&lt;/code&gt; (user role) claims inside a JWT.&lt;/li&gt;
&lt;li&gt;For example, a simple policy can be written as &lt;code&gt;USING(owner = auth.uid())&lt;/code&gt;, which means only the objects with the same &lt;code&gt;owner&lt;/code&gt; can be viewed, modified, or deleted.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Dynamic RLS:&lt;/strong&gt; The system will extract the current authenticated user information from the JWT token and add it to the current database context. The RLS policy will then use these values to validate the user permissions against the database.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Enforcement:&lt;/strong&gt; The database automatically applies the RLS policies on queries and operations to restrict data access.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Error Handling:&lt;/strong&gt; If an RLS policy violation occurs, PostgreSQL returns an error. The application catches this error using the &lt;code&gt;DatabaseError&lt;/code&gt; error handler and returns a 403 (Forbidden) response.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Bypass with Service Key:&lt;/strong&gt; For server-side operations, the system uses the service key passed in the JWT, which bypasses any RLS rules in the database.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Scope Setting:&lt;/strong&gt; Before each database query, using the &lt;code&gt;TenantConnection.setScope&lt;/code&gt;, the system sets the required options to the &lt;code&gt;current_setting&lt;/code&gt;, allowing RLS to extract the required information from it.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Tracing:&lt;/strong&gt; If tracing is enabled, a span named &lt;code&gt;knex.query&lt;/code&gt; is created using the &lt;code&gt;@opentelemetry/instrumentation-knex&lt;/code&gt;, which shows the database query that was performed.&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;
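&lt;p&gt;A rough sketch of the scope-setting idea: JWT-derived values are published to the connection with &lt;code&gt;set_config()&lt;/code&gt; so RLS policies can read them back with &lt;code&gt;current_setting()&lt;/code&gt;. The setting names and shape here are illustrative; see &lt;code&gt;TenantConnection.setScope&lt;/code&gt; for the real implementation, which also binds values as query parameters rather than interpolating them:&lt;/p&gt;

```typescript
// Illustrative only: build the statements that expose JWT claims to RLS.
// Real code binds these values as parameters instead of interpolating,
// and the exact setting names differ.
interface Scope {
  role: string;
  sub?: string;
}

function buildScopeSql(scope: Scope): string[] {
  return [
    `set local role '${scope.role}'`,
    `select set_config('request.jwt.claim.sub', '${scope.sub ?? ''}', true)`,
    `select set_config('request.jwt.claim.role', '${scope.role}', true)`,
  ];
}
```

&lt;p&gt;Because the third argument to &lt;code&gt;set_config&lt;/code&gt; is &lt;code&gt;true&lt;/code&gt;, the values are transaction-local, so one tenant's claims can never leak into another request sharing the pool.&lt;/p&gt;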

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Presigned S3 URLs&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The presigned S3 URL follows the same flow as a normal GET object request.&lt;/li&gt;
&lt;li&gt;The URL is signed on the server side to allow access to the protected resources; it is used when uploading files or accessing private files using the S3 protocol.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;signJWT&lt;/code&gt; is used to sign the URL using the server secrets.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The authentication process happens using the &lt;code&gt;x-signature&lt;/code&gt; header, which has the value of the signed JWT.

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;verifyObjectSignature&lt;/code&gt; validates the provided token with the &lt;code&gt;verifyJWT&lt;/code&gt; function and checks that the URL in the JWT matches the route.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Admin API Key:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Any request to the &lt;code&gt;/admin&lt;/code&gt; routes needs to send an &lt;code&gt;apikey&lt;/code&gt; in the header to pass the validation; if this header is missing or invalid, the request will be refused.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Role of Schemas in Validation
&lt;/h2&gt;

&lt;p&gt;JSON Schemas are used here to validate the request and response data.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Request Validation:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Definition:&lt;/strong&gt; JSON Schema is used to define the structure and data types of incoming HTTP requests (params, body, and headers).

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enforcement:&lt;/strong&gt; Fastify uses the &lt;code&gt;ajv&lt;/code&gt; library to validate all incoming HTTP requests against the defined schemas before the route handler is called.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Handling:&lt;/strong&gt; If a request doesn't conform to the schema, Fastify automatically returns a 400 error with information about the validation failure.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
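&lt;p&gt;For a feel of what such a schema looks like, here is an illustrative Fastify-style route schema (field names are examples, not the actual storage API schemas):&lt;/p&gt;

```typescript
// Illustrative Fastify-style route schema: ajv validates the request
// against this before the handler runs; a mismatch yields a 400 response.
// Field names are examples, not the actual supabase/storage schemas.
const createBucketSchema = {
  body: {
    type: 'object',
    required: ['name'],
    properties: {
      name: { type: 'string', minLength: 1 },
      public: { type: 'boolean', default: false },
    },
    additionalProperties: false,
  },
  response: {
    200: {
      type: 'object',
      properties: { name: { type: 'string' } },
    },
  },
} as const;
```

&lt;p&gt;In Fastify the object is passed as the route's &lt;code&gt;schema&lt;/code&gt; option, and the same definition can be turned into a TypeScript type with &lt;code&gt;json-schema-to-ts&lt;/code&gt;.&lt;/p&gt;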

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data Structure:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Definition:&lt;/strong&gt; Schemas also help define the properties of the returned objects, which are later used to define interfaces using the &lt;code&gt;json-schema-to-ts&lt;/code&gt; library such as &lt;code&gt;Obj&lt;/code&gt;, &lt;code&gt;Bucket&lt;/code&gt;, &lt;code&gt;UploadMetadata&lt;/code&gt;, and more.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Metadata Validation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User metadata is validated as a generic &lt;code&gt;JSONB&lt;/code&gt;, so its content is not validated specifically; this avoids schema errors during upload, since the content can be anything.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How a request is authenticated and authorized
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Client Request:&lt;/strong&gt; A client attempts to upload a file, create a bucket, download an object, or any other action.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JWT Verification:&lt;/strong&gt; The server verifies the client's JWT, extracting authentication information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema Validation:&lt;/strong&gt; The server validates the client's request payload against the API schemas.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Interaction:&lt;/strong&gt; If authentication and validation are successful, a database transaction is created, and the data is sent to the database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RLS Enforcement:&lt;/strong&gt; The database applies row-level security policies using data extracted from the JWT, restricting access to resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operation Execution:&lt;/strong&gt; If authorization is successful, then the operation is executed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response Generation:&lt;/strong&gt; The system returns a response after completing all validation, auth, and permission steps.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Benefits
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Improved Data Integrity:&lt;/strong&gt; Schemas enforce consistency and correctness of input data, preventing invalid data from entering the system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Declarative Approach:&lt;/strong&gt; Using schemas in code makes it easier to see what the expected parameters are for every API call.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developer Experience:&lt;/strong&gt; The process provides a better developer experience by adding type safety and auto-validation features.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extensible:&lt;/strong&gt; Since the system relies on just two mechanisms, JWT and RLS, any system that uses JWT and RLS can work with the storage repository. In other words, you only need JWT and Postgres to use it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ends our exploration of the auth/authentication, RLS, and schema workflow. You should now have a clear view of how those pieces play an important role in every request, making the Supabase Storage Engine a reliable and secure system. Let me know if you have more questions or if you want to explore any topic in more detail.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>supabase</category>
    </item>
    <item>
      <title>Overview of supabase storage repository</title>
      <dc:creator>Ajit Singh</dc:creator>
      <pubDate>Tue, 14 Jan 2025 04:37:28 +0000</pubDate>
      <link>https://dev.to/ajitsinghkaler/learning-about-supabase-storage-repository-4c7j</link>
      <guid>https://dev.to/ajitsinghkaler/learning-about-supabase-storage-repository-4c7j</guid>
      <description>&lt;h1&gt;
  
  
  Supabase Storage Repository Overview
&lt;/h1&gt;

&lt;p&gt;To understand the Supabase Storage repository, we need to look at its components and how they interact with each other. It has many components; let's first list and discuss the major ones:&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Core Concepts:&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Storage Backend:&lt;/strong&gt; This is the service responsible for managing object storage. It supports two storage backends (S3 and local filesystem), manages metadata in a database, enforces access control (RLS), and provides APIs for clients.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Fastify:&lt;/strong&gt; This is the web framework used to create the HTTP API for the Storage service.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;PostgreSQL:&lt;/strong&gt; The primary data store for metadata (objects, buckets, user-defined metadata, multipart uploads, etc.).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;S3:&lt;/strong&gt; One of the supported storage backends, where the actual files are stored in object stores like AWS S3, DigitalOcean Spaces, Cloudflare R2, etc.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;TUS:&lt;/strong&gt; It is the protocol implemented for resumable uploads.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Imgproxy:&lt;/strong&gt; An external service used for transforming images (resizing, formatting).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Metrics:&lt;/strong&gt; Prometheus is used for tracking the performance of the application and monitoring all operations.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Authentication/Authorization:&lt;/strong&gt; Enforces access control using JWTs and Postgres Row Level Security (RLS).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Tenants:&lt;/strong&gt; Used in multi-tenant configurations to isolate data and configuration for different customers. This is how Supabase manages multiple clients in a single instance.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Tracing:&lt;/strong&gt; OpenTelemetry (OTel) for distributed tracing, used to debug performance issues.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Streaming &amp;amp; Events:&lt;/strong&gt; Streams and message queue systems (PgBoss, pubsub) for asynchronous operations, creating, and reacting to events (webhooks).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Schemas:&lt;/strong&gt; JSON schemas for request and response validation.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Interaction from a request lifecycle perspective
&lt;/h2&gt;

&lt;p&gt;Here we will look at how the components interact when a request is made to the storage service, so we can understand the flow of data through a request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interactions in Detail:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Client Request:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  A client sends an HTTP request to the Storage API (using Fastify).

&lt;ul&gt;
&lt;li&gt;  The request can be for various operations like:

&lt;ul&gt;
&lt;li&gt;  Creating a bucket.&lt;/li&gt;
&lt;li&gt;  Uploading/downloading objects.&lt;/li&gt;
&lt;li&gt;  Transforming images.&lt;/li&gt;
&lt;li&gt;  Listing objects in a bucket.&lt;/li&gt;
&lt;li&gt;  Initiating or completing a TUS upload.&lt;/li&gt;
&lt;li&gt;  S3 protocol operations&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Authentication and Authorization:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;JWT Authentication:&lt;/strong&gt; The &lt;code&gt;jwt&lt;/code&gt; plugin verifies the request's &lt;code&gt;Authorization&lt;/code&gt; header for a valid JWT.

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Tenant ID Extraction:&lt;/strong&gt; The &lt;code&gt;tenant-id&lt;/code&gt; plugin extracts the tenant ID from the subdomain of the &lt;code&gt;x-forwarded-host&lt;/code&gt; header (in multi-tenant setups) or from an environment variable for single-tenant.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;API Key Authentication:&lt;/strong&gt; Admin requests (e.g. multi-tenant management) use the &lt;code&gt;apikey&lt;/code&gt; plugin, which checks for the &lt;code&gt;apikey&lt;/code&gt; header in the request.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Request Context:&lt;/strong&gt; After JWT validation, tenant-specific information is added to the request object.

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;request.tenantId&lt;/code&gt; is set with the extracted or default tenant ID.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;request.isAuthenticated&lt;/code&gt; is set to &lt;code&gt;true&lt;/code&gt; if the JWT is valid, &lt;code&gt;false&lt;/code&gt; otherwise.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;request.jwtPayload&lt;/code&gt; contains the decoded JWT claims.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;request.owner&lt;/code&gt; contains the subject (&lt;code&gt;sub&lt;/code&gt;) claim from the JWT.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Request Handling and Validation:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Fastify Routing:&lt;/strong&gt; Fastify routes the request to the appropriate handler function.

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Schema Validation:&lt;/strong&gt; Each route is associated with a JSON schema. The body, params, and querystring of the request are validated with &lt;a href="https://github.com/ajv-validator/ajv" rel="noopener noreferrer"&gt;ajv&lt;/a&gt; against the defined schema to ensure that requests are well-formed.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Plugins&lt;/strong&gt;: Several plugins are attached to the request object to perform certain actions.

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;db&lt;/code&gt;&lt;/strong&gt;: An authenticated connection to the database that has all the information necessary to set the correct role in Postgres. It is used for all database operations and helps manage RLS (Row Level Security) for the tenant.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;storage&lt;/code&gt;&lt;/strong&gt;: An instance of the storage class wired to the appropriate backend- and database-specific class instances.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;metrics&lt;/code&gt;&lt;/strong&gt;: Collects request-level metrics, such as request counts and timings, to provide performance data for the application.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Storage Layer Interaction:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Storage class&lt;/strong&gt;: The storage class orchestrates all operations, interacting with the backend and database abstractions to fetch data or perform an action.

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Object Storage:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;S3 Backend:&lt;/strong&gt; When the storage configuration is S3, storage operations interact with an S3-compatible object store. The &lt;code&gt;s3&lt;/code&gt; plugin creates a new S3 client for each request with the tenant-specific settings (region, endpoint, etc.).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;File Backend:&lt;/strong&gt; When the storage configuration is local, storage operations interact with the local disk to perform all file operations.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;Upload&lt;/code&gt; Class:&lt;/strong&gt; Handles all multipart and single-part uploads to the backend. This class also enforces any upload limits defined in the bucket configuration.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Database Interaction:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  The &lt;code&gt;db&lt;/code&gt; plugin creates an authenticated database connection (&lt;code&gt;TenantConnection&lt;/code&gt;) using &lt;a href="https://knexjs.org/" rel="noopener noreferrer"&gt;knex&lt;/a&gt;, allowing for database operations.&lt;/li&gt;
&lt;li&gt;  Row-Level Security (RLS) is used to enforce access control on the database level, based on JWT claims and the tenant ID.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Image Transformation:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  If the request is for an image transformation, the &lt;code&gt;ImageRenderer&lt;/code&gt; class (which handles producing the actual image) is used: a secure URL is generated so that the image proxy can fetch the image from the object store and transform it as requested.

&lt;ul&gt;
&lt;li&gt;  Rate limiting is applied to prevent abuse of the image transformation service when enabled.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
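&lt;p&gt;The secure URL generation can be sketched as follows. Imgproxy verifies an HMAC-SHA256 computed over the salt plus the request path with a shared key (key and salt are supplied as hex strings); the processing path below is simplified:&lt;/p&gt;

```typescript
import { createHmac } from 'crypto';

// Sign an imgproxy processing path: imgproxy recomputes the HMAC-SHA256
// of (salt + path) with the shared key and rejects the request if the
// base64url signature prefix does not match.
function signImgproxyUrl(keyHex: string, saltHex: string, path: string): string {
  const hmac = createHmac('sha256', Buffer.from(keyHex, 'hex'));
  hmac.update(Buffer.from(saltHex, 'hex'));
  hmac.update(path);
  return `/${hmac.digest('base64url')}${path}`;
}

// e.g. resize to fit 200px wide, fetching the source object from S3
const signedPath = signImgproxyUrl(
  'cafe01',
  'babe02',
  '/resize:fit:200:0/plain/s3://bucket/avatar.png',
);
```

&lt;p&gt;Because the signature covers the whole processing path, a client cannot change the requested dimensions or source object without invalidating the URL.&lt;/p&gt;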

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;TUS Protocol&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt; When the TUS protocol is used to upload files, the request is routed to &lt;code&gt;TusServer&lt;/code&gt;, which manages the different operations in the TUS protocol and orchestrates them:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;FileStore&lt;/code&gt; or &lt;code&gt;S3Store&lt;/code&gt; manages the storage of the upload chunks (when uploading resumable files).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PgLock&lt;/code&gt; manages the concurrency of upload operations, ensuring that two users cannot operate on the same file at the same time.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;AlsMemoryKV&lt;/code&gt; manages metadata storage in the TUS context. It is an async-local-storage instance that TUS uses for resumable uploads.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
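&lt;p&gt;The core of a resumable upload is simple: ask the server how many bytes it already has (a &lt;code&gt;HEAD&lt;/code&gt; request in TUS), then send the next chunk from that offset. A minimal sketch of that bookkeeping:&lt;/p&gt;

```typescript
// Compute the next chunk to PATCH, given the file size, the offset the
// server reports it has persisted, and the configured chunk size.
// Returns null when the upload is already complete.
function nextChunk(fileSize: number, serverOffset: number, chunkSize: number) {
  if (serverOffset >= fileSize) return null;
  const end = Math.min(serverOffset + chunkSize, fileSize);
  return { start: serverOffset, end, last: end === fileSize };
}
```

&lt;p&gt;After a network failure, the client simply re-queries the offset and resumes from &lt;code&gt;start&lt;/code&gt;, which is what makes the protocol robust on unreliable connections.&lt;/p&gt;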

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Event System&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When an operation succeeds, a queue message may be generated. The queue is implemented in Postgres using &lt;code&gt;PgBoss&lt;/code&gt;, and events are used for the following operations:&lt;/li&gt;
&lt;li&gt; Webhooks: Used to notify external services about file events (upload/delete).&lt;/li&gt;
&lt;li&gt; Migrations: Handles async migrations of tenant databases, when new migrations are available.&lt;/li&gt;
&lt;li&gt;Object Admin Delete: Asynchronously deletes the previous versions of an object when it gets updated.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Asynchronous Operations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; Some operations like webhooks, background migrations, file processing, and deletion can be processed asynchronously through the queue system.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;PgBoss:&lt;/strong&gt; Used as the queue mechanism for asynchronous operations, persisting tasks to the database.

&lt;ul&gt;
&lt;li&gt; &lt;code&gt;RunMigrationsOnTenants&lt;/code&gt; queue handles the migrations of tenant databases.&lt;/li&gt;
&lt;li&gt; &lt;code&gt;ObjectAdminDelete&lt;/code&gt; queue handles the deletion of an object.&lt;/li&gt;
&lt;li&gt; &lt;code&gt;Webhook&lt;/code&gt; queue handles the sending of webhooks to external services.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Postgres Pubsub:&lt;/strong&gt; For real-time notifications, used for cache invalidation when the multi-tenant setup is in use. This uses &lt;code&gt;node&lt;/code&gt; EventEmitter to create the pubsub system.

&lt;ul&gt;
&lt;li&gt;  Invalidate the tenant config cache when the tenant information is changed or deleted.&lt;/li&gt;
&lt;li&gt;  Invalidate the S3 credential cache when the credentials are changed or deleted.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Metrics and Monitoring:&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Prometheus:&lt;/strong&gt; Used for gathering metrics.

&lt;ul&gt;
&lt;li&gt; &lt;code&gt;fastify-metrics&lt;/code&gt;: collects HTTP request metrics.&lt;/li&gt;
&lt;li&gt; &lt;code&gt;pino-logflare&lt;/code&gt;: is used as a custom pino logging strategy to push logs to logflare.&lt;/li&gt;
&lt;li&gt;  Custom metrics are exposed through the &lt;code&gt;/metrics&lt;/code&gt; endpoint to track database pool usage, queue sizes, S3 object transfers, HTTP request latencies, etc.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
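&lt;p&gt;The pubsub-driven cache invalidation described earlier can be sketched with Node's &lt;code&gt;EventEmitter&lt;/code&gt; (in the real system the trigger is a Postgres &lt;code&gt;NOTIFY&lt;/code&gt;, and the channel and class names differ):&lt;/p&gt;

```typescript
import { EventEmitter } from 'events';

// Sketch: a NOTIFY from Postgres is re-emitted on a local EventEmitter,
// and subscribers evict the stale tenant config so the next read
// re-fetches it from the database. Names are illustrative.
const pubsub = new EventEmitter();
const tenantConfigCache = new Map();

pubsub.on('tenants_update', (tenantId: string) => {
  tenantConfigCache.delete(tenantId);
});

tenantConfigCache.set('tenant-a', { maxFileSize: 50_000_000 });
pubsub.emit('tenants_update', 'tenant-a'); // simulate the NOTIFY arriving
```

&lt;p&gt;The same pattern covers the S3 credential cache: invalidation is cheap, and correctness comes from re-reading the database on the next request.&lt;/p&gt;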

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Tracing:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;OpenTelemetry:&lt;/strong&gt; Used to trace the execution of requests across different components.&lt;/li&gt;
&lt;li&gt; Spans are added to method calls for a better way to track executions.&lt;/li&gt;
&lt;li&gt; When &lt;code&gt;debug&lt;/code&gt; or &lt;code&gt;logs&lt;/code&gt; tracing level is set, OTel data is added to the logs.&lt;/li&gt;
&lt;li&gt;The OTel collector collects traces in various formats, which are exported to Jaeger or other external consumers.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Data Flow Summary:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Request:&lt;/strong&gt; Client makes an HTTP request.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Authentication:&lt;/strong&gt; JWT, API Key, and tenant extraction.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Authorization&lt;/strong&gt;: RLS and other code based authorization.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Data Access:&lt;/strong&gt; Data is read/written from/to the backend or database.

&lt;ul&gt;
&lt;li&gt;If file contents need to be transferred, they are downloaded from S3 or the local filesystem.&lt;/li&gt;
&lt;li&gt; If only metadata is needed, it is fetched from the database.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image Transformation:&lt;/strong&gt; If a transformation is required, an Imgproxy URL is built and the asset is fetched through it.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Asynchronous Task:&lt;/strong&gt; If needed, a task is enqueued for async processing.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Response:&lt;/strong&gt; Data is returned to client.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Monitoring &amp;amp; Tracing:&lt;/strong&gt;  Metrics and traces are generated along the way, and exported to the corresponding consumers.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Key Interaction Points:&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;HTTP Layer (Fastify) &amp;lt;-&amp;gt; Storage:&lt;/strong&gt; Fastify routes requests to the &lt;code&gt;Storage&lt;/code&gt; class or &lt;code&gt;s3protocolHandler&lt;/code&gt; based on the request parameters or headers. The Storage class uses the DB and S3 client to implement the required action.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Storage &amp;lt;-&amp;gt; Database (PostgreSQL):&lt;/strong&gt;  The &lt;code&gt;Storage&lt;/code&gt; class utilizes the database interface for metadata persistence and RLS enforcement.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Storage &amp;lt;-&amp;gt; Storage Backend:&lt;/strong&gt; The &lt;code&gt;Storage&lt;/code&gt; class relies on a backend adapter (either &lt;code&gt;S3Backend&lt;/code&gt; or &lt;code&gt;FileBackend&lt;/code&gt;) to perform the object storage operations.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Storage  &amp;lt;-&amp;gt; Imgproxy:&lt;/strong&gt; The &lt;code&gt;ImageRenderer&lt;/code&gt; class communicates with the &lt;code&gt;imgproxy&lt;/code&gt; for image transformations by generating presigned urls.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Fastify &amp;lt;-&amp;gt; Queue (PgBoss):&lt;/strong&gt; Queue jobs are created by Fastify routes and handled by workers in a separate pool, enabling asynchronous behaviour.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Fastify &amp;lt;-&amp;gt; Authentication:&lt;/strong&gt; Fastify utilizes auth plugins to authorize all the requests and set the &lt;code&gt;request.jwtPayload&lt;/code&gt;, &lt;code&gt;request.isAuthenticated&lt;/code&gt;, &lt;code&gt;request.tenantId&lt;/code&gt;, and &lt;code&gt;request.owner&lt;/code&gt; variables.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Fastify &amp;lt;-&amp;gt; Logging:&lt;/strong&gt; Fastify registers a pino logger in the request context and sets up structured logging for all the requests.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Fastify &amp;lt;-&amp;gt; Monitoring:&lt;/strong&gt; Fastify registers prometheus metrics middleware, to track the HTTP metrics and adds OTel spans for tracing purposes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Internal Components:&lt;/strong&gt; The core components use shared utility libraries and classes to perform core operations like streaming, concurrency control, and error handling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Supabase Storage repository is a highly flexible object storage service. It combines a core engine with a set of features for storing and manipulating files, and it can be deployed either as a single instance or as a multi-tenant instance with per-tenant configurations.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Key takeaways:&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Pluggable Architecture:&lt;/strong&gt; The system is highly modular and built to support multiple backends and integration points (S3 or local disk, HTTP, TUS, external image transformers).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Performance:&lt;/strong&gt; Caching, efficient connection pooling, and async mechanisms.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Observability:&lt;/strong&gt; OTel and Prometheus provide visibility into the application's health and performance.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Multi-tenancy support:&lt;/strong&gt; Tenant separation with DB isolation using connection pooling and RLS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This detailed overview should help you understand how the components interact in this system.&lt;/p&gt;

</description>
      <category>supabase</category>
      <category>javascript</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How Strong Are Browsers for File Conversions?</title>
      <dc:creator>Ajit Singh</dc:creator>
      <pubDate>Sun, 07 Jul 2024 03:26:07 +0000</pubDate>
      <link>https://dev.to/ajitsinghkaler/how-strong-are-browsers-for-file-conversions-245p</link>
      <guid>https://dev.to/ajitsinghkaler/how-strong-are-browsers-for-file-conversions-245p</guid>
      <description>&lt;h2&gt;
  
  
  Introduction: The Need for Browser-Based Conversions
&lt;/h2&gt;

&lt;p&gt;File conversions are a common necessity today. We frequently need to transform files from one format to another for various reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Compatibility: Different applications support different file formats.&lt;/li&gt;
&lt;li&gt;Size reduction: Some formats offer better compression for sharing or storage.&lt;/li&gt;
&lt;li&gt;Feature support: Certain formats provide additional features or metadata.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Traditionally, file conversions were handled by desktop applications or server-side processes. However, server-based conversions come with several significant drawbacks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Privacy Concerns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Exposure: Uploading files to a server increases the risk of sensitive information leaking.&lt;/li&gt;
&lt;li&gt;Data Retention: We often have no control over how long our data is stored on the server or how it's used. With high-quality training data in short supply, providers may even use uploaded files for AI training.&lt;/li&gt;
&lt;li&gt;Trust Issues: We must trust the service provider not to misuse our data or fall victim to data breaches.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Lack of Control:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Limited Customization: Server-based tools often offer one-size-fits-all solutions with limited customization options.&lt;/li&gt;
&lt;li&gt;Dependency on Service: We rely on the availability and reliability of the conversion service.&lt;/li&gt;
&lt;li&gt;Potential for Data Loss: Network issues during upload or download can result in data loss or corruption.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Speed and Bandwidth Limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upload Time: Large files can take significant time to upload, especially on slower connections.&lt;/li&gt;
&lt;li&gt;Server Processing Time: High server load can lead to delays in processing.&lt;/li&gt;
&lt;li&gt;Download Time: Converted files need to be downloaded, adding more time to the process.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Costs and Limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service Fees: Many online conversion tools charge fees, especially for larger files or frequent use.&lt;/li&gt;
&lt;li&gt;File Size Limits: Free services often impose strict limits on file sizes.&lt;/li&gt;
&lt;li&gt;Conversion Quotas: There may be restrictions on the number of conversions allowed.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Given these challenges, browser-based conversions offer several compelling advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Privacy: Files don't need to be uploaded to a server, reducing security risks.&lt;/li&gt;
&lt;li&gt;Control: We retain full control over our data and the conversion process.&lt;/li&gt;
&lt;li&gt;Speed: Client-side processing can be faster, especially for large files.&lt;/li&gt;
&lt;li&gt;Offline capability: Conversions can work without an internet connection.&lt;/li&gt;
&lt;li&gt;Accessibility: We can convert files from any device with a modern web browser.&lt;/li&gt;
&lt;li&gt;Reduced server load: Processing happens on the user's device, saving server resources.&lt;/li&gt;
&lt;li&gt;Cost-effective: No need for expensive server infrastructure or bandwidth costs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Browser-based conversions leverage the power of modern web technologies to perform complex file manipulations directly in the user's browser. This approach not only addresses the privacy and control issues associated with server-based conversions but also offers a more seamless and efficient user experience.&lt;/p&gt;

&lt;p&gt;In my view, online converters that process files server-side should switch to browser-based conversions, which give users far more control. When you need to convert sensitive documents, keeping them in the browser is a much safer option.&lt;/p&gt;

&lt;p&gt;Let's explore how modern web technologies enable these powerful in-browser conversions.&lt;/p&gt;

&lt;h3&gt;
  
  
  JavaScript Example: Converting HEIC to JPEG
&lt;/h3&gt;

&lt;p&gt;We'll start with a common use case: converting HEIC images (common on iOS devices) to the more widely supported JPEG format.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;heic2any&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;heic2any&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;convertHeicToJpeg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;heicFile&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jpegBlob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;heic2any&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;heicFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;toType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;image/jpeg&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;quality&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.8&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createObjectURL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jpegBlob&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Conversion failed:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Explanation:&lt;/p&gt;

&lt;p&gt;We import the &lt;code&gt;heic2any&lt;/code&gt; library, which handles the conversion.&lt;br&gt;
The &lt;code&gt;convertHeicToJpeg&lt;/code&gt; function is asynchronous, allowing non-blocking operation.&lt;br&gt;
We use &lt;code&gt;heic2any&lt;/code&gt; to convert the HEIC file to a JPEG blob.&lt;br&gt;
The quality parameter (0.8) balances image quality and file size.&lt;br&gt;
We create a URL for the resulting JPEG blob, which can be used to display or download the image.&lt;/p&gt;

&lt;p&gt;Usage example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fileInput&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fileInput&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;fileInput&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;change&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;heicFile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;files&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jpegUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;convertHeicToJpeg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;heicFile&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;img&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;img&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;src&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jpegUrl&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;appendChild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;img&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code sets up a file input and displays the converted JPEG image when a HEIC file is selected.&lt;/p&gt;

&lt;h3&gt;
  
  
  JavaScript Example: PDF to PNG Conversion
&lt;/h3&gt;

&lt;p&gt;Next, let's look at converting a PDF page to a PNG image, which can be useful for previews or sharing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;pdfjsLib&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pdfjs-dist&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;convertPdfToImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pdfFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;pageNumber&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;arrayBuffer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;pdfFile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;arrayBuffer&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pdf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;pdfjsLib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getDocument&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;arrayBuffer&lt;/span&gt; &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nx"&gt;promise&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;pdf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getPage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pageNumber&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;scale&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;1.5&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;viewport&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getViewport&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;scale&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;canvas&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;canvas&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;2d&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;viewport&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;viewport&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;renderContext&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;canvasContext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;viewport&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;viewport&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;renderContext&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;promise&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toBlob&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createObjectURL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;image/png&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Explanation:&lt;/p&gt;

&lt;p&gt;We use the pdf.js library to handle PDF parsing and rendering.&lt;br&gt;
The function takes a PDF file and an optional page number.&lt;br&gt;
We create a canvas element to render the PDF page.&lt;br&gt;
The scale factor (1.5) determines the resolution of the output image.&lt;br&gt;
After rendering the page to the canvas, we convert it to a PNG blob.&lt;br&gt;
Finally, we create a URL for the PNG image.&lt;/p&gt;

&lt;p&gt;This conversion is particularly useful for generating thumbnails or previews of PDF documents directly in the browser, improving user experience in document management systems or file sharing platforms.&lt;/p&gt;
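&lt;p&gt;A note on the scale factor: PDF page dimensions are expressed in points (72 per inch), and pdf.js multiplies them by the scale you pass to &lt;code&gt;getViewport&lt;/code&gt;. These small helpers (illustrative only, not part of pdf.js) make the pixel math explicit:&lt;/p&gt;

```javascript
// PDF pages are measured in points (72 points per inch). pdf.js computes
// output pixels as points * scale, so to hit a target DPI, pass dpi / 72
// as the scale. These helpers are a sketch to make that math concrete.
function scaleForDpi(dpi) {
  return dpi / 72;
}

function outputSize(widthPts, heightPts, scale) {
  return {
    width: Math.round(widthPts * scale),
    height: Math.round(heightPts * scale),
  };
}

// A US Letter page (612 x 792 points) rendered at 150 DPI:
console.log(outputSize(612, 792, scaleForDpi(150))); // { width: 1275, height: 1650 }
```

&lt;p&gt;So the hard-coded scale of 1.5 above corresponds to 108 DPI; raising it to 2 or more yields sharper previews at the cost of memory.&lt;/p&gt;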
&lt;h3&gt;
  
  
  JavaScript Example: CSV to JSON Conversion
&lt;/h3&gt;

&lt;p&gt;Converting CSV to JSON is a common task in data processing and analysis. Here's how we can do it in the browser:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Papa&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;papaparse&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;convertCsvToJson&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;csvFile&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Papa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;csvFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;complete&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jsonData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;row&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;obj&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{};&lt;/span&gt;
          &lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fields&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;field&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;field&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;row&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
          &lt;span class="p"&gt;});&lt;/span&gt;
          &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jsonData&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;header&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Explanation:&lt;/p&gt;

&lt;p&gt;We use the Papa Parse library, which is excellent for CSV parsing.&lt;br&gt;
The header: true option tells Papa Parse to use the first row as field names.&lt;br&gt;
Because of this, &lt;code&gt;results.data&lt;/code&gt; is already an array of objects, one per row, which we resolve directly.&lt;br&gt;
The resulting JSON data can be easily manipulated or displayed in the browser.&lt;/p&gt;

&lt;p&gt;This conversion is valuable for data analysis tools, allowing us to upload CSV files and work with the data in a more structured JSON format without server involvement.&lt;/p&gt;
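&lt;p&gt;To see what a header-aware parse does under the hood, here is a dependency-free sketch that handles only the simplest case: comma-separated values with no quoting or embedded commas. Papa Parse handles the full CSV grammar, which is why the library is worth using for real input:&lt;/p&gt;

```javascript
// Minimal header-aware CSV parse for simple input: no quoted fields,
// no embedded commas or newlines. Illustration only; use Papa Parse
// for real-world CSV.
function simpleCsvToJson(csvText) {
  const [headerLine, ...rows] = csvText.trim().split('\n');
  const fields = headerLine.split(',');
  return rows.map((line) => {
    const values = line.split(',');
    const obj = {};
    fields.forEach((field, i) => {
      obj[field] = values[i];
    });
    return obj;
  });
}

console.log(simpleCsvToJson('name,age\nAda,36\nAlan,41'));
// [ { name: 'Ada', age: '36' }, { name: 'Alan', age: '41' } ]
```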
&lt;h3&gt;
  
  
  WebAssembly Example: WebP to PNG Conversion
&lt;/h3&gt;

&lt;p&gt;WebAssembly allows us to use high-performance libraries compiled from languages like C or C++. Here's an example using libwebp:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;WEBP&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@saschazar/wasm-webp&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;webpModule&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;initWebPModule&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;webpModule&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nc"&gt;WEBP&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;convertWebPToPNG&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;webpFile&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;webpModule&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;initWebPModule&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;arrayBuffer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;webpFile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;arrayBuffer&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;webpData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Uint8Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;arrayBuffer&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;height&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;webpModule&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getInfo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;webpData&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rgbaData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;webpModule&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;webpData&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;canvas&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;canvas&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;2d&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;imageData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createImageData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;imageData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rgbaData&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;putImageData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;imageData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toBlob&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createObjectURL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;image/png&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Explanation:&lt;/p&gt;

&lt;p&gt;We use a WebAssembly module wrapping libwebp for high-performance decoding.&lt;br&gt;
The module is initialized asynchronously when needed.&lt;br&gt;
We decode the WebP image to raw RGBA data using the WebAssembly module.&lt;br&gt;
The decoded data is then drawn onto a canvas and converted to a PNG.&lt;/p&gt;

&lt;p&gt;This WebAssembly-based conversion showcases how browsers can leverage native-speed libraries for complex tasks like image processing, providing performance comparable to desktop applications.&lt;/p&gt;
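&lt;p&gt;Since not every environment ships WebAssembly, it can be worth feature-detecting support before loading a wasm-backed codec like the one above, so you can fall back to a slower pure-JS path. A minimal check (a sketch, not tied to any particular library):&lt;/p&gt;

```javascript
// Feature-detect WebAssembly support before loading a wasm module.
// Validates the smallest legal module: the '\0asm' magic bytes plus
// the version-1 header.
function hasWebAssembly() {
  if (typeof WebAssembly !== 'object') {
    return false;
  }
  if (typeof WebAssembly.validate !== 'function') {
    return false;
  }
  const header = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
  return WebAssembly.validate(header);
}

console.log(hasWebAssembly()); // true in any modern browser or Node.js
```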

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Browser-based file conversions let us perform complex file manipulations without leaving the browser or installing additional software. This approach not only enhances the user experience but also reduces server load and addresses privacy concerns by keeping sensitive data on the client side.&lt;br&gt;
As web technologies continue to evolve, we can expect even more powerful and diverse file conversion capabilities directly in the browser.&lt;/p&gt;

&lt;p&gt;I have also implemented a few browser-based conversions on my website &lt;a href="//onlineheicconvert.com"&gt;HEIC Converter&lt;/a&gt;; check it out. Once the website loads, you can even turn off the internet and everything will still work.&lt;/p&gt;

</description>
      <category>browser</category>
      <category>javascript</category>
      <category>typescript</category>
      <category>webassembly</category>
    </item>
    <item>
      <title>Launch Announcement: Optimize Your AI Experience with "Prompt Engineering Patterns" Email Course 🚀</title>
      <dc:creator>Ajit Singh</dc:creator>
      <pubDate>Thu, 06 Jul 2023 12:55:13 +0000</pubDate>
      <link>https://dev.to/ajitsinghkaler/launch-announcement-optimize-your-ai-experience-with-prompt-engineering-patterns-email-course-58lp</link>
      <guid>https://dev.to/ajitsinghkaler/launch-announcement-optimize-your-ai-experience-with-prompt-engineering-patterns-email-course-58lp</guid>
      <description>&lt;h1&gt;
  
  
  Launch Announcement: Optimize Your AI Experience with "Prompt Engineering Patterns" Email Course 🚀
&lt;/h1&gt;

&lt;p&gt;Today, I'm thrilled to introduce the latest addition to the world of AI learning: the "Prompt Engineering Patterns" email course. This in-depth 14-day email course is specifically designed to help you optimize your experience with AI, with a concentrated focus on OpenAI's ChatGPT.&lt;/p&gt;

&lt;p&gt;Here's why this course is an essential tool in your AI toolbox:&lt;/p&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Empower your ChatGPT Interactions:&lt;/strong&gt; Learn how to unlock the full potential of advanced language models like ChatGPT.&lt;/p&gt;

&lt;p&gt;2️⃣ &lt;strong&gt;AI Optimization Techniques:&lt;/strong&gt; Gain practical knowledge on understanding, manipulating, and optimizing AI models to enhance their performance.&lt;/p&gt;

&lt;p&gt;3️⃣ &lt;strong&gt;AI Learning for All:&lt;/strong&gt; The course is crafted to cater to everyone, from beginners to experienced AI professionals.&lt;/p&gt;

&lt;h2&gt;
  
  
  The best part? This comprehensive AI learning resource is absolutely FREE! 🎁
&lt;/h2&gt;

&lt;p&gt;In the evolving digital world, it's not the advent of AI that might challenge us, but the inability to operate and optimize AI that could leave us behind. Therefore, enhancing your AI skills is crucial for both personal and professional growth.&lt;/p&gt;

&lt;p&gt;Eager to dive in? You can sign up for the "Prompt Engineering Patterns" email course right here: &lt;a href="https://prompting.aicygnus.com/"&gt;https://prompting.aicygnus.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to reach out if you need more information or have any queries. Let's venture into the exciting future of AI together!&lt;/p&gt;

&lt;p&gt;I'm looking forward to welcoming you to the course. Let's optimize our AI learning and ensure we stay at the forefront of technological advancement!&lt;/p&gt;

&lt;p&gt;If you've already signed up or have experiences with similar AI courses, please share your thoughts and experiences in the comments below. Let's get a conversation going about AI learning, ChatGPT, and the future of AI.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatgpt</category>
      <category>promptengineering</category>
    </item>
    <item>
      <title>Caching for performance with Amazon DocumentDB and Amazon ElastiCache</title>
      <dc:creator>Ajit Singh</dc:creator>
      <pubDate>Sun, 28 Aug 2022 12:33:00 +0000</pubDate>
      <link>https://dev.to/ajitsinghkaler/move-4j2p</link>
      <guid>https://dev.to/ajitsinghkaler/move-4j2p</guid>
      <description>&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;p&gt;I moved the AWS sample app from using two databases to a single database. Earlier it used ElastiCache for caching and DynamoDB for data persistence; by adding Redis as the primary DB, I removed one file of code and made it much simpler to read and manage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N836V9Z4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mbq928c67660ylya8eht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N836V9Z4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mbq928c67660ylya8eht.png" alt="Image description" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f3MIED7O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ij30wzsxqnlpaf1m1224.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f3MIED7O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ij30wzsxqnlpaf1m1224.png" alt="Image description" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Submission Category:
&lt;/h3&gt;

&lt;p&gt;Minimalism Magicians&lt;/p&gt;

&lt;h3&gt;
  
  
  Language Used
&lt;/h3&gt;

&lt;p&gt;Node.js&lt;/p&gt;

&lt;h3&gt;
  
  
  Link to Code
&lt;/h3&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/ajitsinghkaler"&gt;
        ajitsinghkaler
      &lt;/a&gt; / &lt;a href="https://github.com/ajitsinghkaler/amazon-documentdb-and-amazon-elacticache-caching-for-performance-example"&gt;
        amazon-documentdb-and-amazon-elacticache-caching-for-performance-example
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Caching for performance with Amazon DocumentDB and Amazon ElastiCache
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
Caching for performance with Amazon DocumentDB and Amazon ElastiCache&lt;/h1&gt;
&lt;p&gt;I moved the AWS sample app from using two databases to a single database. Earlier it used ElastiCache for caching and DynamoDB for data persistence; by adding Redis as the primary DB, I removed one file of code and made it much simpler to read and manage.&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://user-images.githubusercontent.com/39260684/187074185-3919221f-f4e5-40f6-8f0c-a16d53f136ec.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zy-zPNk8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://user-images.githubusercontent.com/39260684/187074185-3919221f-f4e5-40f6-8f0c-a16d53f136ec.png" alt="Screenshot from 2022-08-28 17-59-07"&gt;&lt;/a&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://user-images.githubusercontent.com/39260684/187074188-2309b2a2-0f8a-4789-afe1-6c6c7b72b510.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9fQWPmZe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://user-images.githubusercontent.com/39260684/187074188-2309b2a2-0f8a-4789-afe1-6c6c7b72b510.png" alt="Screenshot from 2022-08-28 17-58-41"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
How it works&lt;/h2&gt;
&lt;p&gt;It saves songs to the Redis database on the &lt;code&gt;/cd&lt;/code&gt; endpoint, using the &lt;code&gt;createEntity&lt;/code&gt; and &lt;code&gt;save&lt;/code&gt; functions.&lt;/p&gt;
&lt;p&gt;It can also search songs in the database on the &lt;code&gt;/cd/:title&lt;/code&gt; endpoint, using the Redis &lt;code&gt;search&lt;/code&gt;, &lt;code&gt;where&lt;/code&gt;, and &lt;code&gt;equal&lt;/code&gt; functions.&lt;/p&gt;
&lt;h3&gt;
How the data is stored:&lt;/h3&gt;
&lt;p&gt;Data is stored using the &lt;code&gt;createEntity&lt;/code&gt; function. The schema is the following:&lt;/p&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;{
    title: { type: 'string' },
    singer: { type: 'string' },
    text: { type: 'text' },
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h3&gt;
How the data is accessed:&lt;/h3&gt;
&lt;p&gt;The data is accessed using Redis Search, via the search function and the where…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/ajitsinghkaler/amazon-documentdb-and-amazon-elacticache-caching-for-performance-example"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
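&lt;p&gt;The save-and-search flow described in the README can be illustrated with a tiny in-memory stand-in. This is a sketch only: it mimics the chainable &lt;code&gt;search().where().equal()&lt;/code&gt; shape named above, but it is not the real Redis OM client and involves no Redis server.&lt;/p&gt;

```javascript
// In-memory stand-in for the flow described above:
// save() mirrors createEntity + save on the /cd endpoint,
// search().where(field).equal(value) mirrors the /cd/:title lookup.
class SongRepository {
  constructor() {
    this.songs = [];
  }

  // Store a song object ({ title, singer, text }) and return it.
  save(song) {
    this.songs.push(song);
    return song;
  }

  // Return a chainable query that filters stored songs by field equality.
  search() {
    const songs = this.songs;
    return {
      where(field) {
        return {
          equal(value) {
            return songs.filter((s) => s[field] === value);
          },
        };
      },
    };
  }
}

// Example usage mirroring the two endpoints:
const repo = new SongRepository();
repo.save({ title: 'Imagine', singer: 'John Lennon', text: 'lyrics' });
const matches = repo.search().where('title').equal('Imagine');
```

&lt;p&gt;In the real app, the repository is backed by Redis and the equality query is served by the Redis Search index rather than an in-memory filter.&lt;/p&gt;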





&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Check out &lt;a href="https://redis.io/docs/stack/get-started/clients/#high-level-client-libraries"&gt;Redis OM&lt;/a&gt;, client libraries for working with Redis as a multi-model database.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Use &lt;a href="https://redis.info/redisinsight"&gt;RedisInsight&lt;/a&gt; to visualize your data in Redis.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Sign up for a &lt;a href="https://redis.info/try-free-dev-to"&gt;free Redis database&lt;/a&gt;.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>redishackathon</category>
    </item>
    <item>
      <title>AWS Elastic Beanstalk - Hands On</title>
      <dc:creator>Ajit Singh</dc:creator>
      <pubDate>Wed, 29 Sep 2021 03:18:30 +0000</pubDate>
      <link>https://dev.to/playfulprogramming/aws-elastic-beanstalk-hands-on-1gji</link>
      <guid>https://dev.to/playfulprogramming/aws-elastic-beanstalk-hands-on-1gji</guid>
      <description>&lt;p&gt;To create a stack on AWS bean stalk follow the steps below&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Search for Beanstalk in the search bar after logging in&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkaam95jxtyo4nvrp8e1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkaam95jxtyo4nvrp8e1o.png" alt="alt text" width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on create application on the homepage&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xcuclrlfimfnopukxs8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xcuclrlfimfnopukxs8.png" alt="alt text" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select an application name (I selected &lt;code&gt;test-beanstalk-app&lt;/code&gt;) and the tags you want to attach to this app&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8yf42k7utp2qt4xhsyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8yf42k7utp2qt4xhsyl.png" alt="alt text" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After that, under Platform, select: &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Platform: the language with which you want to deploy your app&lt;/li&gt;
&lt;li&gt;Node.js version: I kept the automatically selected one&lt;/li&gt;
&lt;li&gt;Platform version: the version of the AWS platform you want, as different versions support different Node.js engines&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffduethjv4k96nc0emif3.png" alt="alt text" width="800" height="541"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Application code, select Sample application for a test app. If you want to upload your own code, select Upload your code &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8l6rtiihzdhsus8x9zv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8l6rtiihzdhsus8x9zv.png" alt="alt text" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on Create application and wait a few minutes; after a few logs you will see the following screen&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fum317go8q6u4rpjt5nkm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fum317go8q6u4rpjt5nkm.png" alt="alt text" width="800" height="541"&gt;&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the app link highlighted in the previous image to have a look at your demo app.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0pzek97i9dj9epo50qc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0pzek97i9dj9epo50qc.png" alt="alt text" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After that, in Configuration, you can check all the things that were set up by AWS Elastic Beanstalk&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffixajx8n2xaj38qjmuvt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffixajx8n2xaj38qjmuvt.png" alt="alt text" width="800" height="1130"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can create various environments in the Environments tab, such as develop, production, etc.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8dp0f36utk7sux6tum2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8dp0f36utk7sux6tum2.png" alt="alt text" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, delete your app: go to the Applications tab, select the application, click Actions, and then click Delete application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxesnl92a4mi0adklhmqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxesnl92a4mi0adklhmqr.png" alt="alt text" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;
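&lt;p&gt;For reference, the same walkthrough can be scripted with the EB CLI. This is a hedged sketch: the flags shown are based on the EB CLI as I know it, the region is an assumption, and the app and environment names simply reuse the ones from the steps above.&lt;/p&gt;

```shell
# Hypothetical EB CLI equivalent of the console walkthrough above.
# Assumes the EB CLI is installed and AWS credentials are configured.
APP_NAME="test-beanstalk-app"
ENV_NAME="${APP_NAME}-env"

eb_walkthrough() {
  # Initialize the application with a Node.js platform (region is an assumption)
  eb init "$APP_NAME" --platform node.js --region us-east-1
  # Create an environment running the AWS sample application
  eb create "$ENV_NAME" --sample
  # Open the deployed app in the browser
  eb open
  # Clean up: terminate the environment when done
  eb terminate "$ENV_NAME" --force
}
```

&lt;p&gt;The CLI route is handy once you move past the console demo, because the same commands slot directly into a CI pipeline.&lt;/p&gt;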

&lt;p&gt;We are done with creating an application on Elastic Beanstalk. Next, we will study how to create your own pipelines in AWS.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
