<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Prabusah</title>
    <description>The latest articles on DEV Community by Prabusah (@prabusah_53).</description>
    <link>https://dev.to/prabusah_53</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F510618%2F8c57122a-6f13-4c60-8409-5f1d66674227.jpg</url>
      <title>DEV Community: Prabusah</title>
      <link>https://dev.to/prabusah_53</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/prabusah_53"/>
    <language>en</language>
    <item>
      <title>AWS Parameter and Secrets Lambda extension - Node.js example</title>
      <dc:creator>Prabusah</dc:creator>
      <pubDate>Sat, 26 Nov 2022 15:18:59 +0000</pubDate>
      <link>https://dev.to/prabusah_53/aws-parameter-and-secrets-lambda-extension-nodejs-example-37h0</link>
      <guid>https://dev.to/prabusah_53/aws-parameter-and-secrets-lambda-extension-nodejs-example-37h0</guid>
      <description>&lt;h3&gt;
  
  
  TL;DR
&lt;/h3&gt;

&lt;p&gt;This blog walks through how to access values stored in AWS Systems Manager Parameter Store from Node.js code via the AWS Parameters and Secrets Lambda extension.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Lambda extension:
&lt;/h3&gt;

&lt;p&gt;AWS releases Lambda extensions as Layers to make developers' lives easier by helping them integrate Lambda with other AWS services (like AppConfig, AWS Systems Manager Parameter Store, etc.).&lt;/p&gt;

&lt;h3&gt;
  
  
  How Lambda extension works:
&lt;/h3&gt;

&lt;p&gt;The Lambda lifecycle has 3 phases: init, invoke and shutdown.&lt;br&gt;
&lt;em&gt;Init phase&lt;/em&gt; - Combination of Extension INIT, Runtime INIT and Function INIT. Extension setup happens during Extension INIT.&lt;br&gt;
&lt;em&gt;Invoke phase&lt;/em&gt; - The extension exposes an HTTP endpoint that can be called from the Lambda function runtime.&lt;br&gt;
&lt;em&gt;Shutdown phase&lt;/em&gt; - The extension runtime shuts down along with the Lambda function runtime.&lt;/p&gt;
&lt;h4&gt;
  
  
  Why use AWS Systems Manager Parameter Store:
&lt;/h4&gt;

&lt;p&gt;To store configuration values such as connection details, credentials or keys.&lt;/p&gt;
&lt;h4&gt;
  
  
  How AWS Parameters and Secrets Lambda extension works:
&lt;/h4&gt;

&lt;p&gt;The extension provides an in-memory cache for parameters and secrets. When the Lambda function requests a parameter, the extension serves it from the local cache if available. If the data is not cached or is stale, the extension fetches the value from the AWS Systems Manager service. This avoids repeated aws-sdk initialization and API calls, reduces cost and improves application performance.&lt;/p&gt;
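&lt;p&gt;The caching behaviour can be pictured with a small TTL-cache sketch. The names and the 300-second TTL below are illustrative assumptions, not the extension's documented internals:&lt;/p&gt;

```javascript
// Sketch of a TTL cache like the one the extension keeps in memory.
// Names and the 300-second TTL are illustrative assumptions, not the
// extension's documented internals.
const cache = new Map();
const TTL_MS = 300 * 1000;

function cachedGet(name, now, fetchFn) {
  const entry = cache.get(name);
  if (entry !== undefined) {
    if (now - entry.storedAt > TTL_MS) {
      cache.delete(name); // stale entry: fall through and re-fetch
    } else {
      return entry.value; // cache hit: no Parameter Store API call
    }
  }
  const value = fetchFn(name); // stands in for the real SSM fetch
  cache.set(name, { value: value, storedAt: now });
  return value;
}
```

&lt;p&gt;Repeated requests within the TTL are served from memory; only the first request (or a request after expiry) reaches Parameter Store.&lt;/p&gt;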
&lt;h4&gt;
  
  
  Nodejs example:
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const http = require('http');

let getParameterValue = function(paramName) {
    const headers = {
        'X-Aws-Parameters-Secrets-Token': process.env.AWS_SESSION_TOKEN
    }

    let options = {
        host: 'localhost',
        port: '2773',
        path: `/systemsmanager/parameters/get?name=${paramName}`,
        method: 'GET',
        headers: headers
    }

    return new Promise((resolve, reject) =&amp;gt; {
        const req = http.get(options, (res) =&amp;gt; {
            if (res.statusCode &amp;lt; 200 || res.statusCode &amp;gt;= 300) {
                return reject(new Error('statusCode=' + res.statusCode));
            }
            var body = [];
            res.on('data', function(chunk) {
                body.push(chunk);
            });
            res.on('end', function() {
                resolve(Buffer.concat(body).toString());
            });
        });
        req.on('error', (e) =&amp;gt; {
            reject(e);
        });
        req.end();
    });
};

exports.handler = async (event) =&amp;gt; {
    let pass = await getParameterValue('/service/password');
    let passValue = JSON.parse(pass).Parameter.Value;
    //passValue has the password value
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Code walkthrough:
&lt;/h4&gt;

&lt;p&gt;The AWS Parameters and Secrets Lambda extension exposes an HTTP endpoint on localhost port 2773 to the Lambda function runtime. AWS_SESSION_TOKEN is a built-in environment variable populated by AWS internally. If this token is not passed to the HTTP endpoint in the X-Aws-Parameters-Secrets-Token header, a 401 error will occur.&lt;/p&gt;
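&lt;p&gt;The request options from the example can be built by a small helper. Port 2773 is the extension's default HTTP port; the encodeURIComponent call is an extra safety step added here, not present in the original snippet:&lt;/p&gt;

```javascript
// Builds the request options used in the example above.
// Port 2773 is the extension's default HTTP port; encodeURIComponent
// is an extra safety step added here, not in the original snippet.
function buildExtensionOptions(paramName, sessionToken) {
  return {
    host: 'localhost',
    port: '2773',
    path: `/systemsmanager/parameters/get?name=${encodeURIComponent(paramName)}`,
    method: 'GET',
    headers: { 'X-Aws-Parameters-Secrets-Token': sessionToken }
  };
}
```

&lt;p&gt;In the Lambda handler you would call it as buildExtensionOptions(paramName, process.env.AWS_SESSION_TOKEN).&lt;/p&gt;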
&lt;h4&gt;
  
  
  Parameter store Securestring value retrieval using extension:
&lt;/h4&gt;

&lt;p&gt;Just append '&lt;strong&gt;&amp;amp;withDecryption=true&lt;/strong&gt;' to the &lt;em&gt;path&lt;/em&gt; field of the options object, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let options = {
        host: 'localhost',
        path: `/systemsmanager/parameters/get?name=${paramName}&amp;amp;withDecryption=true`,
        port: '2773', 
        headers: headers
        method: 'GET',
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Image by &lt;a href="https://pixabay.com/users/radekkulupa-1045852/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=1380134"&gt;Radosław Kulupa&lt;/a&gt; from &lt;a href="https://pixabay.com//?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=1380134"&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

</description>
      <category>lambdaextension</category>
      <category>parameterstore</category>
      <category>lambda</category>
      <category>extension</category>
    </item>
    <item>
      <title>AWS Lambda (Node.js) calling SOAP service</title>
      <dc:creator>Prabusah</dc:creator>
      <pubDate>Tue, 22 Nov 2022 16:30:30 +0000</pubDate>
      <link>https://dev.to/prabusah_53/aws-lambda-nodejs-calling-soap-service-28m3</link>
      <guid>https://dev.to/prabusah_53/aws-lambda-nodejs-calling-soap-service-28m3</guid>
      <description>&lt;h3&gt;
  
  
  REST service:
&lt;/h3&gt;

&lt;p&gt;An architectural style that uses HTTP for exchanging information between systems in several formats such as JSON, XML, plain text, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  SOAP service:
&lt;/h3&gt;

&lt;p&gt;A protocol for exchanging information between systems over the internet using only XML.&lt;/p&gt;

&lt;h3&gt;
  
  
  Requirement:
&lt;/h3&gt;

&lt;p&gt;Calling SOAP services from Lambda.&lt;br&gt;
We'll use this npm package: &lt;a href="https://www.npmjs.com/package/soap" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/soap&lt;/a&gt;&lt;br&gt;
With this npm package it's a 3-step process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a soap client&lt;/li&gt;
&lt;li&gt;Call SOAP method by passing JSON input. &lt;/li&gt;
&lt;li&gt;Convert XML output to JSON&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;
  
  
  Create SOAP client:
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const soap = require('soap');
let client = await soap.createClientAsync(url);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;em&gt;url&lt;/em&gt; - is the wsdl url. For example: &lt;a href="http://sampledomainname/services/sampleserice?wsdl" rel="noopener noreferrer"&gt;http://sampledomainname/services/sampleserice?wsdl&lt;/a&gt; (just for representation - this wsdl url does not work).&lt;/p&gt;

&lt;p&gt;The wsdl doc has all the details. But if, like me, you understand JSON data best, use &lt;em&gt;client.describe()&lt;/em&gt; to get the soap service details in JSON format.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;console.log(client.describe());&lt;br&gt;
Output log below:&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "SomeService": {
    "SomeServicePort": {
      "someServiceMethod": {
        "input": {
          "serviceRequest": {
            "field1": "xsd:string"
          }
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Convert JSON to XML (pass input to SOAP)&lt;/em&gt;: the soap npm package takes care of this conversion. Pass JSON, and it converts it to XML.&lt;/p&gt;

&lt;h4&gt;
  
  
  Call SOAP method by passing JSON input:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client.SomeService.SomeServicePort_http.someServiceMethod(args, function(err, result) {
    if(err) console.error("Error - ", err); 
    console.log(result); // is a javascript object
}, { postProcess: function(_xml) { 
    console.log('XML - ', _xml); //this prints the input XML
    return _xml.replace('test', 'newtest'); // any mapping or string conversion, add here.
}});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Consider this args JSON as input to soap method&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;let args = {&lt;br&gt;
"one":"1"&lt;br&gt;
}&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The args JSON would be converted by the soap npm package into the XML below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version=\"1.0\" encoding=\"utf-8\"?&amp;gt;
 &amp;lt;soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema- instance\"&amp;gt;
    &amp;lt;soap:Body&amp;gt;
        &amp;lt;ns1:someServiceMethod&amp;gt;
            &amp;lt;ns1:one&amp;gt;1&amp;lt;/ns1:one&amp;gt;
        &amp;lt;/ns1:someServiceMethod&amp;gt;
    &amp;lt;/soap:Body&amp;gt;
 &amp;lt;/soap:Envelope&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we discuss converting the response XML to JSON (the result of the soap call), let's see how to call the soap method using async/await.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let resultArr = await client.someServiceMethodAsync(args);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just add the suffix &lt;em&gt;Async&lt;/em&gt; to the soap method you wish to call and it returns a promise. Note the above call resolves to an array with 4 elements in both success and failure scenarios.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Array element 0: result is a javascript object. &lt;br&gt;
Array element 1: rawResponse is the raw xml response string.&lt;br&gt;
Array element 2: soapHeader is the response soap header as a javascript object (contains WS Security info-so let us not log this!). &lt;br&gt;
Array element 3: rawRequest is the raw xml request string.&lt;/p&gt;
&lt;/blockquote&gt;
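&lt;p&gt;With the four-element shape above, destructuring keeps the call site readable. The sketch below uses a hand-made stand-in array, not a real soap response:&lt;/p&gt;

```javascript
// The Async call resolves to [result, rawResponse, soapHeader, rawRequest].
// resultArr below is a hand-made stand-in for a real soap response.
function unpackSoapResult(resultArr) {
  const [result, rawResponse, soapHeader, rawRequest] = resultArr;
  // Deliberately drop soapHeader from what we return and log:
  // it may carry WS-Security material.
  return { result: result, rawResponse: rawResponse, rawRequest: rawRequest };
}
```

&lt;p&gt;Returning only the safe fields makes it harder to accidentally log the security header.&lt;/p&gt;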

&lt;p&gt;So array element 0 already has the XML response converted to JSON. Let's put it all together below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const soap= require('soap');
let client = await soap.createClientAsync("http://sampledomainname/services/sampleserice?wsdl");
let resultArr= await client.someServiceMethodAsync(args); console.log("Response JSON-, resultArr[0]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Secured Soap Call:
&lt;/h3&gt;

&lt;p&gt;Most enterprise soap services require authentication; in the section below let's discuss WSSecurity.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;let wsSecurity = new soap.WSSecurity('username', 'password', options);&lt;br&gt;
client.setSecurity(wsSecurity);&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Putting it all together:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const soap = require('soap');
let client = await soap.createClientAsync('http://sampledomainname/services/sampleserice?wsdl'); 
let options = {
  hasNonce: true,
  actor: 'actor'
}
let wsSecurity = new soap.WSSecurity('username', 'password', options); 
client.setSecurity(wsSecurity); 
let resultArr = await client.someServiceMethodAsync(args); 
console.log('Response JSON-', resultArr[0]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Image by &lt;a href="https://pixabay.com/users/splitshire-364019/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=407081" rel="noopener noreferrer"&gt;SplitShire&lt;/a&gt; from &lt;a href="https://pixabay.com//?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=407081" rel="noopener noreferrer"&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>github</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Amazon EBS - Primer</title>
      <dc:creator>Prabusah</dc:creator>
      <pubDate>Fri, 18 Nov 2022 17:10:40 +0000</pubDate>
      <link>https://dev.to/prabusah_53/amazon-ebs-primer-5ao6</link>
      <guid>https://dev.to/prabusah_53/amazon-ebs-primer-5ao6</guid>
      <description>&lt;h3&gt;
  
  
  Amazon Elastic Block Storage:
&lt;/h3&gt;

&lt;p&gt;Block storage is raw storage on devices such as hard disk drives (HDDs), solid state drives (SSDs) and Non-Volatile Memory Express (NVMe) drives, exposed as disks or volumes.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;HDD&lt;/em&gt;: Legacy technology. Data stored on spinning disks. Read/write speed of 80 MB/s to 160 MB/s.&lt;br&gt;
&lt;em&gt;SSD&lt;/em&gt;: Data stored in integrated circuits. Read/write speed of 600 MB/s.&lt;br&gt;
&lt;em&gt;NVMe&lt;/em&gt;: Data stored in integrated circuits. Read/write speed of 3.5 GB/s.&lt;/p&gt;

&lt;h3&gt;
  
  
  Blocks and Volumes:
&lt;/h3&gt;

&lt;p&gt;A disk or volume is formatted into contiguous blocks. &lt;em&gt;Block&lt;/em&gt; - a fixed storage unit that stores data. &lt;em&gt;Volume&lt;/em&gt; - block storage devices can be combined into larger logical units called volumes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Components of Block storage:
&lt;/h3&gt;

&lt;p&gt;Block storage involves 3 components: the block storage itself, a compute system and an operating system (OS). The block storage is attached to the compute system; the OS identifies the block storage and formats it to make it ready for use.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS block storage:
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Instance storage:&lt;/em&gt; Ephemeral (temporary) storage that is non-persistent and is lost when the associated EC2 instance is terminated. Use it for buffers, caches or other temporary content.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Amazon EBS storage:&lt;/em&gt; Persistent store. If an EC2 instance goes down, the volume and its data remain available to attach to a different EC2 instance.&lt;/p&gt;

&lt;p&gt;EBS is a block storage service designed for use with Amazon EC2. EBS volumes suit file systems, databases or any application that requires access to raw, block-level storage. Best for random reads/writes (databases) and long sequential reads/writes as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Snapshots:
&lt;/h3&gt;

&lt;p&gt;Point-in-time copies of data in EBS volumes, backed up from EBS to S3. Snapshots are incremental: only blocks on the EBS volume that changed after the most recent snapshot are saved.&lt;br&gt;
When a snapshot is deleted, only the data unique to that snapshot is removed.&lt;br&gt;
Snapshots of encrypted volumes are automatically encrypted. Snapshots can be copied across Regions.&lt;/p&gt;

&lt;h3&gt;
  
  
  EBS Use Cases:
&lt;/h3&gt;

&lt;p&gt;Host Microsoft SharePoint, SAP, Exchange Server, etc. Bring your relational/non-relational DB onto EBS attached to EC2. Bring your own file system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Data Lifecycle Manager (DLM):
&lt;/h3&gt;

&lt;p&gt;DLM is used to automate the creation, retention and deletion of snapshots to back up your EBS volumes.&lt;/p&gt;

&lt;h3&gt;
  
  
  EBS Availability:
&lt;/h3&gt;

&lt;p&gt;AWS automatically replicates an EBS volume within its AZ to prevent failure of a single hardware component. But what if that AZ itself goes down? It is recommended to create snapshots of EBS volumes frequently. Snapshots are replicated across all AZs within a Region and can also be copied to other Regions.&lt;/p&gt;

&lt;h3&gt;
  
  
  EBS Types:
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;SSD&lt;/em&gt;: (gp2, gp3, io1, io2) and &lt;em&gt;HDD&lt;/em&gt;: (st1, sc1).&lt;br&gt;
&lt;em&gt;IOPS&lt;/em&gt;: input/output operations per second.&lt;br&gt;
&lt;em&gt;gp&lt;/em&gt;: General Purpose SSD.&lt;br&gt;
&lt;em&gt;io&lt;/em&gt;: Provisioned IOPS SSD.&lt;br&gt;
&lt;em&gt;st1&lt;/em&gt;: Throughput Optimized HDD.&lt;br&gt;
&lt;em&gt;sc1&lt;/em&gt;: Cold HDD.&lt;/p&gt;

&lt;h3&gt;
  
  
  EBS Pricing:
&lt;/h3&gt;

&lt;p&gt;Pay for Provisioned volume size, IOPS and throughput performance. &lt;br&gt;
&lt;em&gt;Snapshot Pricing:&lt;/em&gt; Actual amount of storage space consumed (not provisioned).&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic Architecture:
&lt;/h3&gt;

&lt;p&gt;An EC2 instance can have multiple EBS volumes (of different volume types as well) attached. The EC2 instance and its EBS volumes must reside in the same AZ. Each EBS volume's snapshots are stored in S3 within the same Region where the volume resides.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advanced Architecture - Multi Attach:
&lt;/h3&gt;

&lt;p&gt;Multiple EC2 instances connected to a single EBS volume. Data consistency is to be managed by your application or OS environment. Multi-Attach is supported only with Provisioned IOPS SSD (io1, io2) EBS volume types. The EC2 instances and the EBS volume must reside in the same AZ. Each EBS volume's snapshots are stored in S3 within the same Region where the volume resides.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advanced Architecture-Striped volumes:
&lt;/h3&gt;

&lt;p&gt;Multiple EBS volumes operate as a single logical volume attached to a single EC2 instance. The EC2 instance and the EBS volumes must reside in the same AZ. Each EBS volume's snapshots are stored in S3 within the same Region where the volume resides.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  IAM:
&lt;/h4&gt;

&lt;p&gt;Policies are created to allow users, groups and roles to access EC2 and EBS resources.&lt;/p&gt;

&lt;h4&gt;
  
  
  Encryption:
&lt;/h4&gt;

&lt;p&gt;Encryption occurs on the servers that host EC2 instances; both data at rest and data in transit between EC2 and EBS are encrypted. Both encrypted and unencrypted volumes can be attached to an EC2 instance.&lt;br&gt;
Data inside the EBS volume, data moving between EBS and the EC2 instance, snapshots created from the EBS volume and EBS volumes created from snapshots - all can be encrypted.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Backup:
&lt;/h3&gt;

&lt;p&gt;Deploy backup policies across AWS accounts in an Organization for services like EC2, EBS, RDS, etc. Like snapshots, AWS Backup also stores EBS data backups in an S3 bucket.&lt;br&gt;
AWS Backup backs up many services including EBS, whereas a snapshot backs up only EBS volume data. AWS Backup offers more features compared to snapshots.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Compute Optimizer:
&lt;/h3&gt;

&lt;p&gt;EBS sends data points (metrics) to Amazon CloudWatch. 1-minute metrics. Compute Optimizer uses Amazon CloudWatch metrics to analyze your EBS volumes and provide recommendations to assist you in optimizing your Amazon EBS costs.&lt;br&gt;
(CloudWatch notifies events based on EBS changes like creation of volumes or snapshot etc.)&lt;/p&gt;

&lt;p&gt;Image by &lt;a href="https://pixabay.com/users/greyerbaby-2323/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=447703"&gt;lisa runnels&lt;/a&gt; from &lt;a href="https://pixabay.com//?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=447703"&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ebs</category>
      <category>blockstorage</category>
      <category>ebsvolume</category>
      <category>ebssnapshot</category>
    </item>
    <item>
      <title>Amazon S3 - Pricing</title>
      <dc:creator>Prabusah</dc:creator>
      <pubDate>Sat, 12 Nov 2022 13:19:17 +0000</pubDate>
      <link>https://dev.to/prabusah_53/amazon-s3-pricing-8o2</link>
      <guid>https://dev.to/prabusah_53/amazon-s3-pricing-8o2</guid>
      <description>&lt;h3&gt;
  
  
  Storage:
&lt;/h3&gt;

&lt;p&gt;Pay for storing data based on object size, how long it is stored during the month and the storage class; a monitoring &amp;amp; automation fee per object for S3 Intelligent-Tiering; and per-request ingest fees when PUT, COPY or lifecycle rules move data into any S3 storage class.&lt;/p&gt;

&lt;h3&gt;
  
  
  Requests and data retrieval
&lt;/h3&gt;

&lt;p&gt;Pay for API calls such as GET and LIST, including those made while browsing the Amazon S3 console.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data transfer
&lt;/h3&gt;

&lt;p&gt;Charged for bandwidth into and out of S3,&lt;br&gt;
except:&lt;br&gt;
Data transferred in from the internet.&lt;br&gt;
Data transferred between S3 buckets in the same Region.&lt;br&gt;
Data transferred out from an S3 bucket to any AWS service in the same Region.&lt;br&gt;
Data transferred out to CloudFront.&lt;/p&gt;

&lt;h3&gt;
  
  
  Management
&lt;/h3&gt;

&lt;p&gt;Pay for Inventory, Analytics, S3 Storage Lens, S3 Batch Operations, S3 Select and object tagging.&lt;br&gt;
Pay for Replication and Object Lambda.&lt;/p&gt;

&lt;p&gt;Image by &lt;a href="https://pixabay.com/users/jaho-1265252/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=4354598"&gt;Ja!&lt;/a&gt; from &lt;a href="https://pixabay.com//?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=4354598"&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

</description>
      <category>s3pricing</category>
      <category>s3cost</category>
      <category>s3costestimation</category>
      <category>s3pricecalculation</category>
    </item>
    <item>
      <title>Amazon S3 - Managing</title>
      <dc:creator>Prabusah</dc:creator>
      <pubDate>Sat, 12 Nov 2022 07:26:28 +0000</pubDate>
      <link>https://dev.to/prabusah_53/amazon-s3-managing-50d3</link>
      <guid>https://dev.to/prabusah_53/amazon-s3-managing-50d3</guid>
      <description>&lt;h2&gt;
  
  
  AWS S3 Managing:
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Managing object tagging:
&lt;/h3&gt;

&lt;p&gt;Tagging is used to manage/search/filter resources (including S3 objects).&lt;br&gt;
10 tags per object; 50 tags per bucket.&lt;br&gt;
Tag key - up to 128 characters; tag value - up to 256 characters.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tagging in Access control&lt;/em&gt; - In IAM policy use tag and its value as condition for fine-grained permission to allow/deny actions. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tagging in Lifecycle management&lt;/em&gt; - use tag to filter subset of objects to apply rule.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tagging in Replication&lt;/em&gt; - filter by tag&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon S3 Inventory:
&lt;/h3&gt;

&lt;p&gt;Get a list of objects and corresponding metadata from a bucket or prefix on a daily/weekly basis. S3 Inventory is a less costly alternative to the List API. Use it to audit the replication/encryption/compliance/regulatory status of objects.&lt;br&gt;
It may take up to 48 hrs for inventory to deliver the first report. Inventory can be queried using Amazon Athena. Create a table, load the inventory and query.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon S3 Select:
&lt;/h3&gt;

&lt;p&gt;Use SQL statements to filter the contents of S3 objects and retrieve a subset of data instead of downloading the entire object.&lt;br&gt;
Reduces transfer costs.&lt;br&gt;
Works only on CSV, JSON and Apache Parquet objects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon S3 Event Notification:
&lt;/h3&gt;

&lt;p&gt;Notifies on object creation regardless of the API used (PUT, POST, COPY). Similarly, event notifications can be sent for DELETE, replication events, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon S3 Batch Operations:
&lt;/h3&gt;

&lt;p&gt;Perform a single API action (Copy, Delete, tag-related, etc.) or invoke a Lambda function on a list of S3 objects you specify (can be billions of objects).&lt;br&gt;
Auto retries on failure.&lt;br&gt;
S3 tracks progress, notifies and stores a report of actions.&lt;br&gt;
Fully managed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon CloudWatch:
&lt;/h2&gt;

&lt;p&gt;CloudWatch metrics can show which objects are accessed the most or the least and who is accessing them.&lt;/p&gt;

&lt;p&gt;Amazon S3 server access logging provides detailed request information about your customer base.&lt;/p&gt;

</description>
      <category>s3inventory</category>
      <category>s3select</category>
      <category>s3batch</category>
      <category>s3tagging</category>
    </item>
    <item>
      <title>Amazon S3 - Business continuity and Disaster recovery</title>
      <dc:creator>Prabusah</dc:creator>
      <pubDate>Fri, 11 Nov 2022 18:31:41 +0000</pubDate>
      <link>https://dev.to/prabusah_53/amazon-s3-business-continuity-and-disaster-recovery-2ono</link>
      <guid>https://dev.to/prabusah_53/amazon-s3-business-continuity-and-disaster-recovery-2ono</guid>
      <description>&lt;h3&gt;
  
  
  Business Continuity:
&lt;/h3&gt;

&lt;p&gt;Keeps business functioning despite significant disruptive events. &lt;/p&gt;

&lt;h3&gt;
  
  
  Disaster Recovery:
&lt;/h3&gt;

&lt;p&gt;Recovering from a natural or human-made event that causes an impact to the business.&lt;/p&gt;

&lt;h3&gt;
  
  
  S3 for Business Continuity and Disaster Recovery:
&lt;/h3&gt;

&lt;p&gt;S3 provides 99.999999999% (11 nines) durability. Objects are stored across a minimum of 3 AZs (except the S3 One Zone-IA storage class).&lt;/p&gt;
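&lt;p&gt;Eleven nines can be made concrete with a quick expected-loss calculation. This is illustrative arithmetic based on the published durability figure, not an AWS formula:&lt;/p&gt;

```javascript
// 99.999999999% durability implies an average annual loss probability
// of 1e-11 per object. Illustrative arithmetic, not an AWS formula.
const ANNUAL_LOSS_PROBABILITY = 1e-11;

function expectedObjectsLostPerYear(objectCount) {
  return objectCount * ANNUAL_LOSS_PROBABILITY;
}
```

&lt;p&gt;Storing 10,000,000 objects, you would expect to lose a single object roughly once every 10,000 years on average.&lt;/p&gt;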

&lt;h3&gt;
  
  
  S3 Object Lock:
&lt;/h3&gt;

&lt;p&gt;Makes data immutable (a regulatory requirement). Replication - increases availability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Versioning:
&lt;/h3&gt;

&lt;p&gt;Multiple variants of object.&lt;br&gt;
Recovery from unintended user actions and application failures.&lt;br&gt;
Overwrite creates new version. Deletion creates a delete marker instead of removing object.&lt;br&gt;
Default - unversioned. But once enabled, can't return to unversioned state. Versioning can be suspended to stop accruing new versions.&lt;/p&gt;

&lt;p&gt;Even in the unversioned (default) state, all objects have a version ID (null). Upon enabling versioning, existing objects are unchanged, i.e. their version ID remains the same (null). Deleting an object (without a version ID) sets a delete marker, and when we retrieve the current version a 404 is returned.&lt;/p&gt;

&lt;h3&gt;
  
  
  Removing delete markers:
&lt;/h3&gt;

&lt;p&gt;Delete the delete marker by specifying the object key + the marker's versionId.&lt;/p&gt;

&lt;h3&gt;
  
  
  S3 Lifecycle management:
&lt;/h3&gt;

&lt;p&gt;Transition actions define when objects transition to another S3 storage class.&lt;br&gt;
Expiration actions define when objects expire; with versioning enabled, S3 expires objects by adding a delete marker.&lt;br&gt;
Best practice: move non-current versions to a Glacier class, then delete them after 1 year.&lt;/p&gt;

&lt;h3&gt;
  
  
  S3 Object lock:
&lt;/h3&gt;

&lt;p&gt;Only in versioned buckets.&lt;br&gt;
WORM - Write Once Read Many model. Prevents objects from being deleted/overwritten for a fixed time or indefinitely.&lt;br&gt;
&lt;em&gt;Retention period&lt;/em&gt; - time during which an object can't be overwritten/deleted. &lt;em&gt;Legal holds&lt;/em&gt; - no expiration date.&lt;br&gt;
Configure the bucket for Object Lock; both can be applied at the object level.&lt;/p&gt;

&lt;p&gt;Versioning is automatically enabled when you create a bucket with S3 Object Lock. S3 Object Lock protection is also retained when objects move between storage classes during lifecycle transitions.&lt;/p&gt;

&lt;p&gt;For indefinite locking, use legal holds (because they have no retention period). Apply/change Object Lock settings on even billions of objects using S3 Batch Operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Object Lock retention modes:
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Compliance mode&lt;/em&gt; - Immutable until the retention period ends. No one can delete/overwrite the object, including the root user, and the retention period cannot be shortened. You would have to delete the entire AWS account to delete the object.&lt;br&gt;
&lt;em&gt;Governance mode&lt;/em&gt; - Specific users are given permission to alter retention settings/delete objects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Object Replication:
&lt;/h3&gt;

&lt;p&gt;Replicate all objects or a subset (use prefixes/tags).&lt;br&gt;
Replicates objects into the same storage class as the source object by default (but a different storage class can be specified for replicas).&lt;br&gt;
By default, replication copies tags and Object Lock settings. 99.99% of objects are replicated in seconds.&lt;/p&gt;

&lt;h3&gt;
  
  
  S3 Multi-Region Access Points:
&lt;/h3&gt;

&lt;p&gt;Request --&amp;gt; Multi-Region Access Points --&amp;gt; Request routed to less latency (closest) region (enable cross-region replication)&lt;br&gt;
Region1&lt;br&gt;
Region2&lt;/p&gt;

</description>
      <category>s3series</category>
      <category>businesscontinuity</category>
      <category>disasterrecovery</category>
    </item>
    <item>
      <title>Amazon S3 - Scaling</title>
      <dc:creator>Prabusah</dc:creator>
      <pubDate>Fri, 11 Nov 2022 17:30:33 +0000</pubDate>
      <link>https://dev.to/prabusah_53/amazon-s3-scaling-5572</link>
      <guid>https://dev.to/prabusah_53/amazon-s3-scaling-5572</guid>
      <description>&lt;h3&gt;
  
  
  Amazon S3 Performance Optimization
&lt;/h3&gt;

&lt;p&gt;S3 Bucket Prefixes - Scale for high request rates. Amazon S3 supports up to 3,500 PUT/POST/DELETE and 5,500 GET transactions per second (TPS) per partitioned prefix.&lt;/p&gt;

&lt;p&gt;Insurance-bucket/Auto = 3,500 PUT/POST/DELETE and 5,500 GET TPS.&lt;/p&gt;

&lt;p&gt;Insurance-bucket/Life = 3,500 PUT/POST/DELETE and 5,500 GET TPS.&lt;/p&gt;

&lt;p&gt;Requests to a prefix that exceed the supported request rate result in HTTP 503 errors.&lt;/p&gt;

&lt;p&gt;Avoid date-based prefixes, as daily business growth might then concentrate requests on a single prefix and exceed the supported request rate.&lt;/p&gt;
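&lt;p&gt;Because the limits apply per partitioned prefix, aggregate capacity grows linearly with the number of prefixes, which a small sketch shows (limits from the text; prefix names are illustrative):&lt;/p&gt;

```javascript
// Aggregate request capacity scales linearly with partitioned prefixes.
// Per-prefix limits from the text: 3,500 writes and 5,500 reads per second.
const WRITE_TPS_PER_PREFIX = 3500;
const READ_TPS_PER_PREFIX = 5500;

function bucketCapacity(prefixes) {
  return {
    writeTps: prefixes.length * WRITE_TPS_PER_PREFIX,
    readTps: prefixes.length * READ_TPS_PER_PREFIX
  };
}
```

&lt;p&gt;For the insurance bucket above, bucketCapacity(['Auto', 'Life']) gives 7,000 write and 11,000 read TPS in total.&lt;/p&gt;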

&lt;h3&gt;
  
  
  Scaling connections horizontally:
&lt;/h3&gt;

&lt;p&gt;S3 is a large-scale distributed system, so we can make parallel requests; there is no limit on the number of connections made to a bucket. Write/retrieve data from S3 in parallel using multipart uploads.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Amazon CloudFront:
&lt;/h3&gt;

&lt;p&gt;Browser -&amp;gt;&amp;gt;&amp;gt; CF-&amp;gt;&amp;gt;&amp;gt; S3&lt;/p&gt;

</description>
      <category>s3performance</category>
      <category>s3series</category>
      <category>s3scaling</category>
      <category>horizontalscaling</category>
    </item>
    <item>
      <title>Amazon S3 - Data Lake</title>
      <dc:creator>Prabusah</dc:creator>
      <pubDate>Fri, 11 Nov 2022 17:08:39 +0000</pubDate>
      <link>https://dev.to/prabusah_53/amazon-s3-as-data-lake-3dfh</link>
      <guid>https://dev.to/prabusah_53/amazon-s3-as-data-lake-3dfh</guid>
      <description>&lt;h2&gt;
  
  
  Data Lake:
&lt;/h2&gt;

&lt;p&gt;A centralized repository that allows you to migrate, store and manage all structured/unstructured data at unlimited scale. Once centralized, we can extract value and gain insights from the data through analytics and ML. This makes the data available to more users across more lines of business, enabling them to get the insights they need.&lt;br&gt;
S3 is ideal for a Data Lake as it provides unlimited scalability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Cataloging:
&lt;/h2&gt;

&lt;p&gt;On S3 PUT -&amp;gt; use a Lambda function to extract metadata -&amp;gt; store it in DynamoDB and Elasticsearch -&amp;gt; then query the data.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Glue:
&lt;/h2&gt;

&lt;p&gt;Fully managed ETL service. It can organize, cleanse, validate and format data.&lt;/p&gt;

&lt;h2&gt;
  
  
  In-Place data querying:
&lt;/h2&gt;

&lt;p&gt;Without provisioning and managing servers/clusters we can transform/query the data. So no need to copy and load data into separate analytics platforms. Athena and Redshift Spectrum provide in-place querying of S3 data lake.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon Athena:
&lt;/h2&gt;

&lt;p&gt;Serverless, interactive query service that analyzes data directly in S3 using SQL. Pay for the data scanned while running queries. Integrates with QuickSight for easy visualization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Redshift Spectrum:
&lt;/h2&gt;

&lt;p&gt;Suited to more complex queries and large numbers of data lake users running concurrent workloads.&lt;/p&gt;

</description>
      <category>s3lake</category>
      <category>datalake</category>
      <category>s3series</category>
      <category>s3centralrepository</category>
    </item>
    <item>
      <title>Amazon S3 Storage Classes</title>
      <dc:creator>Prabusah</dc:creator>
      <pubDate>Tue, 01 Nov 2022 14:06:46 +0000</pubDate>
      <link>https://dev.to/prabusah_53/amazon-s3-storage-classes-34fo</link>
      <guid>https://dev.to/prabusah_53/amazon-s3-storage-classes-34fo</guid>
      <description>&lt;p&gt;Amazon S3 storage classes are broadly classified as below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frequently accessed tier&lt;/li&gt;
&lt;li&gt;Infrequently accessed tier &lt;/li&gt;
&lt;li&gt;Unknown or Changing access tier&lt;/li&gt;
&lt;li&gt;Archive tier&lt;/li&gt;
&lt;li&gt;Deep Archive tier&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Amazon S3 Standard (Frequently accessed):
&lt;/h3&gt;

&lt;p&gt;The default storage class. Data can be accessed in milliseconds; &lt;br&gt;
the most frequently accessed data is usually stored here.&lt;br&gt;
Resilient and highly available, as data is persisted across multiple AZs.&lt;br&gt;
No min/max storage duration. No minimum size. No retrieval fees.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon S3 Standard-Infrequent Access (Infrequently accessed):
&lt;/h3&gt;

&lt;p&gt;Data can be accessed in milliseconds.&lt;br&gt;
Resilient and highly available, as data is persisted across multiple AZs.&lt;br&gt;
Minimum storage duration of 30 days; an object deleted before 30 days is still charged for 30 days. Minimum billable object size is 128 KB; uploading a file &amp;lt;128KB is still charged as 128 KB.&lt;br&gt;
Charged for data retrieval.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon S3 One Zone-Infrequent Access (Infrequently accessed):
&lt;/h3&gt;

&lt;p&gt;Data can be accessed in milliseconds.&lt;br&gt;
Less resilient and less available, as data is persisted in only a single AZ (hence the lower cost).&lt;br&gt;
Minimum storage duration of 30 days; an object deleted before 30 days is still charged for 30 days. Minimum billable object size is 128 KB; uploading a file &amp;lt;128KB is still charged as 128 KB.&lt;br&gt;
Charged for data retrieval.&lt;/p&gt;
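&lt;p&gt;The billing floors shared by both Infrequent Access classes can be made concrete with a small sketch (the function names are illustrative, not an AWS API):&lt;/p&gt;

```python
# Sketch of the IA-class billing floors described above: objects are billed
# at no less than 128 KB, and early deletion is still billed for the full
# 30-day minimum duration. Function names are illustrative only.

KB = 1024

def billable_size_bytes(actual_bytes, floor_bytes=128 * KB):
    """IA classes charge at least the 128 KB floor per object."""
    return max(actual_bytes, floor_bytes)

def billable_days(days_stored, minimum_days=30):
    """IA classes charge at least the 30-day minimum duration."""
    return max(days_stored, minimum_days)
```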

&lt;h3&gt;
  
  
  Amazon S3 Intelligent-Tiering (Unknown or Changing Access):
&lt;/h3&gt;

&lt;p&gt;Delivers automated cost savings by monitoring and analyzing access patterns and moving data to the appropriate storage tier.&lt;br&gt;
Resilient and highly available, as data is persisted across multiple AZs.&lt;br&gt;
Objects start in the Frequent Access tier. Objects not accessed for 30 consecutive days are moved to the Infrequent Access tier.&lt;br&gt;
Objects not accessed for 90 consecutive days are moved to the Archive Access tier (only when activated); retrieval takes 3-5 hours. &lt;br&gt;
Objects not accessed for 180 consecutive days are moved to the Deep Archive Access tier (only when activated); retrieval completes within 12 hours. &lt;br&gt;
No minimum size. Objects &amp;lt; 128KB are not charged the monitoring fee but remain in the Frequent Access tier. No retrieval fees. No lifecycle fees. A monthly per-object monitoring &amp;amp; automation fee applies.&lt;/p&gt;
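&lt;p&gt;The transition rules above can be summarized in a small decision function. This is purely illustrative (the real service tracks access patterns and applies these moves automatically), and the tier names are informal labels.&lt;/p&gt;

```python
# Illustrative model of the Intelligent-Tiering transitions described above,
# encoding only the documented day thresholds.

def intelligent_tier(days_since_access, archive_enabled=False,
                     deep_archive_enabled=False):
    if deep_archive_enabled and days_since_access >= 180:
        return "deep-archive-access"   # retrieval within 12 hours
    if archive_enabled and days_since_access >= 90:
        return "archive-access"        # retrieval in 3-5 hours
    if days_since_access >= 30:
        return "infrequent-access"     # still millisecond access
    return "frequent-access"
```

&lt;p&gt;Note that the archive tiers only apply when explicitly activated, which the keyword flags model here.&lt;/p&gt;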

&lt;h3&gt;
  
  
  Amazon S3 Glacier Instant Retrieval (Archive tier):
&lt;/h3&gt;

&lt;p&gt;Data can be accessed in milliseconds.&lt;br&gt;
Resilient and highly available, as data is persisted across multiple AZs.&lt;br&gt;
Minimum storage duration of 90 days; if an object is deleted before 90 days, a pro-rated charge for the remaining days applies. Minimum billable object size is 128 KB; uploading a file &amp;lt; 128KB is still charged as 128 KB.&lt;br&gt;
Charged for data retrieval.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon S3 Glacier Flexible Retrieval (Archive tier):
&lt;/h3&gt;

&lt;p&gt;Expedited retrieval: data available in 1-5 minutes.&lt;br&gt;
Standard retrieval: data available in 3-5 hours.&lt;br&gt;
Bulk retrieval: data available in 5-12 hours, free of charge.&lt;br&gt;
Resilient and highly available, as data is persisted across multiple AZs.&lt;br&gt;
Minimum storage duration of 90 days; if an object is deleted before 90 days, a pro-rated charge for the remaining days applies.&lt;br&gt;
Minimum billable object size is 40 KB; uploading a file &amp;lt; 40KB is still charged as 40 KB.&lt;br&gt;
Charged for data retrieval (except bulk).&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon S3 Glacier Deep Archive (Deep Archive tier):
&lt;/h3&gt;

&lt;p&gt;Default retrieval time of 12 hours. &lt;br&gt;
Resilient and highly available, as data is persisted across multiple AZs.&lt;br&gt;
Minimum storage duration of 180 days; if an object is deleted before 180 days, a pro-rated charge for the remaining days applies.&lt;br&gt;
Minimum billable object size is 40 KB; uploading a file &amp;lt; 40KB is still charged as 40 KB.&lt;/p&gt;

&lt;p&gt;Image by &lt;a href="https://pixabay.com/users/wineguide101-8979445/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=3551301"&gt;wineguide101&lt;/a&gt; from &lt;a href="https://pixabay.com//?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=3551301"&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

</description>
      <category>s3storageclasses</category>
      <category>frequent</category>
      <category>infrequent</category>
      <category>archivetier</category>
    </item>
    <item>
      <title>Amazon S3 Encryption</title>
      <dc:creator>Prabusah</dc:creator>
      <pubDate>Tue, 01 Nov 2022 14:02:14 +0000</pubDate>
      <link>https://dev.to/prabusah_53/amazon-s3-encryption-5fop</link>
      <guid>https://dev.to/prabusah_53/amazon-s3-encryption-5fop</guid>
      <description>&lt;h3&gt;
  
  
  Encryption:
&lt;/h3&gt;

&lt;p&gt;While in transit, i.e. data traveling to and from Amazon S3, use&lt;br&gt;
SSL/TLS; while at rest, i.e. data stored on disks in Amazon S3 data centers, use SSE or CSE.&lt;/p&gt;

&lt;h3&gt;
  
  
  Server Side Encryption options (SSE):
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;SSE-S3: Amazon S3 managed keys. Each object is encrypted with a unique key. Cost effective.&lt;/li&gt;
&lt;li&gt;SSE-KMS: Customer master keys stored in AWS Key Management Service.&lt;/li&gt;
&lt;li&gt;SSE-C: Customer-provided keys.&lt;/li&gt;
&lt;/ul&gt;
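&lt;p&gt;The three SSE options map to different request parameters on an S3 PUT. The sketch below builds those parameters; the parameter names match the S3 PutObject API (as exposed by, e.g., boto3's &lt;code&gt;put_object&lt;/code&gt;), while the helper name and any key values are placeholders.&lt;/p&gt;

```python
# Build the extra PutObject parameters for each server-side encryption mode.
# Parameter names follow the S3 PutObject API; values are caller-supplied.

def sse_params(mode, kms_key_id=None, customer_key=None):
    if mode == "SSE-S3":
        return {"ServerSideEncryption": "AES256"}
    if mode == "SSE-KMS":
        params = {"ServerSideEncryption": "aws:kms"}
        if kms_key_id:  # optional: the AWS-managed key is used if omitted
            params["SSEKMSKeyId"] = kms_key_id
        return params
    if mode == "SSE-C":
        return {"SSECustomerAlgorithm": "AES256",
                "SSECustomerKey": customer_key}
    raise ValueError("unknown SSE mode: " + mode)
```

&lt;p&gt;You would merge these into the PUT request alongside Bucket, Key and Body.&lt;/p&gt;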

&lt;h3&gt;
  
  
  Client Side Encryption (CSE):
&lt;/h3&gt;

&lt;p&gt;Encrypting data before sending it to Amazon S3.&lt;/p&gt;

&lt;p&gt;Image by &lt;a href="https://pixabay.com/users/markusspiske-670330/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=3655668"&gt;Markus Spiske&lt;/a&gt; from &lt;a href="https://pixabay.com//?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=3655668"&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

</description>
      <category>s3encryption</category>
      <category>sses3</category>
      <category>ssekms</category>
      <category>ssec</category>
    </item>
    <item>
      <title>Amazon S3 Security</title>
      <dc:creator>Prabusah</dc:creator>
      <pubDate>Tue, 01 Nov 2022 14:00:12 +0000</pubDate>
      <link>https://dev.to/prabusah_53/amazon-s3-security-19b5</link>
      <guid>https://dev.to/prabusah_53/amazon-s3-security-19b5</guid>
      <description>&lt;p&gt;An Amazon S3 bucket is accessible only to the user who created it or the account owner. How do you grant access to other users? Use any one of the methods below: &lt;/p&gt;

&lt;h3&gt;
  
  
  IAM:
&lt;/h3&gt;

&lt;p&gt;Create users and manage their access to buckets/objects.&lt;br&gt;
IAM policies can contain permissions for services other than S3 as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bucket policy (resource policy):
&lt;/h3&gt;

&lt;p&gt;Configure permissions for all objects, or for a subset selected by tags/prefixes. A bucket policy must include a Principal element. &lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-signed URLs:
&lt;/h3&gt;

&lt;p&gt;Grant time-limited access to objects via temporary URLs.&lt;/p&gt;
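&lt;p&gt;Conceptually, a pre-signed URL embeds an expiry time plus a signature over the request details, so the service can verify it without any extra lookup. The toy sketch below shows the idea with a plain HMAC; real S3 pre-signed URLs use AWS Signature Version 4 (e.g. via boto3's &lt;code&gt;generate_presigned_url&lt;/code&gt;), and &lt;code&gt;presign&lt;/code&gt; here is a hypothetical name.&lt;/p&gt;

```python
import hashlib
import hmac
import time

# Toy illustration of pre-signed URLs: an expiry timestamp and an HMAC over
# (bucket, key, expiry) are embedded in the URL. NOT real SigV4 signing.

def presign(bucket, key, secret, expires_in=3600, now=None):
    expires = int(time.time() if now is None else now) + expires_in
    payload = "{}/{}/{}".format(bucket, key, expires)
    signature = hmac.new(secret.encode(), payload.encode(),
                         hashlib.sha256).hexdigest()
    return "https://{}.s3.amazonaws.com/{}?Expires={}&Signature={}".format(
        bucket, key, expires, signature)
```

&lt;p&gt;Anyone holding the URL can access the object until the expiry passes, after which the signature check fails.&lt;/p&gt;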

&lt;h3&gt;
  
  
  ACL (resource policy):
&lt;/h3&gt;

&lt;p&gt;Makes individual objects accessible to specific users. ACLs are legacy; prefer bucket policies or IAM policies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Block Public Access:
&lt;/h3&gt;

&lt;p&gt;By default, any newly created bucket has "Block all public access" enabled.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon S3 Object Ownership
&lt;/h3&gt;

&lt;p&gt;An object is usually owned by the account or user that uploaded it (the "bucket owner"). If another AWS account uploads an object, only that account is the owner. To overcome this, use the "Amazon S3 Object Ownership" option ("Bucket owner preferred").&lt;/p&gt;

&lt;p&gt;Image by &lt;a href="https://pixabay.com/users/jarmoluk-143740/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=428549"&gt;Michal Jarmoluk&lt;/a&gt; from &lt;a href="https://pixabay.com//?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=428549"&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

</description>
      <category>s3security</category>
      <category>resourcepolicy</category>
      <category>presignedurl</category>
      <category>blockallpublicaccess</category>
    </item>
    <item>
      <title>Amazon S3 Primer</title>
      <dc:creator>Prabusah</dc:creator>
      <pubDate>Tue, 01 Nov 2022 12:12:30 +0000</pubDate>
      <link>https://dev.to/prabusah_53/amazon-s3-primer-1ad0</link>
      <guid>https://dev.to/prabusah_53/amazon-s3-primer-1ad0</guid>
      <description>&lt;p&gt;Amazon S3 uses the object storage type. Objects are stored in buckets. Prefixes (a pseudo folder structure) are used to group objects like folders in the user interface (AWS Console), but in reality the object store is still a flat structure.&lt;/p&gt;

&lt;p&gt;By default, you can create up to 100 buckets in an AWS account (this can be increased up to 1,000 buckets by submitting a service ticket). Bucket sizes are unlimited, so users do not have to allocate or predetermine bucket size.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Few Bucket Facts:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Buckets cannot be transferred to other accounts.&lt;/li&gt;
&lt;li&gt;Bucket names are globally unique across the entire AWS S3 infrastructure. Once a name is deleted from an AWS account, it becomes available for reuse by any AWS account after 24 hours.&lt;/li&gt;
&lt;li&gt;Buckets cannot be renamed.&lt;/li&gt;
&lt;li&gt;Buckets cannot be nested.&lt;/li&gt;
&lt;li&gt;Bucket names can be 3-63 characters long.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Terminologies:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Prefix - pseudo folder.&lt;/li&gt;
&lt;li&gt;Key - the name of the object.&lt;/li&gt;
&lt;li&gt;Object - a file consisting of data, optional metadata and permissions. All three are usually provided when uploading a file to a bucket. &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;A bucket can have up to 50 tags, and an object can have up to 10 tags.&lt;br&gt;
Any number of objects can be stored in a bucket.&lt;br&gt;
Each object can hold at most 5 TB of data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Region:
&lt;/h3&gt;

&lt;p&gt;In the AWS Console, S3 is a globally &lt;em&gt;viewable&lt;/em&gt; service, but bucket creation requires a region, which decides where the data resides.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cross Region Replication:
&lt;/h3&gt;

&lt;p&gt;Cross-Region Replication replicates a bucket to another region: either the entire bucket, or only the objects with the tags you choose. &lt;/p&gt;

&lt;h3&gt;
  
  
  Same Region Replication:
&lt;/h3&gt;

&lt;p&gt;The source and target buckets reside in the same region.&lt;/p&gt;

&lt;h3&gt;
  
  
  Strong data consistency:
&lt;/h3&gt;

&lt;p&gt;Strong read-after-write consistency. After a write of a new object or an overwrite of an existing object, any subsequent read or list request receives the latest version of the object.&lt;/p&gt;

&lt;h3&gt;
  
  
  Versioning:
&lt;/h3&gt;

&lt;p&gt;Enables recovery of objects from accidental deletion or overwrite.&lt;/p&gt;

&lt;h3&gt;
  
  
  GET operation:
&lt;/h3&gt;

&lt;p&gt;Retrieves an object. To retrieve only part of an object, use the Range HTTP header in the GET request.&lt;/p&gt;
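&lt;p&gt;Building the Range header for a partial GET is a one-liner; the &lt;code&gt;bytes=start-end&lt;/code&gt; form is standard HTTP, and S3 answers a satisfiable range with 206 Partial Content. The helper name below is illustrative.&lt;/p&gt;

```python
# Build the Range header for a partial object GET. The byte range is
# inclusive; omitting `end` requests everything from `start` onward.

def range_header(start, end=None):
    if end is None:
        return {"Range": "bytes={}-".format(start)}
    return {"Range": "bytes={}-{}".format(start, end)}
```

&lt;p&gt;For example, the first kilobyte of an object is the range 0-1023.&lt;/p&gt;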

&lt;h3&gt;
  
  
  DELETE operation:
&lt;/h3&gt;

&lt;p&gt;Without versioning, the object is permanently deleted.&lt;br&gt;
With versioning enabled, either permanently delete a specific version (Key + Version ID) or create a delete marker for the object (Key only, no Version ID), which can be recovered later.&lt;br&gt;
Recover by removing the delete marker.&lt;br&gt;
Retrieving an object that has a delete marker returns 404 Not Found.&lt;/p&gt;

&lt;h3&gt;
  
  
  PUT operation:
&lt;/h3&gt;

&lt;p&gt;Adds an object to a bucket. There are no partial writes; the entire object is always written completely.&lt;br&gt;
A single PUT operation can upload up to 5 GB (the maximum object size is 5 TB). For objects &amp;gt;5GB, use the multipart upload API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multipart upload API:
&lt;/h3&gt;

&lt;p&gt;Uploads objects of up to 5 TB, part by part, with each part up to 5 GB.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Best practice: use multipart upload for objects larger than 100 MB.&lt;br&gt;
S3 retains all uploaded parts on the server until the multipart upload is completed or aborted, so an incomplete upload incurs storage costs for the parts already stored in S3. Use lifecycle rules to clean up incomplete multipart uploads automatically.&lt;/p&gt;
&lt;/blockquote&gt;
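&lt;p&gt;The multipart limits above can be checked up front when planning an upload. The sketch assumes S3's documented multipart constraints (5 MiB minimum part size except the last part, 5 GiB maximum, 10,000 parts maximum); it is a planning helper, not an AWS API.&lt;/p&gt;

```python
import math

# Plan a multipart upload: validate the part size against S3's documented
# limits (5 MiB min except the last part, 5 GiB max, 10,000 parts max) and
# return the number of parts needed.

MIB = 1024 ** 2
GIB = 1024 ** 3

def plan_parts(object_size, part_size=100 * MIB):
    if part_size < 5 * MIB or part_size > 5 * GIB:
        raise ValueError("part size must be between 5 MiB and 5 GiB")
    part_count = math.ceil(object_size / part_size)
    if part_count > 10000:
        raise ValueError("over 10,000 parts; choose a larger part size")
    return part_count
```

&lt;p&gt;Picking a part size that keeps the part count well under the limit also leaves room to retry individual failed parts cheaply.&lt;/p&gt;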

&lt;h3&gt;
  
  
  Online Data Transfer services:
&lt;/h3&gt;

&lt;p&gt;AWS DataSync, AWS Transfer Family, Amazon Kinesis Data Firehose (delivers directly to S3), Amazon Kinesis Data Streams (processes streaming data).&lt;/p&gt;

&lt;h3&gt;
  
  
  Offline Data Transfer services:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS Snowcone&lt;/strong&gt; up to 8 TB of space. &lt;br&gt;
&lt;strong&gt;AWS Snowball&lt;/strong&gt; Storage Optimized (40 vCPUs); Compute Optimized (52 vCPUs); devices may be rack-mounted to build larger installations. &lt;br&gt;
&lt;strong&gt;AWS Snowmobile&lt;/strong&gt; up to 100 PB. A 45-foot shipping container hauled by a semi-trailer truck, accompanied by security personnel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hybrid cloud storage services:
&lt;/h3&gt;

&lt;p&gt;For on-premises applications that need rapid data transfer and access to the cloud.&lt;br&gt;
&lt;strong&gt;AWS Direct Connect&lt;/strong&gt; A dedicated network connection (bypassing the public internet) between on-premises and AWS. Uses VLANs. &lt;br&gt;
&lt;strong&gt;AWS Storage Gateway&lt;/strong&gt; Connects to an S3 bucket via the NFS/SMB protocols.&lt;/p&gt;

&lt;p&gt;Image by &lt;a href="https://pixabay.com/users/alexas_fotos-686414/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=1655571"&gt;Alexa&lt;/a&gt; from &lt;a href="https://pixabay.com//?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=1655571"&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>cloudskills</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
