<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Teresa N. Fontanella De Santis</title>
    <description>The latest articles on DEV Community by Teresa N. Fontanella De Santis (@teresafds).</description>
    <link>https://dev.to/teresafds</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F612666%2Fe77cf7f3-c919-4a92-b330-950066f66128.png</url>
      <title>DEV Community: Teresa N. Fontanella De Santis</title>
      <link>https://dev.to/teresafds</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/teresafds"/>
    <language>en</language>
    <item>
      <title>Authorization on FastAPI with Casbin</title>
      <dc:creator>Teresa N. Fontanella De Santis</dc:creator>
      <pubDate>Sat, 23 Apr 2022 01:47:38 +0000</pubDate>
      <link>https://dev.to/teresafds/authorization-on-fastapi-with-casbin-41og</link>
      <guid>https://dev.to/teresafds/authorization-on-fastapi-with-casbin-41og</guid>
<description>&lt;p&gt;Nowadays, tons of APIs (both external and internal) are created and used every day. With mechanisms like authentication or firewall restrictions, we can identify who is invoking a method, or restrict where requests may come from. But can we authorize given users to invoke some methods and not others? In the following tutorial we will cover how to authorize different users to execute certain REST API methods in an easy and straightforward way. &lt;/p&gt;

&lt;h2&gt;
  
  
  Situation
&lt;/h2&gt;

&lt;p&gt;In this case, we have an Items REST API implemented in Python 3.10 with the &lt;a href="https://fastapi.tiangolo.com/" rel="noopener noreferrer"&gt;FastAPI&lt;/a&gt; framework. It allows clients to list all items and to get, create and delete an item. All of these operations must be performed by authenticated users. For the sake of simplicity, the following users can be used for authentication:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;User&lt;/th&gt;
&lt;th&gt;Password&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;alice&lt;/td&gt;
&lt;td&gt;secret2&lt;/td&gt;
&lt;td&gt;Admin&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;johndoe&lt;/td&gt;
&lt;td&gt;secret&lt;/td&gt;
&lt;td&gt;User&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The application consists of three files: main.py, utils.py and requirements.txt, with the following code. &lt;br&gt;
&lt;strong&gt;main.py&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;utils.py&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;requirements.txt&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
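&lt;p&gt;A minimal &lt;strong&gt;requirements.txt&lt;/strong&gt; for this tutorial could contain the following (the authentication-related dependencies are assumptions based on FastAPI’s OAuth2 tutorial, not the original file; pin versions as needed):&lt;/p&gt;

```text
fastapi
uvicorn
python-multipart
python-jose[cryptography]
passlib[bcrypt]
```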


&lt;p&gt;You can create a conda environment, install the required packages and run the API with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

conda create -n itemsapi pip
conda activate itemsapi
pip install -r requirements.txt
python3 -m uvicorn main:app --reload 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;After the API is up and running, let's follow these steps to test it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;a href="http://127.0.0.1:8000/docs" rel="noopener noreferrer"&gt;http://127.0.0.1:8000/docs&lt;/a&gt; url in your browser.&lt;/li&gt;
&lt;li&gt;Click on "Authorize" and log in with a username and password (as per the Users table shown before).&lt;/li&gt;
&lt;li&gt;To list all items, select the GET /items method. Then click on the "Try out" button and on the "Execute" button.&lt;/li&gt;
&lt;li&gt;To delete the item with id = 1, select the DELETE /items/{item_id} method, click on "Try out" and execute it with item_id = 1. The response is 204 and the item is deleted successfully.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjax9kur88opeg9uajy9h.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjax9kur88opeg9uajy9h.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Goal
&lt;/h2&gt;

&lt;p&gt;Although only registered users (johndoe and alice in this case) can perform item actions, all of them are able to delete items. As per their roles, alice should be able to delete items, but johndoe shouldn’t. To achieve this, we will implement authorization at the REST API method level, in an easy and extensible way, with Casbin.&lt;/p&gt;
&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://casbin.org/" rel="noopener noreferrer"&gt;Casbin&lt;/a&gt; is an open source authorization library with support for many models (like Access Control Lists or ACLs, Role Based Access Control or RBAC, Restful, etc) and with implementations on several programming languages (ie: Python, Go, Java, Rust, Ruby, etc). &lt;br&gt;
It consists of two configuration files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;model file&lt;/strong&gt;: a CONF file (with &lt;code&gt;.conf&lt;/code&gt; extension) that specifies the authorization model to be applied (in this case we will use the RESTful one)&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;policy file&lt;/strong&gt;: a CSV file that lists the API method permissions for each user. &lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Steps
&lt;/h3&gt;

&lt;p&gt;1) Install the casbin Python package with pip&lt;br&gt;
&lt;code&gt;pip install casbin&lt;/code&gt;&lt;br&gt;
2) Define the model file&lt;br&gt;
Create a new file called &lt;strong&gt;model.conf&lt;/strong&gt; with the following content: &lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
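&lt;p&gt;A typical RESTful model, along the lines of the standard keyMatch example from the Casbin documentation (an assumption about the original gist’s content, not a verbatim copy), looks like this:&lt;/p&gt;

```ini
[request_definition]
r = sub, obj, act

[policy_definition]
p = sub, obj, act

[policy_effect]
e = some(where (p.eft == allow))

[matchers]
m = r.sub == p.sub &amp;&amp; keyMatch(r.obj, p.obj) &amp;&amp; regexMatch(r.act, p.act)
```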

&lt;p&gt;You can find more details about Casbin model syntax in the &lt;a href="https://casbin.org/docs/en/syntax-for-models" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
3) Define the policy file&lt;br&gt;&lt;br&gt;
Create a new CSV file called &lt;strong&gt;policy.csv&lt;/strong&gt; and paste the following: &lt;br&gt;&lt;/p&gt;

&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
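&lt;p&gt;A matching &lt;strong&gt;policy.csv&lt;/strong&gt;, consistent with the permissions described in this post for alice and johndoe, would look like this (the exact rows are an assumption):&lt;/p&gt;

```text
p, alice, /items*, (GET)|(POST)|(DELETE)
p, johndoe, /items*, (GET)|(POST)
```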

&lt;p&gt;Each row is an allowed permission: the first column marks it as a policy rule ("p"), the second is the user (subject), the third is the API resource or URL (object), and the last one is a regular expression of allowed methods (action). In this case, alice has access to list, create and delete items, while johndoe may list and create items but not delete them.&lt;/p&gt;
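&lt;p&gt;To build intuition for how the Enforcer evaluates such rows, here is a rough, stdlib-only mimic of Casbin’s keyMatch and regexMatch functions (an illustration only, not the casbin API; the sample policy rows are assumptions):&lt;/p&gt;

```python
import re

# Rough mimic of Casbin's RESTful matching (illustration only, not the casbin API)

def key_match(key1, key2):
    # key2 may end with "*": "/items*" matches "/items", "/items/" and "/items/1"
    i = key2.find("*")
    if i == -1:
        return key1 == key2
    if len(key1) > i:
        return key1[:i] == key2[:i]
    return key1 == key2[:i]

def regex_match(value, pattern):
    # e.g. "DELETE" against "(GET)|(POST)|(DELETE)"
    return re.fullmatch(pattern, value) is not None

def enforce(policy, sub, obj, act):
    # a request is allowed if any policy row matches subject, object and action
    return any(
        sub == p_sub and key_match(obj, p_obj) and regex_match(act, p_act)
        for p_sub, p_obj, p_act in policy
    )

# Hypothetical policy rows in the same shape as policy.csv
policy = [
    ("alice", "/items*", "(GET)|(POST)|(DELETE)"),
    ("johndoe", "/items*", "(GET)|(POST)"),
]

print(enforce(policy, "alice", "/items/1", "DELETE"))    # True
print(enforce(policy, "johndoe", "/items/1", "DELETE"))  # False
```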

&lt;p&gt;4) Update the Python API code to enforce authorization.&lt;br&gt;
In the &lt;strong&gt;main.py&lt;/strong&gt; file, add the following lines:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import casbin
...
async def get_current_user_authorization(req: Request, curr_user: User = Depends(get_current_active_user)):
    # Load the Casbin model and policy configuration files
    e = casbin.Enforcer("model.conf", "policy.csv")
    sub = curr_user.username  # subject: the authenticated user
    obj = req.url.path        # object: the requested URL path
    act = req.method          # action: the HTTP method
    if not e.enforce(sub, obj, act):
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Method not authorized for this user")
    return curr_user


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It imports the casbin module and creates a new authorization function that reads the configuration files with &lt;code&gt;casbin.Enforcer&lt;/code&gt; and enforces that the user has the required permission.&lt;br&gt;&lt;br&gt;
Then, in the defined API methods, replace the old dependency &lt;code&gt;get_current_active_user&lt;/code&gt; with the new &lt;code&gt;get_current_user_authorization&lt;/code&gt;:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

@app.get("/items/{item_id}")
async def read_item(item_id: int, req: Request, curr_user: User = Depends(get_current_user_authorization)):
    return items_dao.get_item(item_id)

@app.post("/items/")
async def create_item(item: Item, req: Request, curr_user: User = Depends(get_current_user_authorization)):
    answer = items_dao.create_item(item)
    if not answer:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="Item with given id already exists")
    else:
        return answer

@app.delete("/items/{item_id}", status_code=status.HTTP_204_NO_CONTENT)
async def delete_item(item_id: int, req: Request, curr_user: User = Depends(get_current_user_authorization)):
    items_dao.delete_item(item_id)
    return Response(status_code=status.HTTP_204_NO_CONTENT)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Test
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Start the updated API.&lt;/li&gt;
&lt;li&gt;Open the &lt;a href="http://127.0.0.1:8000/docs" rel="noopener noreferrer"&gt;http://127.0.0.1:8000/docs&lt;/a&gt; url in your browser.&lt;/li&gt;
&lt;li&gt;Click on "Authorize" and log in as "johndoe".&lt;/li&gt;
&lt;li&gt;Try to delete the item with id=1. The request is rejected with a 401 Unauthorized error.&lt;/li&gt;
&lt;li&gt;Log out from that user. Then log in as "alice".&lt;/li&gt;
&lt;li&gt;Try to delete the item with id=1. The request succeeds: it returns 204 and the item is deleted.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6o6u97eekkkt21unp31.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6o6u97eekkkt21unp31.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this post we saw how to use Casbin to implement authorization on REST APIs. Keep in mind that this example can be extended by combining it with other authorization models like RBAC, changing only the model and policy configuration files.&lt;/p&gt;

</description>
      <category>casbin</category>
      <category>fastapi</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Preventing higher costs on EC2</title>
      <dc:creator>Teresa N. Fontanella De Santis</dc:creator>
      <pubDate>Tue, 15 Mar 2022 02:43:08 +0000</pubDate>
      <link>https://dev.to/teresafds/preventing-higher-costs-on-ec2-3p2a</link>
      <guid>https://dev.to/teresafds/preventing-higher-costs-on-ec2-3p2a</guid>
<description>&lt;p&gt;In this post we will discuss an example of how to prevent users from running unwanted, expensive EC2 instances. &lt;/p&gt;

&lt;h3&gt;
  
  
  Situation
&lt;/h3&gt;

&lt;p&gt;We currently have several AWS accounts with consolidated billing in one AWS account using &lt;a href="https://aws.amazon.com/organizations/"&gt;AWS Organizations&lt;/a&gt;. Hundreds of users work in those AWS accounts every day.&lt;/p&gt;

&lt;h3&gt;
  
  
  Issue
&lt;/h3&gt;

&lt;p&gt;We want to prevent all users from launching expensive EC2 instances (like the GPU-accelerated instance types), so we don’t have a heart attack when looking at the bill the following month. We could use custom IAM policies to define those conditions, but the restriction must also apply to users who have full administrator access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution
&lt;/h3&gt;

&lt;p&gt;We can take advantage of AWS Organizations and create Service Control Policies (SCPs) to define IAM guardrails. These policies let us define a set of denied actions on an AWS account, regardless of the IAM user’s permissions. Additionally, this won’t incur any additional cost. The following steps can be run in a CloudShell terminal. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Enable Service Control Policies (SCPs) for the organization.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rootId=$(aws organizations list-roots | jq .Roots[].Id | sed 's/"//g')
aws organizations enable-policy-type --root-id $rootId --policy-type SERVICE_CONTROL_POLICY
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create SCP to restrict expensive instance types.&lt;/strong&gt;&lt;br&gt;
Save the SCP policy in a JSON file and create the policy. Then, obtain the policyId to be used in the next step.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;gt; scp_ec2_policy.json &amp;lt;&amp;lt;'END'
{
"Version": "2012-10-17",
"Statement": [
    {
        "Sid": "VisualEditor0",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "*",
        "Condition": {
            "StringLike": {
                "ec2:InstanceType": "p*.*"
            }
        }
    }
]
}
END
policyName="DenyEC2InstanceTypes"
description="Denies launch of expensive EC2 instances"
file="file://scp_ec2_policy.json"
policy=$(aws organizations create-policy --content $file --name $policyName --type SERVICE_CONTROL_POLICY --description "$description")
policyId=$(echo $policy  | jq ".Policy.PolicySummary.Id"  | sed 's/"//g')
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Attach it on AWS Account on AWS Organizations.&lt;/strong&gt; &lt;br&gt;
Run the following command for each AWS Account invited into the AWS Organization, except the management account. Replace &lt;code&gt;&amp;lt;aws_account_id&amp;gt;&lt;/code&gt; with the AWS Account’s ID.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws organizations attach-policy --policy-id $policyId --target-id &amp;lt;aws_account_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Finally, when trying to launch a new EC2 instance of a p3 instance type, we’ll get an error even if we have EC2 Full Access. This is great for preventing unexpected costs from launching expensive EC2 instances!&lt;/p&gt;
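&lt;p&gt;The &lt;code&gt;StringLike&lt;/code&gt; condition uses simple wildcard matching. As a rough, stdlib-only illustration of which instance types the &lt;code&gt;p*.*&lt;/code&gt; pattern catches (just the matching idea, not an AWS API call):&lt;/p&gt;

```python
from fnmatch import fnmatchcase

# Wildcard pattern from the SCP: deny instance types starting with "p"
PATTERN = "p*.*"

def is_denied(instance_type):
    # Mirrors the StringLike condition: "*" matches any run of characters
    return fnmatchcase(instance_type, PATTERN)

for itype in ["p3.2xlarge", "p4d.24xlarge", "t3.micro", "m5.large"]:
    print(itype, is_denied(itype))
```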

</description>
      <category>aws</category>
      <category>organizations</category>
      <category>ec2</category>
    </item>
    <item>
      <title>Accessing S3 Buckets from CloudShell</title>
      <dc:creator>Teresa N. Fontanella De Santis</dc:creator>
      <pubDate>Sun, 26 Dec 2021 23:24:09 +0000</pubDate>
      <link>https://dev.to/teresafds/accessing-s3-buckets-from-cloudshell-4ojh</link>
      <guid>https://dev.to/teresafds/accessing-s3-buckets-from-cloudshell-4ojh</guid>
<description>&lt;p&gt;One year after &lt;a href="https://aws.amazon.com/about-aws/whats-new/2020/12/introducing-aws-cloudshell/" rel="noopener noreferrer"&gt;AWS CloudShell was released&lt;/a&gt;, it is worth commenting on a connection issue between this service and S3 Buckets.&lt;/p&gt;

&lt;h3&gt;
  
  
  Issue
&lt;/h3&gt;

&lt;p&gt;We have an S3 bucket (in this case, named &lt;code&gt;mytestbucket0123&lt;/code&gt;) that we need to access through AWS CloudShell. &lt;br&gt;
But when trying to list all objects in the bucket from CloudShell by executing &lt;code&gt;aws s3 ls s3://mytestbucket0123&lt;/code&gt;, we get the following error: &lt;/p&gt;

&lt;p&gt;"An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4sydzmpnrx8ei9ckyci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4sydzmpnrx8ei9ckyci.png" alt="AWS CloudShell error when listing s3 objects"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification steps
&lt;/h3&gt;

&lt;p&gt;As a starting point, let’s analyze the situation by asking the following verification questions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Does the user have the required permissions to list objects in our S3 Bucket?&lt;/strong&gt; There are several ways to answer this: first, check in IAM that the user has those permissions assigned. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnipykprbokmwskp95i81.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnipykprbokmwskp95i81.png" alt="User on IAM with required S3 Permissions"&gt;&lt;/a&gt;&lt;br&gt;
The user has the &lt;code&gt;AmazonS3ReadOnlyAccess&lt;/code&gt; policy attached, so it has the required ListObjects permission. Let’s also verify that the user can actually list the bucket’s objects, for example from the AWS Console. As the screenshot below shows, listing objects in the bucket works fine from the AWS Console with the same user.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkuudfzkwwl5hon3n6fi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkuudfzkwwl5hon3n6fi.png" alt="User can list S3 Objects from AWS Console correctly"&gt;&lt;/a&gt;&lt;br&gt;
So the problem doesn’t seem to be the user’s IAM permissions, which leads us to the next question.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Does the S3 Bucket have any bucket policy enabled?&lt;/strong&gt; In the AWS Console, open the S3 bucket and look at the “Bucket policy” section in the “Permissions” tab. &lt;br&gt;
In this case, the S3 Bucket has the following bucket policy: &lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::mytestbucket0123",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": "&amp;lt;Public IP Address&amp;gt;"
                },
                "StringEqualsIfExists": {
                    "aws:SourceVpc": "&amp;lt;VPC-Id&amp;gt;"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
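&lt;p&gt;As a rough, stdlib-only sketch of how the &lt;code&gt;NotIpAddress&lt;/code&gt; condition behaves (the whitelisted range below is a made-up example, not a value from this policy):&lt;/p&gt;

```python
import ipaddress

# Hypothetical whitelisted source range for the bucket policy condition
ALLOWED = ipaddress.ip_network("203.0.113.0/24")  # documentation range

def not_ip_address(source_ip):
    # True when the request IP falls outside the allowed range,
    # i.e. the Deny statement's NotIpAddress condition is satisfied
    return ipaddress.ip_address(source_ip) not in ALLOWED

print(not_ip_address("203.0.113.10"))    # inside the range: condition is False
print(not_ip_address("18.224.171.123"))  # CloudShell public IP: condition is True
```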



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    This bucket policy denies access to all users (no matter they have the required IAM permissions), except they access from a specific IP Address or connect from our VPC (which, in this case is the AWS Account’s default VPC). That means the CloudShell is not accessing to the S3 Bucket from the VPC… So let’s ask the next question.

3. **Does CloudShell terminal connect to the S3 Bucket through a public IP Address?** To answer this, we need to execute the following command into the CloudShell terminal: `curl ifconfig.me`![The result of executing  curl ifconfig.me is 18.224.171.123, which means Public IP Address](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ycs8gjesegoplzoizvlg.png) It seems that CloudShell is trying to access to the S3 Bucket from a non whitelisted Public Address (which belongs to AWS Reserved IP Addresses).

### Cause explanation
After the verification steps, we can realize this is a CloudShell’s current limitation: it cannot connect to AWS Resources using VPC Endpoints or with VPC restricted access. Besides, the IP Address allocated per each CloudShell terminal may change over the time. 

### Solution 
To solve this issue we need to whitelist the Cloudshell’s user agent. Although this may not be a 100% secure resolution, it can be a temporary fix until AWS CloudShell can improve (or no CloudShell access is required). At the time of this writing, the user agent is "[aws-cli/2.4.5 Python/3.8.8 Linux/4.14.248-189.473.amzn2.x86_64 exec-env/CloudShell exe/x86_64.amzn.2 prompt/off command/s3.ls]”. Then edit the S3 Bucket Policy to the following (excluding also the CloudShell user agent). 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::mytestbucket0123",
            "Condition": {
                "StringNotEquals": {
                    "aws:UserAgent": "[aws-cli/2.4.5 Python/3.8.8 Linux/4.14.248-189.473.amzn2.x86_64 exec-env/CloudShell exe/x86_64.amzn.2 prompt/off command/s3.ls]"
                },
                "NotIpAddress": {
                    "aws:SourceIp": "&amp;lt;Public IP Address&amp;gt;"
                },
                "StringEqualsIfExists": {
                    "aws:SourceVpc": "&amp;lt;VPC-Id&amp;gt;"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

Finally, when trying to perform on the terminal the command  `aws s3 ls s3://mytestbucket0123`  finally works!

![aws s3 ls working correctly from CloudShell after S3 Bucket Policy update](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1eylou7y1pq6kr50ckd7.png)



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>cloudshell</category>
      <category>aws</category>
      <category>s3</category>
      <category>troubleshooting</category>
    </item>
    <item>
      <title>Working on AWS console during outage</title>
      <dc:creator>Teresa N. Fontanella De Santis</dc:creator>
      <pubDate>Wed, 08 Dec 2021 20:35:01 +0000</pubDate>
      <link>https://dev.to/teresafds/working-on-aws-console-during-outage-446g</link>
      <guid>https://dev.to/teresafds/working-on-aws-console-during-outage-446g</guid>
      <description>&lt;h3&gt;
  
  
  Situation
&lt;/h3&gt;

&lt;p&gt;As a new AWS outage made headlines yesterday, December 7th, it seems opportune to talk about AWS Services that are global (not tied to a single region endpoint), like IAM and S3. For some Cloud Operations teams, facing an AWS outage in some regions means having blockers in their daily jobs. The purpose of this post is to show how to continue doing admin jobs during a partial outage. &lt;/p&gt;

&lt;h3&gt;
  
  
  Issue
&lt;/h3&gt;

&lt;p&gt;AWS reported issues with their APIs in the North Virginia (us-east-1) region. Let’s suppose we are on a Cloud Operations team and need to perform some tasks in the AWS Console, for example, create an S3 Bucket and a new user that has access to that bucket (all in us-east-1)… How are we supposed to perform those actions during the AWS issue without losing the whole day?&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Log in to the &lt;a href="https://console.aws.amazon.com/"&gt;AWS Console&lt;/a&gt; with your user and password. If you are redirected to the us-east-1 region, change the url in your browser to:&lt;br&gt;
&lt;code&gt;https://&amp;lt;region&amp;gt;.console.aws.amazon.com/&lt;/code&gt;&lt;br&gt;
where &lt;code&gt;&amp;lt;region&amp;gt;&lt;/code&gt; is a valid AWS region (for example, in this case we’ll use the Ohio &lt;code&gt;us-east-2&lt;/code&gt; region). &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3jmTN9Kn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yl5nzxi4xoh15myxjv61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3jmTN9Kn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yl5nzxi4xoh15myxjv61.png" alt="https://us-east-2.console.aws.amazon.com" width="809" height="82"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;So now we are logged into the AWS console successfully!  We can then go to &lt;code&gt;Services&lt;/code&gt;-&amp;gt;&lt;code&gt;IAM&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mlRqPQMR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/me2gweeqn01g01mjmynb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mlRqPQMR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/me2gweeqn01g01mjmynb.png" alt="AWS Services and IAM menu" width="880" height="452"&gt;&lt;/a&gt;&lt;br&gt;
Clicking on &lt;code&gt;Users&lt;/code&gt; -&amp;gt; &lt;code&gt;Add users&lt;/code&gt; we can create a new user and assign the &lt;code&gt;AmazonS3FullAccess&lt;/code&gt; AWS Managed Policy (just an example, it can be any other permission).&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S_6ntGW---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fryl4wxxf3ao9n64a1xw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S_6ntGW---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fryl4wxxf3ao9n64a1xw.png" alt="On IAM menu go to Users and click on create User" width="880" height="271"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6Qq620v6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eis4zxu5st1rxt3l4753.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Qq620v6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eis4zxu5st1rxt3l4753.png" alt="New AWS User with S3Full Access permissions" width="880" height="497"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After creating our user, let’s create a new S3 Bucket from which we can read/write data (while staying logged in to a different region). Go to &lt;code&gt;Services&lt;/code&gt;-&amp;gt; &lt;code&gt;S3&lt;/code&gt;. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--45x5drSh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3e3rpoi31c2855usqk2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--45x5drSh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3e3rpoi31c2855usqk2.png" alt="AWS Services and S3 menu" width="880" height="494"&gt;&lt;/a&gt;&lt;br&gt;
Then create a new bucket: provide a unique name for the S3 Bucket, select the &lt;code&gt;US East 1&lt;/code&gt; region, then go to the end of the page and click on &lt;code&gt;Create&lt;/code&gt;. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PLElUN00--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8v51140jv26m19gqnia5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PLElUN00--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8v51140jv26m19gqnia5.png" alt="Create S3 Bucket with us-east-1 region" width="880" height="645"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Voilà! We completed our task from a different region! Although it is better to have AWS working 100% in all regions, and this approach cannot be applied to several services, it can be a temporary workaround to avoid getting stuck when performing some Cloud Operations tasks.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>iam</category>
      <category>outage</category>
    </item>
    <item>
      <title>Curl issue: SSL certificate problem: certificate has expired</title>
      <dc:creator>Teresa N. Fontanella De Santis</dc:creator>
      <pubDate>Sun, 17 Oct 2021 23:58:54 +0000</pubDate>
      <link>https://dev.to/teresafds/curl-issue-ssl-certificate-problem-certificate-has-expired-58g4</link>
      <guid>https://dev.to/teresafds/curl-issue-ssl-certificate-problem-certificate-has-expired-58g4</guid>
<description>&lt;p&gt;In the following article we'll cover a common certificate issue faced with the cURL application. &lt;code&gt;curl&lt;/code&gt; ("client URL") is a command-line tool that sends a request with any HTTP(S) method and returns the response. After this introduction, let's go deep into our issue... &lt;/p&gt;

&lt;h4&gt;
  
  
  Issue
&lt;/h4&gt;

&lt;p&gt;When trying to execute a curl command against a specific site, like &lt;code&gt;curl https://airlabs.co/api/v9/ping.json&lt;/code&gt;, we get the following error: &lt;/p&gt;

&lt;p&gt;“curl: (60) SSL certificate problem: certificate has expired&lt;br&gt;
More details here: &lt;a href="https://curl.haxx.se/docs/sslcerts.html"&gt;https://curl.haxx.se/docs/sslcerts.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.”&lt;/p&gt;

&lt;p&gt;The same url works fine in any browser, and we have the &lt;code&gt;openssl&lt;/code&gt; library installed on our server.&lt;/p&gt;

&lt;h4&gt;
  
  
  Root cause explanation
&lt;/h4&gt;

&lt;p&gt;The CA certificate bundle stored on the server has expired, so curl cannot verify the site’s certificate chain. We need to obtain the updated CA bundle and replace it in the certificates’ system folder.&lt;/p&gt;

&lt;h4&gt;
  
  
  Resolution steps
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;First make sure you have wget installed on your server. &lt;br&gt;
You can install it on Mac using &lt;code&gt;brew install wget&lt;/code&gt;. &lt;br&gt;
For Ubuntu, you can use &lt;code&gt;apt install wget&lt;/code&gt;. &lt;br&gt;
For CentOS/RHEL, you can use &lt;code&gt;yum install wget&lt;/code&gt;.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download the updated CA certificate bundle (from the site curl.se): &lt;code&gt;wget https://curl.se/ca/cacert.pem&lt;/code&gt;&lt;br&gt;
The bundle will be downloaded as the cacert.pem file. Then, you can execute the curl command with the flag &lt;code&gt;--cacert &amp;lt;path_to_cacert.pem_file&amp;gt;&lt;/code&gt;. &lt;br&gt;
For example: &lt;code&gt;curl --cacert ./cacert.pem https://airlabs.co/api/v9/ping.json&lt;/code&gt;&lt;br&gt;
If the certificate file is valid, the error should disappear. Since we don’t want to add the &lt;code&gt;--cacert&lt;/code&gt; flag to every curl command, let’s go to the next step. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Replace the updated certificate bundle in the certificates’ system folder. To get the folder path, execute &lt;code&gt;openssl version -a&lt;/code&gt; in your terminal. You’ll see something similar to this (it may vary according to the OS configuration).&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xhaqa7fE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vf0xjqrow5jlo9akbfai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xhaqa7fE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vf0xjqrow5jlo9akbfai.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
The &lt;em&gt;OPENSSLDIR&lt;/em&gt; folder is where the certificates are stored by default, so copy that path to the clipboard.&lt;br&gt;&lt;br&gt;
Then, copy (or move) the certificate bundle into that folder. In our example:&lt;br&gt;
&lt;code&gt;cp cacert.pem &amp;lt;OPENSSL_DIR&amp;gt;&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After that, if we execute our curl command again, it will work as expected!&lt;/p&gt;
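&lt;p&gt;As a quick cross-check of the OPENSSLDIR step, Python’s standard library can also report where OpenSSL expects its default CA bundle and certificate directory:&lt;/p&gt;

```python
import ssl

# Ask OpenSSL (via the ssl module) for its compiled-in default locations
paths = ssl.get_default_verify_paths()
print("Default CA bundle file:", paths.openssl_cafile)
print("Default CA directory:  ", paths.openssl_capath)
```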

</description>
      <category>curl</category>
      <category>certificate</category>
    </item>
  </channel>
</rss>
