<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abdullah Paracha</title>
    <description>The latest articles on DEV Community by Abdullah Paracha (@abdullahparacha).</description>
    <link>https://dev.to/abdullahparacha</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F633987%2F58c495aa-fb79-4122-95ea-4f2fdd67db16.jpg</url>
      <title>DEV Community: Abdullah Paracha</title>
      <link>https://dev.to/abdullahparacha</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abdullahparacha"/>
    <language>en</language>
    <item>
      <title>Building a Scalable Data Pipeline on AWS</title>
      <dc:creator>Abdullah Paracha</dc:creator>
      <pubDate>Sat, 04 Jan 2025 17:43:00 +0000</pubDate>
      <link>https://dev.to/abdullahparacha/building-a-scalable-data-pipeline-on-aws-4ig4</link>
      <guid>https://dev.to/abdullahparacha/building-a-scalable-data-pipeline-on-aws-4ig4</guid>
      <description>&lt;p&gt;In this article, we build a scalable data pipeline using AWS services. The pipeline ingests data from an external source, processes it, and loads it into Amazon Redshift for analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Data Ingestion to S3&lt;/strong&gt;&lt;br&gt;
Use Python and the AWS SDK (boto3) to upload data to an S3 bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

# Initialize S3 client
s3 = boto3.client('s3')

# Define bucket name and file details
bucket_name = 'my-data-lake'
file_name = 'data.csv'
file_path = '/path/to/data.csv'

# Upload file to S3
s3.upload_file(file_path, bucket_name, file_name)
print(f"Uploaded {file_name} to {bucket_name}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
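
&lt;p&gt;In a real pipeline, raw files are often partitioned in the bucket by ingestion date so that downstream jobs can process increments. Below is a minimal sketch of such a key-building helper; the &lt;code&gt;raw/&lt;/code&gt; prefix and naming convention are illustrative assumptions, not part of the pipeline above.&lt;/p&gt;

```python
from datetime import date

def build_raw_key(dataset, file_name, ingestion_date=None):
    """Build a date-partitioned S3 key (illustrative convention)."""
    d = ingestion_date or date.today()
    return f"raw/{dataset}/year={d.year}/month={d.month:02d}/day={d.day:02d}/{file_name}"

# For example, the upload above could target:
print(build_raw_key("sales", "data.csv", date(2025, 1, 4)))
# raw/sales/year=2025/month=01/day=04/data.csv
```

&lt;p&gt;A key like this lets Glue and Athena prune partitions instead of scanning the whole bucket.&lt;/p&gt;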



&lt;p&gt;&lt;strong&gt;Step 2: Data Processing with AWS Glue&lt;/strong&gt;&lt;br&gt;
Create an AWS Glue job to transform raw data into a structured format.&lt;/p&gt;

&lt;p&gt;Here’s an example Glue script written in PySpark:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import sys
def main():
    from awsglue.transforms import *
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext
    from awsglue.job import Job

    args = getResolvedOptions(sys.argv, ['JOB_NAME'])
    sc = SparkContext()
    glueContext = GlueContext(sc)
    spark = glueContext.spark_session
    job = Job(glueContext)
    job.init(args['JOB_NAME'], args)

    # Load data from S3
    input_path = "s3://my-data-lake/data.csv"
    dynamic_frame = glueContext.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={"paths": [input_path]},
        format="csv",
    )

    # Perform transformations
    transformed_frame = ApplyMapping.apply(
        frame=dynamic_frame,
        mappings=[("column1", "string", "col1", "string"),
                  ("column2", "int", "col2", "int")]
    )

    # Write transformed data back to S3
    output_path = "s3://my-data-lake/transformed/"
    glueContext.write_dynamic_frame.from_options(
        frame=transformed_frame,
        connection_type="s3",
        connection_options={"path": output_path},
        format="parquet"
    )

    job.commit()

if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Load Data into Amazon Redshift&lt;/strong&gt;&lt;br&gt;
Copy the transformed data from S3 into Amazon Redshift.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY my_table
FROM 's3://my-data-lake/transformed/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS PARQUET;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
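
&lt;p&gt;The COPY statement can also be issued programmatically. One option, sketched below, is the Redshift Data API via boto3 (&lt;code&gt;execute_statement&lt;/code&gt;); the cluster identifier, database, and user shown are placeholders, not values from this pipeline.&lt;/p&gt;

```python
def build_copy_statement(table, s3_path, iam_role):
    """Render the COPY statement shown above as a string."""
    return (
        f"COPY {table} "
        f"FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS PARQUET;"
    )

def run_copy(sql):
    """Submit the statement through the Redshift Data API (cluster details are placeholders)."""
    import boto3  # imported lazily so the helper above stays testable without AWS
    client = boto3.client("redshift-data")
    return client.execute_statement(
        ClusterIdentifier="my-cluster",
        Database="dev",
        DbUser="awsuser",
        Sql=sql,
    )

sql = build_copy_statement(
    "my_table",
    "s3://my-data-lake/transformed/",
    "arn:aws:iam::123456789012:role/MyRedshiftRole",
)
print(sql)
```

&lt;p&gt;The Data API is convenient here because it needs no JDBC driver or open connection; the statement runs asynchronously and can be polled by the returned statement id.&lt;/p&gt;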



&lt;p&gt;&lt;strong&gt;Step 4: Real-Time Data Processing with Amazon Kinesis&lt;/strong&gt;&lt;br&gt;
Use Kinesis to ingest and process streaming data in real time. Below is an example of setting up a simple Python consumer for Kinesis Data Streams:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json

def process_record(record):
    data = json.loads(record['Data'])
    print("Processed Record:", data)

# Initialize Kinesis client
kinesis = boto3.client('kinesis')
stream_name = 'my-data-stream'

# Get a shard iterator for the stream's first shard
shard_iterator = kinesis.get_shard_iterator(
    StreamName=stream_name,
    ShardId='shardId-000000000000',
    ShardIteratorType='LATEST'
)['ShardIterator']

# Fetch records from the stream
response = kinesis.get_records(
    ShardIterator=shard_iterator,
    Limit=10
)

# Process each record
for record in response['Records']:
    process_record(record)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
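
&lt;p&gt;For completeness, the producer side can be just as small. Below is a sketch of a producer that writes JSON events to the same stream; the event schema is invented for illustration.&lt;/p&gt;

```python
import json

def build_event(sensor_id, value):
    """Serialize one event; the schema here is illustrative."""
    return json.dumps({"sensor_id": sensor_id, "value": value})

def put_event(stream_name, sensor_id, value):
    """Send one record to Kinesis, partitioned by sensor id."""
    import boto3  # imported lazily so build_event stays testable without AWS
    kinesis = boto3.client("kinesis")
    return kinesis.put_record(
        StreamName=stream_name,
        Data=build_event(sensor_id, value),
        PartitionKey=sensor_id,
    )

print(build_event("sensor-1", 42))  # {"sensor_id": "sensor-1", "value": 42}
```

&lt;p&gt;Using the sensor id as the partition key keeps events from one sensor ordered within a shard.&lt;/p&gt;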



&lt;p&gt;&lt;strong&gt;Step 5: Query Data Using Amazon Athena&lt;/strong&gt;&lt;br&gt;
For ad-hoc queries, you can use Athena to query the data directly from S3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT col1, col2
FROM "my_data_lake_database"."transformed_data"
WHERE col2 &amp;gt; 100;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
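
&lt;p&gt;Athena queries can also be submitted from Python. Below is a sketch using boto3’s &lt;code&gt;start_query_execution&lt;/code&gt;; the database name and S3 output location are placeholders.&lt;/p&gt;

```python
def build_athena_query(columns, table, min_col2):
    """Render the ad-hoc query shown above."""
    cols = ", ".join(columns)
    return f"SELECT {cols} FROM {table} WHERE col2 > {min_col2};"

def run_athena_query(query, database, output_location):
    """Submit the query; Athena writes results to the given S3 location."""
    import boto3  # imported lazily so the helper above stays testable without AWS
    athena = boto3.client("athena")
    response = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_location},
    )
    return response["QueryExecutionId"]

query = build_athena_query(["col1", "col2"], '"my_data_lake_database"."transformed_data"', 100)
print(query)
```

&lt;p&gt;The returned query execution id can then be polled with &lt;code&gt;get_query_execution&lt;/code&gt; until the run completes.&lt;/p&gt;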



&lt;p&gt;&lt;strong&gt;Step 6: Automating Workflows with AWS Data Pipeline&lt;/strong&gt;&lt;br&gt;
Use AWS Data Pipeline to schedule and automate tasks such as running an EMR job or triggering an S3-to-Redshift load.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "objects": [
    {
      "id": "Default",
      "name": "Default",
      "fields": []
    },
    {
      "id": "S3ToRedshiftCopyActivity",
      "type": "CopyActivity",
      "schedule": {
        "ref": "Default"
      },
      "input": {
        "ref": "MyS3DataNode"
      },
      "output": {
        "ref": "MyRedshiftTable"
      }
    },
    {
      "id": "MyS3DataNode",
      "type": "S3DataNode",
      "directoryPath": "s3://my-data-lake/transformed/"
    },
    {
      "id": "MyRedshiftTable",
      "type": "RedshiftDataNode",
      "tableName": "my_table"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
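
&lt;p&gt;Before registering a definition like this (for example with the AWS CLI’s &lt;code&gt;aws datapipeline put-pipeline-definition&lt;/code&gt;), it can help to sanity-check locally that every &lt;code&gt;ref&lt;/code&gt; points to a defined object id. The validator below is my own sketch, not part of AWS tooling.&lt;/p&gt;

```python
import json

def unresolved_refs(definition):
    """Return any 'ref' values that do not match a defined object id."""
    ids = {obj["id"] for obj in definition["objects"]}
    missing = []
    for obj in definition["objects"]:
        for value in obj.values():
            if isinstance(value, dict) and "ref" in value and value["ref"] not in ids:
                missing.append(value["ref"])
    return missing

# The definition from the article above:
definition = json.loads("""
{"objects": [
    {"id": "Default", "name": "Default", "fields": []},
    {"id": "S3ToRedshiftCopyActivity", "type": "CopyActivity",
     "schedule": {"ref": "Default"},
     "input": {"ref": "MyS3DataNode"},
     "output": {"ref": "MyRedshiftTable"}},
    {"id": "MyS3DataNode", "type": "S3DataNode",
     "directoryPath": "s3://my-data-lake/transformed/"},
    {"id": "MyRedshiftTable", "type": "RedshiftDataNode", "tableName": "my_table"}
]}
""")

print(unresolved_refs(definition))  # [] means every reference resolves
```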



&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
AWS provides an extensive ecosystem of services that make building data pipelines efficient and scalable. Whether you’re dealing with batch or real-time processing, the combination of S3, Glue, Redshift, Kinesis, Athena, EMR, and Data Pipeline enables you to design robust solutions tailored to your needs. By integrating these services, data engineers can focus on extracting insights and adding value rather than managing infrastructure.&lt;/p&gt;

&lt;p&gt;Start building your AWS data pipelines today and unlock the full potential of your data!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awsbigdata</category>
      <category>awsdataengieering</category>
      <category>awschallenge</category>
    </item>
    <item>
      <title>Building Interactive Applications with Amazon Bedrock, Amazon S3 and Streamlit</title>
      <dc:creator>Abdullah Paracha</dc:creator>
      <pubDate>Thu, 08 Aug 2024 12:34:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-interactive-applications-with-amazon-bedrock-amazon-s3-and-streamlit-52ec</link>
      <guid>https://dev.to/aws-builders/building-interactive-applications-with-amazon-bedrock-amazon-s3-and-streamlit-52ec</guid>
      <description>&lt;p&gt;👉🏻 This is a step-by-step guide on how to build an interactive web application that renders interactive elements and integrates &lt;code&gt;Amazon S3&lt;/code&gt; and &lt;code&gt;Amazon Bedrock&lt;/code&gt; with a &lt;code&gt;Streamlit&lt;/code&gt; application. With a custom-designed, interactive web UI, we can showcase a complete data exploration application along with generative AI capabilities. &lt;br&gt;
(&lt;em&gt;Note: to better focus on the deployment process, some prerequisite steps such as Amazon EC2 configuration are not covered in this guide&lt;/em&gt;)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use Case&lt;/li&gt;
&lt;li&gt;AWS Architecture&lt;/li&gt;
&lt;li&gt;Step-by-step guide on deployment 

&lt;ul&gt;
&lt;li&gt;Connect to virtual machine using EC2 instance connect&lt;/li&gt;
&lt;li&gt;Deploy the Streamlit Application to Amazon EC2&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Application Codes Tour

&lt;ul&gt;
&lt;li&gt;Build Streamlit Basic WebUI&lt;/li&gt;
&lt;li&gt;Build an interactive file upload webpage&lt;/li&gt;
&lt;li&gt;Build a generative AI image generator&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;Streamlit&lt;/code&gt; is an open-source Python library that makes it easy to create and share custom web apps. &lt;code&gt;Streamlit&lt;/code&gt; lets you transform Python scripts into interactive web apps in minutes, instead of weeks. &lt;/p&gt;

&lt;p&gt;There are a number of use cases for building with &lt;code&gt;Streamlit&lt;/code&gt;, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building dashboards and data apps&lt;/li&gt;
&lt;li&gt;Generating reports from large documents&lt;/li&gt;
&lt;li&gt;Creating generative AI chatbots&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the current surge in &lt;code&gt;LLM&lt;/code&gt; applications, &lt;code&gt;Streamlit&lt;/code&gt; allows developers to deliver dynamic, interactive apps with only a few lines of code.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Amazon Bedrock&lt;/code&gt; is a fully managed service that offers a choice of high-performing &lt;code&gt;foundation models&lt;/code&gt; (FMs) through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. [2]&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Amazon Bedrock&lt;/code&gt; takes advantage of the latest generative AI innovations, offering easy access to a choice of high-performing foundation models (FMs) from leading AI companies, such as Meta, Mistral AI, Stability AI, and Amazon. &lt;/p&gt;

&lt;p&gt;There are many different &lt;code&gt;foundation models&lt;/code&gt; available in &lt;code&gt;Amazon Bedrock&lt;/code&gt;, including text, chat, and image models. Model Evaluation on &lt;code&gt;Amazon Bedrock&lt;/code&gt; allows you to use automatic and human evaluations to select FMs for a specific use case. To tailor to your own needs, you can go from generic models to ones that are specialized and customized for your business and use case. [3]&lt;/p&gt;

&lt;p&gt;In this use case, we will utilize &lt;code&gt;Amazon Titan&lt;/code&gt; models to build generative AI applications. Combined with &lt;code&gt;Streamlit&lt;/code&gt;’s easy-to-deploy framework, we can easily develop our own data and AI products.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Use Case&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this use case, we are going to build an interactive web app which allows users to explore data, upload files and create generative AI photos. &lt;/p&gt;

&lt;p&gt;We will embed &lt;code&gt;Amazon Bedrock&lt;/code&gt; FMs to the application, deploy a &lt;code&gt;Streamlit&lt;/code&gt; application to an &lt;code&gt;Amazon EC2&lt;/code&gt; instance, and allow users to begin interacting with the application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. AWS Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the development process, we will: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy a &lt;code&gt;Streamlit&lt;/code&gt; application to &lt;code&gt;Amazon EC2&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Render &lt;code&gt;Streamlit&lt;/code&gt; elements such as chatbot function in a web application&lt;/li&gt;
&lt;li&gt;Integrate &lt;code&gt;Amazon S3&lt;/code&gt; and &lt;code&gt;Amazon Bedrock&lt;/code&gt; with a &lt;code&gt;Streamlit&lt;/code&gt; application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5oehjofnhz6mop9eqmle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5oehjofnhz6mop9eqmle.png" alt="Image description" width="650" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Step-by-Step guide on deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are a number of prerequisite steps before deploying the actual Streamlit application. Before starting the steps below, you should have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;created an &lt;code&gt;Amazon EC2&lt;/code&gt; instance&lt;/li&gt;
&lt;li&gt;set up an &lt;code&gt;S3&lt;/code&gt; bucket to store uploaded files&lt;/li&gt;
&lt;li&gt;created a GitHub repository to hold the &lt;code&gt;Streamlit&lt;/code&gt; Python code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3.1 Connect to virtual machine using EC2 instance connect&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;EC2&lt;/code&gt;, right-click the pre-configured instance name, connect to it using EC2 Instance Connect, and access a shell. &lt;/p&gt;

&lt;p&gt;The following deployment process will be completed in this shell environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03d83cyok6kggty9y87g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03d83cyok6kggty9y87g.png" alt="Image description" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.2 Deploy the Streamlit Application to Amazon EC2&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Note: there will be detailed walkthrough of the application code in section 4.&lt;/p&gt;

&lt;p&gt;In the terminal, use the following set of commands to configure the AWS account credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure set aws_access_key_id &amp;lt;Your acess_key_id&amp;gt; &amp;amp;&amp;amp;
aws configure set aws_secret_access_key &amp;lt;Your acess_key&amp;gt; &amp;amp;&amp;amp;
aws configure set default.region &amp;lt;Your AWS region&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the commands above, substitute your own AWS account credentials.&lt;/p&gt;

&lt;p&gt;The following command will display the &lt;code&gt;Amazon S3&lt;/code&gt; bucket name required by the &lt;code&gt;Streamlit&lt;/code&gt; application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo $BUCKET_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this use case, the S3 bucket name is displayed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgm22scuxocdb9rcw0ps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgm22scuxocdb9rcw0ps.png" alt="Image description" width="800" height="62"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the following command to clone your GitHub repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/&amp;lt;Your github directory&amp;gt;.git 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You may set up different branches in your GitHub repository to store application code. &lt;/p&gt;

&lt;p&gt;To deploy the application, enter the following set of commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd src/ &amp;amp;&amp;amp; pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2c0hkwjq1hb18rq2x5jd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2c0hkwjq1hb18rq2x5jd.png" alt="Image description" width="800" height="46"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This command installs the required Python packages for the Streamlit application.&lt;/p&gt;

&lt;p&gt;To start the Streamlit application with the web UI (a Python code walkthrough follows in the next section):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;streamlit run Basics.py&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fure39o243pa9unur9z0w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fure39o243pa9unur9z0w.png" alt="Image description" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This command starts the Streamlit application on the EC2 instance.&lt;br&gt;
The &lt;code&gt;Basics.py&lt;/code&gt; file is the main application file that you will run. Streamlit will utilize the &lt;code&gt;pages/&lt;/code&gt; directory to render the additional pages in a sidebar.&lt;br&gt;
Copy the external URL, and open a new tab to start the Streamlit application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Application Codes Tour&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You may store the Streamlit application code in a &lt;code&gt;src/&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;To run the application, first install the necessary libraries, which are listed in the &lt;code&gt;requirements.txt&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;streamlit
boto3
pandas
streamlit_pdf_viewer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;streamlit&lt;/code&gt; library will be referenced in each of the application files. This library contains the UI elements that will render the various widgets and fields used in the application.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;boto3&lt;/code&gt; will be used to interact with the AWS services. To use this library, you must configure AWS credentials within the environment the Streamlit application is served.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;pandas&lt;/code&gt; library is used to present data within the application.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;streamlit_pdf_viewer&lt;/code&gt; is a third-party, custom Streamlit component that allows you to control how PDF files are displayed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4.1 Build Streamlit Basic WebUI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Refer to the Python code below to create a simple web UI that displays text, data, a chat input, and a Markdown editor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import streamlit as st
import pandas as pd
import random

# Streamlit page configuration
st.set_page_config(layout="wide", page_title="Streamlit Basics")
st.title("Streamlit Basics")

# Streamlit container
with st.container(border=True):
    # Tabs
    text, data, chat, markdown = st.tabs(["Text", "Data", "Chat", "Markdown Editor"])

    # Tab content

    # Displaying text
    with text:
        st.title("Titles")
        st.divider()
        st.header("Headers")
        st.subheader("Subheaders")
        st.text("Normal text")
        st.markdown("***Markdown Text***")
        st.code("for i in range(8): print(i)")


    # Displaying data as a data frame
    with data:
        st.subheader("Display data using a data frame")
        df = pd.DataFrame(
            {
                "name": ["Spirited Away", "Princess Mononoke", "My Neighbor Totoro"],
                "url": ["https://m.imdb.com/title/tt0245429/", "https://m.imdb.com/title/tt0119698/", "https://m.imdb.com/title/tt0096283/"],
                "reviews": [random.randint(0, 1000) for _ in range(3)],
                "views_history": [[random.randint(0, 5000) for _ in range(30)] for _ in range(3)],
            }
        )
        st.dataframe(
            df,
            column_config={
                "name": "Title",
                "reviews": st.column_config.NumberColumn(
                    "Reviews",
                    help="Total number of reviews",
                    format="%d ⭐",
                ),
                "url": st.column_config.LinkColumn("IMDb page"),
                "views_history": st.column_config.LineChartColumn(
                    "Views (past 30 days)", y_min=0, y_max=5000
                ),
            },
            hide_index=True,
        )

    # Chat input and variables
    with chat:
        st.subheader("Enter in a prompt to the chat")
        prompt = st.chat_input("Say something")
        if prompt:
            st.write(f"You entered the following prompt: :blue[{prompt}]")

    # Displaying Markdown and editor
    with markdown:
        st.subheader("Edit and render Markdown")
        md = st.text_area('Type in your markdown string (without outer quotes)')
        with st.container():
            st.divider()
            st.subheader("Rendered Markdown")
            st.markdown(md)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💡 Containers (&lt;code&gt;st.container()&lt;/code&gt;) can be inserted into an app to provide structure for elements you choose to place inside.&lt;/p&gt;

&lt;p&gt;Tabs within the container are created using the &lt;code&gt;st.tabs()&lt;/code&gt; call. This method accepts a list of tab names as an argument and outputs separate tab objects.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;st.dataframe&lt;/code&gt; element accepts the &lt;code&gt;df&lt;/code&gt; object as an argument along with a &lt;code&gt;column_config&lt;/code&gt;. This configuration dictates how the data frame is displayed on the page.&lt;/p&gt;

&lt;p&gt;Streamlit Basics webpage:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazog4f03usenb69owlk9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazog4f03usenb69owlk9.png" alt="Image description" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Display data using a data frame:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9uz2jb4hirqf4q24krz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9uz2jb4hirqf4q24krz.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A simple chatbot that allows users to enter a prompt:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvb37y8eepm4w1g1cnso.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvb37y8eepm4w1g1cnso.png" alt="Image description" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This Streamlit Basics webpage displays simple data and text information, as well as a chatbot function. &lt;/p&gt;

&lt;p&gt;Further use cases could extend this to data portfolios with an embedded chat feature that allows users to ask questions about the data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.2 Build an interactive file upload webpage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Refer to the Python code below to create an interactive webpage for users to upload files to an S3 bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import boto3
import streamlit as st
from streamlit_pdf_viewer import pdf_viewer
from io import BytesIO

# Amazon S3 client
s3 = boto3.client('s3')
bucket_name = os.environ['BUCKET_NAME']

st.set_page_config(layout="wide")

# Streamlit columns
upload_s3, read_s3 = st.columns(2)

# Column 1: Upload to Amazon S3 using Boto3
with upload_s3:
    st.subheader("Upload to Amazon S3")
    obj = st.file_uploader(label=f"Uploading to: :green[{bucket_name}]")
    if obj is not None:
        s3.upload_fileobj(obj, bucket_name, obj.name)

# Column 2: Read from Amazon S3 using Boto3
with read_s3:
    st.subheader("Read from Amazon S3")
    response = s3.list_objects_v2(Bucket=bucket_name)
    object_list = []

    if 'Contents' in response:
        for obj in response['Contents']:
            if not obj['Key'].endswith('/'):
                object_list.append(obj['Key'])
    else:
        st.write("S3 bucket is empty")

    selected_obj = st.selectbox(f"Selecting from: :green[{bucket_name}]", object_list, index=None)
    st.caption(f"You selected: :blue[{selected_obj}]")

st.divider()

# Displaying the selected Amazon S3 object
if selected_obj is None:
    st.caption("Please select an object from S3 bucket")
else:
    response = s3.get_object(Bucket=bucket_name, Key=selected_obj)
    body = response['Body'].read()

    # Displaying the object based on the file type
    if selected_obj.endswith(".png") or selected_obj.endswith(".jpg"):
        st.image(BytesIO(body))
    elif selected_obj.endswith(".pdf"):
        pdf_viewer(body)
    else:
        st.write(body.decode('utf-8'))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💡 The boto3 library creates an S3 client that interacts with the S3 service. The custom streamlit_pdf_viewer component and the BytesIO module help render selected S3 objects on the page.&lt;/p&gt;
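
&lt;p&gt;The file-type branching at the bottom of the script can be factored into a small, testable helper. The helper below is my own refactoring sketch; the category names are invented.&lt;/p&gt;

```python
def render_kind(key):
    """Classify an S3 object key by how the app should render it."""
    lowered = key.lower()
    if lowered.endswith((".png", ".jpg")):
        return "image"
    if lowered.endswith(".pdf"):
        return "pdf"
    return "text"

print(render_kind("deck.pdf"))    # pdf
print(render_kind("chart.png"))   # image
print(render_kind("notes.txt"))   # text
```

&lt;p&gt;Note the helper lowercases the key first, which also catches uppercase extensions that the inline checks in the script would miss.&lt;/p&gt;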

&lt;p&gt;Let’s test it with a file upload. In the left column (refer to column 1 in the code above), we upload a PitchBook PDF document.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ss0cf2xq9kofn3tt9bv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ss0cf2xq9kofn3tt9bv.png" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then we check S3 to verify that the PDF has been uploaded successfully. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5vj3j2uzl0n78pcfk5y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5vj3j2uzl0n78pcfk5y.png" alt="Image description" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.3 Build a generative AI image generator&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Refer to the Python code below to create the Amazon Bedrock Titan Image Generator UI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
import base64
import streamlit as st
from io import BytesIO

# Amazon Bedrock client
bedrock = boto3.client('bedrock-runtime')
bedrock_model_id = "amazon.titan-image-generator-v1"

# Convert image data to BytesIO object
def decode_image(image_data):
    image_bytes = base64.b64decode(image_data)
    return BytesIO(image_bytes)

# Invoke Bedrock image model to generate image
def generate_image(prompt):
    body = json.dumps(
        {
            "taskType": "TEXT_IMAGE",
            "textToImageParams": {
                "text":prompt
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "quality": "standard",
                "height": 768,
                "width": 768,
                "cfgScale": 8.0,
                "seed": 100             
            }
        }
    )

    response = bedrock.invoke_model(
                modelId=bedrock_model_id,
                accept="application/json", 
                contentType="application/json",
                body=body
            )

    response_body = json.loads(response["body"].read())
    image_data = response_body["images"][0]

    return decode_image(image_data)

# Streamlit UI

with st.container():
    st.header("Amazon Bedrock Titan Image Generator", anchor=False, divider="rainbow")

    input_column, result_column = st.columns(2)

    # Text Input
    with input_column:
        st.subheader("Describe an image", anchor=False)
        prompt_text = st.text_input("Example: Two dogs sharing a bowl of spaghetti", key="prompt")

        # Generate and Clear buttons

        # Clear field function accessing session state
        def clear_field(prompt):
            st.session_state.prompt = prompt

        generate, clear = st.columns(2, gap="small")

        with generate:
            generate_button = st.button("Generate", use_container_width=True)
        # Clear field callback 
        with clear:
            st.button('Clear', on_click=clear_field, args=[''], use_container_width=True)

    # Resulting image column
    with result_column:
        st.subheader("Generated image", anchor=False)
        st.caption('Your image will appear here.')
        if generate_button:
            # Displays spinner + message while executing the generate_image function
            with st.spinner("Generating image..."):
                image = generate_image(prompt_text)
            st.image(image, use_column_width=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💡 The &lt;code&gt;generate_image&lt;/code&gt; function passes the &lt;code&gt;imageGenerationConfig&lt;/code&gt;, &lt;code&gt;taskType&lt;/code&gt;, and user &lt;code&gt;prompt&lt;/code&gt; to the Bedrock &lt;code&gt;invoke_model&lt;/code&gt; method. The method will return a JSON body that is parsed to retrieve the generated image.&lt;/p&gt;

&lt;p&gt;The generated image is decoded using the &lt;code&gt;decode_image&lt;/code&gt; function and returned.&lt;/p&gt;
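
&lt;p&gt;The request payload that &lt;code&gt;generate_image&lt;/code&gt; builds can be isolated into its own helper, which makes it easy to inspect and unit-test. The sketch below mirrors the parameter values used in the script above:&lt;/p&gt;

```python
import json

def build_titan_body(prompt, width=768, height=768, seed=100):
    """Build the invoke_model payload used by the Titan image model call above."""
    return json.dumps({
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": prompt},
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "quality": "standard",
            "height": height,
            "width": width,
            "cfgScale": 8.0,
            "seed": seed,
        },
    })

payload = json.loads(build_titan_body("Two dogs sharing a bowl of spaghetti"))
print(payload["taskType"])  # TEXT_IMAGE
```

&lt;p&gt;Keeping the payload builder separate from the Bedrock call also makes it simple to vary image size or seed per request.&lt;/p&gt;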

&lt;p&gt;The &lt;code&gt;clear_field&lt;/code&gt; function interacts with the Streamlit session state. It accepts a prompt and updates the value of the &lt;code&gt;st.session_state.prompt&lt;/code&gt;. Session state is used to store and persist state that can be manipulated with the use of callback functions. &lt;code&gt;clear_field&lt;/code&gt; is a callback function that will get invoked when a user clicks the &lt;code&gt;Clear&lt;/code&gt; button.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;result_column&lt;/code&gt; checks if the &lt;code&gt;generate_button&lt;/code&gt; value is true and calls the &lt;code&gt;st.spinner&lt;/code&gt; element to display a temporary message as the &lt;code&gt;generate_image&lt;/code&gt; function works in the background.&lt;/p&gt;

&lt;p&gt;The resulting image is then passed to a &lt;code&gt;st.image&lt;/code&gt; element to render it on the page.&lt;/p&gt;
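&lt;p&gt;💡 For reference, a minimal sketch of what &lt;code&gt;generate_image&lt;/code&gt; and &lt;code&gt;decode_image&lt;/code&gt; could look like. The model ID and &lt;code&gt;imageGenerationConfig&lt;/code&gt; values here are illustrative assumptions rather than the exact application code, and calling &lt;code&gt;invoke_model&lt;/code&gt; requires AWS credentials with Bedrock access:&lt;/p&gt;

```python
import base64
import json

# Illustrative model ID; the application's actual value may differ.
MODEL_ID = "amazon.titan-image-generator-v1"

def build_request(prompt: str) -> str:
    """Build the JSON body passed to Bedrock's invoke_model."""
    return json.dumps({
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": prompt},
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "height": 512,
            "width": 512,
            "cfgScale": 8.0,
        },
    })

def decode_image(b64_image: str) -> bytes:
    """Decode the base64 image string returned in the response body."""
    return base64.b64decode(b64_image)

def generate_image(prompt: str) -> bytes:
    """Invoke Bedrock and return raw image bytes (requires AWS credentials)."""
    import boto3  # imported lazily so the helpers above stay usable offline
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=build_request(prompt),
        accept="application/json",
        contentType="application/json",
    )
    body = json.loads(response["body"].read())
    return decode_image(body["images"][0])
```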

&lt;p&gt;Let’s try the image generator by creating an image from an entered prompt:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmk6ehwpoxvoxcb7qpdlg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmk6ehwpoxvoxcb7qpdlg.png" alt="Image description" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this article, we have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduced the &lt;code&gt;Streamlit&lt;/code&gt; Python library for building data apps and interactive apps with minimal code&lt;/li&gt;
&lt;li&gt;Deployed the Streamlit application to an &lt;code&gt;Amazon EC2&lt;/code&gt; instance&lt;/li&gt;
&lt;li&gt;Interacted with each application page and its interactive features&lt;/li&gt;
&lt;li&gt;Walked through the application code that integrates &lt;code&gt;Amazon Bedrock&lt;/code&gt; and &lt;code&gt;Amazon S3&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Reference and further readings:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What is Streamlit? &lt;a href="https://github.com/streamlit/streamlit" rel="noopener noreferrer"&gt;https://github.com/streamlit/streamlit&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;What is Amazon Bedrock? &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;https://aws.amazon.com/bedrock/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Amazon Bedrock Developer Experience. &lt;a href="https://aws.amazon.com/bedrock/developer-experience/" rel="noopener noreferrer"&gt;https://aws.amazon.com/bedrock/developer-experience/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Quickly build Generative AI applications with Amazon Bedrock &lt;a href="https://community.aws/content/2ddby9SeCKALvSz0CWUtx4Q4fPX/amazon-bedrock-quick-start?lang=en" rel="noopener noreferrer"&gt;https://community.aws/content/2ddby9SeCKALvSz0CWUtx4Q4fPX/amazon-bedrock-quick-start?lang=en&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Real-Time User Behavior Insights with Amazon Kinesis and Apache Flink: A Guide to Sessionizing Clickstream Data</title>
      <dc:creator>Abdullah Paracha</dc:creator>
      <pubDate>Fri, 19 Jul 2024 19:15:45 +0000</pubDate>
      <link>https://dev.to/aws-builders/real-time-user-behavior-insights-with-amazon-kinesis-and-apache-flink-a-guide-to-sessionizing-clickstream-data-5gcc</link>
      <guid>https://dev.to/aws-builders/real-time-user-behavior-insights-with-amazon-kinesis-and-apache-flink-a-guide-to-sessionizing-clickstream-data-5gcc</guid>
<description>&lt;p&gt;👉🏻 This walkthrough simulates a real-world use case: sessionizing clickstream data using Amazon Managed Service for Apache Flink and storing the sessions in an Amazon DynamoDB table with AWS Lambda. &lt;br&gt;
&lt;strong&gt;Note:&lt;/strong&gt; Some processes and resources have been pre-configured.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use Case Scenario&lt;/li&gt;
&lt;li&gt;AWS Tools and Resources&lt;/li&gt;
&lt;li&gt;Process and Architecture&lt;/li&gt;
&lt;li&gt;Step by Step Configuration and Walkthrough

&lt;ul&gt;
&lt;li&gt;Starting a Managed Apache Flink Studio Notebook Application&lt;/li&gt;
&lt;li&gt;Connecting to the Virtual Machine using EC2 Instance Connect 
and Simulating a Real-Time Clickstream&lt;/li&gt;
&lt;li&gt;Sessionizing the Clickstream Data using Amazon Managed 
Service for Apache Flink&lt;/li&gt;
&lt;li&gt;Storing Sessions in an Amazon DynamoDB Table with AWS Lambda&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Streaming data is data that is generated continuously by thousands of data sources, such as log files from customers using your mobile or web applications, ecommerce purchases, in-game player activity, and information from social networks. [1] &lt;/p&gt;

&lt;p&gt;Sessionizing clickstream data involves grouping user interactions into sessions, where each session represents a sequence of activities by a user within a specific time window. There are many use cases for sessionized clickstream data, including user behavior analysis, personalized recommendations, performance measurement, and fraud detection.&lt;/p&gt;

&lt;p&gt;Sessionizing clickstream data allows for real-time processing and analysis, providing immediate insights into user behavior and enabling prompt actions based on these insights. This real-time capability is essential for applications that require up-to-the-minute data to enhance user experiences and operational efficiency.&lt;/p&gt;
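&lt;p&gt;Conceptually, sessionizing is just splitting each user's time-ordered events wherever the gap between consecutive events exceeds a threshold. A minimal offline sketch (our own illustration, assuming millisecond Unix timestamps):&lt;/p&gt;

```python
from collections import defaultdict

def sessionize(events, gap_ms=10_000):
    """Assign a session_id to each event; a new session starts when the gap
    between a user's consecutive events is >= gap_ms.

    events: list of dicts with 'user_id' and 'event_timestamp' (ms).
    Returns a new list with a 'session_id' key added, e.g. 'user1_1'.
    """
    by_user = defaultdict(list)
    for e in sorted(events, key=lambda e: e["event_timestamp"]):
        by_user[e["user_id"]].append(e)

    out = []
    for user, evts in by_user.items():
        session_no = 0
        prev_ts = None
        for e in evts:
            # Gap of gap_ms or more => this event starts a new session.
            if prev_ts is not None and e["event_timestamp"] - prev_ts >= gap_ms:
                session_no += 1
            prev_ts = e["event_timestamp"]
            out.append({**e, "session_id": f"{user}_{session_no}"})
    return out

clicks = [
    {"user_id": "user1", "event_timestamp": 0},
    {"user_id": "user1", "event_timestamp": 2_000},
    {"user_id": "user1", "event_timestamp": 20_000},  # 18 s gap -> new session
]
print([c["session_id"] for c in sessionize(clicks)])  # ['user1_0', 'user1_0', 'user1_1']
```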

&lt;p&gt;In the following use case, we will simulate real-time clickstream data, sessionize it using &lt;code&gt;Amazon Managed Service for Apache Flink&lt;/code&gt;, and store the session data in an &lt;code&gt;Amazon DynamoDB&lt;/code&gt; table with &lt;code&gt;AWS Lambda&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Use Case Scenario:&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;code&gt;XShop&lt;/code&gt;, an online retail company, wants to enhance its customer experience by gaining deeper insights into user behavior on its website. They aim to achieve this by analyzing clickstream data in real-time to understand user interactions, optimize marketing strategies, and improve site performance. &lt;/p&gt;

&lt;p&gt;As the &lt;code&gt;data engineering team&lt;/code&gt; of &lt;code&gt;XShop&lt;/code&gt;, we aim to build a solution on AWS that collects clickstream data and stores session data for downstream processing and analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. AWS tools and resources:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Amazon Managed Service for Apache Flink&lt;/code&gt; is a fully managed service that enables you to perform analysis using SQL and other tools on streaming data in real time. There are no servers and clusters to manage, and there is no compute and storage infrastructure to set up. You pay only for the resources you use. [2]&lt;/p&gt;

&lt;p&gt;Use cases for &lt;code&gt;Amazon Managed Service for Apache Flink&lt;/code&gt; include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Streaming extract, transform, and load (ETL) jobs&lt;/li&gt;
&lt;li&gt;Real-time log analysis&lt;/li&gt;
&lt;li&gt;Ad-tech and digital marketing analytics&lt;/li&gt;
&lt;li&gt;Stateful stream processing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;Amazon DynamoDB&lt;/code&gt; is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. You can use &lt;code&gt;Amazon DynamoDB&lt;/code&gt; to create a database table that can store and retrieve any amount of data, and serve any level of request traffic. [3]&lt;/p&gt;

&lt;p&gt;&lt;code&gt;AWS Lambda&lt;/code&gt; is a compute service that runs your code in response to events and automatically manages the compute resources. You can combine &lt;code&gt;AWS Lambda&lt;/code&gt; with other AWS services to, for example, preprocess data before feeding it into machine learning models or execute code in response to changes in data. [4]&lt;/p&gt;

&lt;p&gt;In this use case, we will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;Amazon Kinesis&lt;/code&gt; and &lt;code&gt;Amazon Managed Service for Apache Flink&lt;/code&gt; to analyze clickstream data&lt;/li&gt;
&lt;li&gt;Create an &lt;code&gt;AWS Lambda&lt;/code&gt; function that adds records to an &lt;code&gt;Amazon DynamoDB&lt;/code&gt; table&lt;/li&gt;
&lt;li&gt;Configure Amazon Kinesis to send results to your AWS Lambda function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Process &amp;amp; Architecture:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As the data engineering team, we will use Amazon Kinesis and Amazon Managed Service for Apache Flink to sessionize sample clickstream data and output it to DynamoDB using an AWS Lambda function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0afegpw1vs3qwmxgu2av.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0afegpw1vs3qwmxgu2av.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Step-by-Step configuration &amp;amp; walkthrough:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.1 Starting a Managed Apache Flink Studio Notebook Application&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Currently, we have two data streams in Kinesis:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(1) click-stream data, which will ingest incoming click events from a data source&lt;/p&gt;

&lt;p&gt;(2) session-stream data, which will ingest sessionized click stream data to be consumed by an AWS Lambda function&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxc7xpj4t8ebx10reswf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxc7xpj4t8ebx10reswf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In Managed Apache Flink, create a studio notebook to process the click events and sessionize them using Structured Query Language (SQL).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fclvfo5se7bhnl9ye62ci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fclvfo5se7bhnl9ye62ci.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.2 Connecting to the Virtual Machine using EC2 Instance Connect and Simulating a Real-Time Clickstream:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to an EC2 instance &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftj6frnk9kbbfrzqkdfxx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftj6frnk9kbbfrzqkdfxx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a JSON file for a click event in the shell&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

echo '{
  "user_id": "$USER_ID",
  "event_timestamp": "$EVENT_TIMESTAMP",
  "event_name": "$EVENT_NAME",
  "event_type": "click",
  "device_type": "desktop"
}' &amp;gt; click.json


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Hint:&lt;br&gt;
💡 - The built-in Bash command &lt;strong&gt;echo&lt;/strong&gt; prints a JSON template&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bash shell redirection then redirects the output of the echo command to a file (creating it if it doesn't exist) called &lt;strong&gt;click.json&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;To put records into Kinesis and simulate a clickstream&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

DATA_STREAM="click-stream"
USER_IDS=(user1 user2 user3)
EVENTS=(checkout search category detail navigate)
for i in $(seq 1 5000); do
    echo "Iteration: ${i}"
    export USER_ID="${USER_IDS[RANDOM%${#USER_IDS[@]}]}";
    export EVENT_NAME="${EVENTS[RANDOM%${#EVENTS[@]}]}";
    export EVENT_TIMESTAMP=$(($(date +%s) * 1000))
    JSON=$(cat click.json | envsubst)
    echo $JSON
    aws kinesis put-record --stream-name $DATA_STREAM --data "${JSON}" --partition-key 1 --region us-west-2
    session_interval=15
    click_interval=2
    if ! (($i%60)); then
        echo "Sleeping for ${session_interval} seconds" &amp;amp;&amp;amp; sleep ${session_interval}
    else
        echo "Sleeping for ${click_interval} second(s)" &amp;amp;&amp;amp; sleep ${click_interval}
    fi
done


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Hint:&lt;br&gt;
💡 - Sample user IDs and event types are set up at the beginning&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A loop that will execute 5000 times and a sleep statement&lt;/li&gt;
&lt;li&gt;Statements that randomly select a user id and an event type, and assign them along with the current timestamp to variables&lt;/li&gt;
&lt;li&gt;A statement that uses the &lt;strong&gt;envsubst&lt;/strong&gt; command to substitute defined environment variables in the JSON template&lt;/li&gt;
&lt;li&gt;A statement invoking the AWS command-line interface tool, putting the templated JSON record into the Kinesis Data Stream&lt;/li&gt;
&lt;li&gt;A condition at the end of the loop that sleeps for a few seconds, or periodically for longer, to simulate the end of a session&lt;/li&gt;
&lt;/ul&gt;
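&lt;p&gt;💡 The same simulator can be sketched in Python with &lt;code&gt;boto3&lt;/code&gt; instead of the AWS CLI. This is an illustrative equivalent of the Bash loop above (function names are ours), and actually putting records requires AWS credentials:&lt;/p&gt;

```python
import json
import random
import time

USER_IDS = ["user1", "user2", "user3"]
EVENTS = ["checkout", "search", "category", "detail", "navigate"]

def build_click_event(user_id: str, event_name: str, ts_ms: int) -> str:
    """Render the same JSON record the Bash template produces."""
    return json.dumps({
        "user_id": user_id,
        "event_timestamp": ts_ms,
        "event_name": event_name,
        "event_type": "click",
        "device_type": "desktop",
    })

def simulate_clickstream(stream_name="click-stream", iterations=5000,
                         region="us-west-2", session_interval=15, click_interval=2):
    """Put randomized click records into Kinesis (requires AWS credentials)."""
    import boto3  # lazy import so build_click_event stays usable offline
    kinesis = boto3.client("kinesis", region_name=region)
    for i in range(1, iterations + 1):
        record = build_click_event(
            random.choice(USER_IDS),
            random.choice(EVENTS),
            int(time.time() * 1000),  # Unix time in milliseconds
        )
        kinesis.put_record(StreamName=stream_name, Data=record, PartitionKey="1")
        # Periodically sleep longer to simulate the end of a session.
        time.sleep(session_interval if i % 60 == 0 else click_interval)
```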

&lt;ul&gt;
&lt;li&gt;The templated JSON and also the JSON response from Kinesis for each record put into the Data Stream&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F920yw37hygy8r978yl8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F920yw37hygy8r978yl8w.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the studio notebook, open in Apache Zeppelin and create a new notebook&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd64z3prnghi5eg4xto3l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd64z3prnghi5eg4xto3l.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create tables for the Kinesis Data Streams&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

%flink.ssql(type = update)
DROP TABLE IF EXISTS click_stream;
CREATE TABLE click_stream (
          user_id STRING,
          event_timestamp BIGINT,
          event_name STRING,
          event_type STRING,
          device_type STRING,
          event_time AS TO_TIMESTAMP(FROM_UNIXTIME(event_timestamp/1000)),
            WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kinesis',
  'stream' = 'click-stream',
  'aws.region' = 'us-west-2',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);
DROP TABLE IF EXISTS session_stream;
CREATE TABLE session_stream (
          user_id STRING,
          session_timestamp BIGINT,
          session_time TIMESTAMP(3),
          session_id STRING,
          event_type STRING,
          device_type STRING
) PARTITIONED BY (user_id) WITH (
  'connector' = 'kinesis',
  'stream' = 'session-stream',
  'aws.region' = 'us-west-2',
  'format' = 'json'
);


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Hint:&lt;br&gt;
💡 - The WITH clause for each table specifies configuration options for a connector. Here the connector is kinesis, denoting that these tables are backed by AWS Kinesis Data Streams. The value of the stream option for each table matches the name of a Kinesis Data Stream you observed earlier.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;view data from the stream&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

%flink.ssql
SELECT * FROM click_stream LIMIT 10;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;The data is being put into the click-stream Kinesis data stream by the Bash script run earlier on the Amazon EC2 instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnz1xr7bc0xony2gwpxyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnz1xr7bc0xony2gwpxyl.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.3 Sessionizing the Clickstream Data using Amazon Managed Service for Apache Flink:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Managed Service for Apache Flink allows you to continuously run SQL on streaming data, processing the data in real-time, and sending the results to many different destinations.  &lt;/p&gt;

&lt;p&gt;We will use SQL to determine the beginning of a new session for a user in the simulated clickstream.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identify events that start a new session&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

%flink.ssql
SELECT 
  *, 
  CASE WHEN event_timestamp - LAG(event_timestamp) OVER (
    PARTITION BY user_id 
    ORDER BY 
      event_time
  ) &amp;gt;= (10 * 1000) THEN 1 ELSE 0 END as new_session 
FROM 
  click_stream;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;💡 This SQL query uses the &lt;code&gt;LAG&lt;/code&gt; SQL function to compare a record with a previous record. The &lt;code&gt;PARTITION BY user_id&lt;/code&gt; clause means it will restrict comparisons to records where the &lt;code&gt;user_id&lt;/code&gt; is the same.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;ORDER BY event_time&lt;/code&gt; clause means records will be compared in ascending order.&lt;/p&gt;

&lt;p&gt;When the difference between the &lt;code&gt;event_timestamp&lt;/code&gt; fields of two records is at least ten seconds, the &lt;code&gt;new_session&lt;/code&gt; field will be 1, denoting the start of a new session. When &lt;code&gt;new_session&lt;/code&gt; is 0, the event is a continuation of a session.&lt;/p&gt;

&lt;p&gt;Notice that the comparison multiplies ten by a thousand. This is because the event_timestamp field is &lt;a href="https://en.wikipedia.org/wiki/Unix_time" rel="noopener noreferrer"&gt;Unix time&lt;/a&gt; and includes milliseconds.  So multiplying by a thousand is required to get ten seconds in milliseconds.&lt;/p&gt;
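&lt;p&gt;The same &lt;code&gt;LAG&lt;/code&gt; comparison can be mirrored in plain Python for one user's time-ordered events (a sketch of ours, using millisecond timestamps as above):&lt;/p&gt;

```python
def new_session_flags(timestamps_ms, gap_ms=10_000):
    """Mirror the SQL: flag 1 when the gap from the previous event is >= gap_ms."""
    flags = []
    for i, ts in enumerate(timestamps_ms):
        prev = timestamps_ms[i - 1] if i > 0 else None  # LAG(event_timestamp)
        flags.append(1 if prev is not None and ts - prev >= gap_ms else 0)
    return flags

# 13 s gap between the 2nd and 3rd events starts a new session.
print(new_session_flags([0, 2_000, 15_000, 16_000]))  # [0, 0, 1, 0]
```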

&lt;ul&gt;
&lt;li&gt;To get only the new session boundaries&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

%flink.ssql
SELECT 
  *, 
  user_id || '_' || CAST(
    SUM(new_session) OVER (
      PARTITION BY user_id 
      ORDER BY 
        event_time
    ) AS STRING
  ) AS session_id 
FROM 
  (
    SELECT 
      *, 
      CASE WHEN event_timestamp - LAG(event_timestamp) OVER (
        PARTITION BY user_id 
        ORDER BY 
          event_time
      ) &amp;gt;= (10 * 1000) THEN 1 ELSE 0 END as new_session 
    FROM 
      click_stream
  ) 
WHERE 
  new_session = 1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This query wraps the previous one, derives a &lt;code&gt;session_id&lt;/code&gt; per user from the running sum of session boundaries, and keeps only the rows where &lt;code&gt;new_session&lt;/code&gt; is 1. In the application, this result is inserted into the &lt;code&gt;session_stream&lt;/code&gt; table; because that insert query produces no output and is expected to be long-running, you will see a Duration displayed at the bottom of the paragraph that shows how long the query has been running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1g0qo06hkohuf7zq9et.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1g0qo06hkohuf7zq9et.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💡 Amazon Managed Service for Apache Flink can be customized using Java JAR files you provide yourself. This means you could use Java code to load or deliver data to and from any network-accessible system. As an example, data could be loaded from a legacy API that has a Java library available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.4 Storing Sessions in an Amazon DynamoDB Table with AWS Lambda:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the AWS Lambda dashboard, in the source code section, update the source code:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

from __future__ import print_function
import boto3
import base64
from json import loads
dynamodb_client = boto3.client('dynamodb')
table_name = "Table_Name"   #change to your table name
def lambda_handler(event, context):
    records = event['Records']
    output = []
    success = 0
    failure = 0
    for record in records:
        try:
            payload = base64.b64decode(record['kinesis']['data'])
            data_item = loads(payload)
            ddb_item = {
                'session_id': {'S': data_item['session_id']},
                'session_time': {'S': data_item['session_time']},
                'user_id': {'S': data_item['user_id']}
            }
            dynamodb_client.put_item(TableName=table_name, Item=ddb_item)
            success += 1
            output.append({'recordId': record['eventID'], 'result': 'Ok'})
        except Exception:
            failure += 1
            output.append({'recordId': record['eventID'], 'result': 'DeliveryFailed'})
    print('Successfully delivered {0} records, failed to deliver {1} records'.format(success, failure))
    return {'records': output}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
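&lt;p&gt;💡 The decoding path of this handler can be exercised locally, without AWS, by constructing a fake Kinesis event the way Lambda receives it. A small sketch (helper names are ours):&lt;/p&gt;

```python
import base64
import json

def make_kinesis_event(items):
    """Build a minimal Kinesis event payload like the one Lambda receives."""
    return {
        "Records": [
            {
                "eventID": f"shardId-000:{i}",
                "kinesis": {
                    "data": base64.b64encode(json.dumps(item).encode()).decode()
                },
            }
            for i, item in enumerate(items)
        ]
    }

def decode_records(event):
    """The same base64 + JSON decoding the handler performs per record."""
    return [
        json.loads(base64.b64decode(r["kinesis"]["data"]))
        for r in event["Records"]
    ]

sessions = [{"session_id": "user1_1", "session_time": "2024-07-19 19:15:45",
             "user_id": "user1"}]
event = make_kinesis_event(sessions)
assert decode_records(event) == sessions  # round-trips cleanly
```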

&lt;ul&gt;
&lt;li&gt;Configure the session data stream events as an event source for this function&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Trigger configuration&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Select a source&lt;/strong&gt;: Select &lt;strong&gt;Kinesis&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kinesis stream&lt;/strong&gt;: Select &lt;strong&gt;session-stream&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovpfpe1vfmnybzat81dm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovpfpe1vfmnybzat81dm.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In DynamoDB, go to Explore Items section&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuh3f4xnfwwq9y0u4snb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuh3f4xnfwwq9y0u4snb1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are processing the records in the &lt;strong&gt;session-stream&lt;/strong&gt; Kinesis data stream with an AWS Lambda function and putting the records into a DynamoDB table. &lt;/p&gt;

&lt;p&gt;In this lab, you started a Kinesis Data Analytics application, simulated a real-time click-stream, sessionized the click-stream using your Kinesis Data Analytics application, and finally, you processed the output of your Kinesis Data Analytics stream with a Lambda function that sends the data to a DynamoDB table.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this walkthrough, we have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Started a Kinesis Data Analytics application&lt;/li&gt;
&lt;li&gt;Simulated a real-time clickstream&lt;/li&gt;
&lt;li&gt;Sessionized the clickstream using the Kinesis Data Analytics 
application&lt;/li&gt;
&lt;li&gt;Processed the output of the Kinesis Data Analytics stream with a 
Lambda function&lt;/li&gt;
&lt;li&gt;Sent the processed data to a DynamoDB table using the Lambda 
function&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Reference and Further Readings:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;[1] &lt;strong&gt;What is streaming data?&lt;/strong&gt; &lt;a href="https://aws.amazon.com/streaming-data/" rel="noopener noreferrer"&gt;https://aws.amazon.com/streaming-data/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[2] &lt;strong&gt;Amazon Managed Service for Apache Flink&lt;/strong&gt; &lt;a href="https://aws.amazon.com/managed-service-apache-flink/" rel="noopener noreferrer"&gt;https://aws.amazon.com/managed-service-apache-flink/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[3] &lt;strong&gt;Amazon DynamoDB&lt;/strong&gt; &lt;a href="https://docs.aws.amazon.com/dynamodb/" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/dynamodb/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[4] &lt;strong&gt;AWS Lambda&lt;/strong&gt; &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;https://aws.amazon.com/lambda/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS PartyRock Event Planner Assistant</title>
      <dc:creator>Abdullah Paracha</dc:creator>
      <pubDate>Sat, 09 Mar 2024 15:29:46 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-partyrock-event-planner-assistant-4il</link>
      <guid>https://dev.to/aws-builders/aws-partyrock-event-planner-assistant-4il</guid>
      <description>&lt;p&gt;&lt;strong&gt;Concept: Event Planner Assistant&lt;/strong&gt;&lt;br&gt;
The "Event Planner Assistant" is a generative AI application designed to help users plan their virtual parties. It uses AI to suggest party themes, music playlists, and interactive activities tailored to the host's preferences and the nature of the event.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Theme Suggestion:&lt;/strong&gt; Generates creative party themes based on the type of event and user preferences.&lt;br&gt;
&lt;strong&gt;Playlist Creation:&lt;/strong&gt; Recommends music playlists that match the party's theme and mood.&lt;br&gt;
&lt;strong&gt;Activity Ideas:&lt;/strong&gt; Proposes interactive games and activities suitable for the event.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Building the Application on AWS PartyRock&lt;/strong&gt;&lt;br&gt;
Let's imagine the steps and simple code snippets that might be involved in creating such an application on AWS PartyRock, using a mix of pseudocode and descriptions to illustrate the process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Set Up User Input Interface&lt;/strong&gt;&lt;br&gt;
First, we create a simple user interface where the host can input details about the party, such as the type of event, preferred music genres, and any specific themes or activities they're interested in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;InputForm:
- EventType: [Birthday, Graduation, Casual Get-together, ...]
- MusicPreferences: [Pop, Rock, Electronic, Jazz, ...]
- ThemePreferences: [Input Text]
- ActivityInterest: [Games, Quizzes, Dance, ...]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Theme Suggestion Logic&lt;/strong&gt;&lt;br&gt;
Using a predefined AI model, the application generates a list of party themes based on the event type and theme preferences.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def generate_theme(event_type, theme_preferences):
    # Imagine calling an AI model here
    suggested_themes = AIModel.generate_themes(event_type, theme_preferences)
    return suggested_themes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Playlist Creation Logic&lt;/strong&gt;&lt;br&gt;
The application uses another AI model to curate a playlist based on the party's theme and the host's music preferences.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def create_playlist(theme, music_preferences):
    # Imagine calling an AI model here
    playlist_links = AIModel.create_playlist(theme, music_preferences)
    return playlist_links
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Activity Ideas Generation&lt;/strong&gt;&lt;br&gt;
Finally, the application suggests interactive activities and games that match the chosen theme and the host's interest in activities.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def suggest_activities(theme, activity_interest):
    # Imagine calling an AI model here
    activities = AIModel.suggest_activities(theme, activity_interest)
    return activities
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5: Compile and Present Suggestions&lt;/strong&gt;&lt;br&gt;
The application compiles the suggestions from each step and presents them to the user in an interactive format, possibly with options to customize further or explore alternatives.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user_input = gather_user_input()
theme_suggestions = generate_theme(user_input.eventType, user_input.themePreferences)
playlist = create_playlist(theme_suggestions[0], user_input.musicPreferences)  # Assume first theme
activities = suggest_activities(theme_suggestions[0], user_input.activityInterest)

present_to_user(theme_suggestions, playlist, activities)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
While this example uses pseudocode and assumes the existence of AI models and a platform like AWS PartyRock, it illustrates how one could conceptualize and design an AI-driven application for event planning. The real power of generative AI applications lies in their ability to personalize and enhance experiences, offering unique and tailored suggestions that cater to individual preferences and needs.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building Serverless Systems - Serverlesspresso</title>
      <dc:creator>Abdullah Paracha</dc:creator>
      <pubDate>Sat, 20 Jan 2024 13:36:44 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-serverless-systems-serverlesspresso-16cg</link>
      <guid>https://dev.to/aws-builders/building-serverless-systems-serverlesspresso-16cg</guid>
      <description>&lt;p&gt;In the ever-evolving world of cloud computing, serverless architecture has emerged as a game changer, offering a new paradigm for building and deploying applications. This blog delves into the concept of serverless systems, focusing on a unique approach we've named "Serverlesspresso."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Serverless Computing?&lt;/strong&gt;&lt;br&gt;
Serverless computing is a cloud-computing execution model where the cloud provider dynamically manages the allocation and provisioning of servers. Essentially, it allows developers to build and run applications and services without worrying about the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of Serverless Computing&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Cost-Effective:&lt;/strong&gt; You only pay for the resources you use.&lt;br&gt;
&lt;strong&gt;Scalability:&lt;/strong&gt; Automatically scales with your application's needs.&lt;br&gt;
&lt;strong&gt;Maintenance-Free:&lt;/strong&gt; No need to manage servers or infrastructure.&lt;br&gt;
&lt;strong&gt;Faster Time-to-Market:&lt;/strong&gt; Quicker deployment and update processes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introducing Serverlesspresso&lt;/strong&gt;&lt;br&gt;
Serverlesspresso is our coined term for building serverless systems that are as swift and efficient as your morning espresso. It's about creating systems that are not only quick to deploy but also robust and scalable, much like a well-brewed cup of coffee.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Components of Serverlesspresso&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Event-Driven Architecture:&lt;/strong&gt; Responds to events in real-time, like a barista responding to an order.&lt;br&gt;
&lt;strong&gt;Microservices-Based:&lt;/strong&gt; Like different coffee blends, each service performs a specific function.&lt;br&gt;
&lt;strong&gt;API Gateway:&lt;/strong&gt; The counter where you place your coffee order, handling and routing requests.&lt;br&gt;
&lt;strong&gt;FaaS (Function as a Service):&lt;/strong&gt; The coffee machine, executing specific functions as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpbc4orj7xznqmfiup5h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpbc4orj7xznqmfiup5h.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-World Example: A Serverless Coffee Shop&lt;/strong&gt;&lt;br&gt;
Imagine a coffee shop where each part of your coffee-making process is a separate, serverless function: one for grinding beans, another for boiling water, and so on. This is Serverlesspresso in action, where each function collaborates to deliver your perfect cup of coffee, or in tech terms, a seamless user experience.&lt;/p&gt;
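&lt;p&gt;As a rough illustration of the coffee-shop analogy, here is a minimal Python sketch in which each stage is an independent, event-driven function, as it would be on a FaaS platform. The function and event names are illustrative only, not a real AWS API:&lt;/p&gt;

```python
# Minimal sketch of the coffee-shop analogy: each stage is an
# independent, event-driven function, as it would be on a FaaS platform.
# Function and event names are illustrative only.

def grind_beans(event, context=None):
    # Runs only when an order event arrives; no idle server in between.
    return {"order_id": event["order_id"], "beans": "ground"}

def boil_water(event, context=None):
    return {"order_id": event["order_id"], "water": "boiled"}

def brew(event, context=None):
    # Combines the outputs of the upstream functions into the final result.
    return {"order_id": event["order_id"], "status": "espresso ready"}

# Simulate the event flow for one order.
order = {"order_id": 42}
result = brew({**grind_beans(order), **boil_water(order)})
print(result["status"])  # prints "espresso ready"
```

Each function could be deployed and scaled independently; the orchestration here is simulated in-process purely to show the data flow.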

&lt;p&gt;&lt;strong&gt;Advantages in this Scenario&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Efficiency:&lt;/strong&gt; Each part works independently and only when needed.&lt;br&gt;
&lt;strong&gt;Scalability:&lt;/strong&gt; Can handle one cup or a hundred without additional setup.&lt;br&gt;
&lt;strong&gt;Cost-Effectiveness:&lt;/strong&gt; Only utilizes resources for each specific task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges and Considerations&lt;/strong&gt;&lt;br&gt;
While serverless systems offer numerous benefits, they're not without challenges:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cold Starts:&lt;/strong&gt; The initial latency when a function is first invoked.&lt;br&gt;
&lt;strong&gt;Debugging and Monitoring:&lt;/strong&gt; More complex in a distributed environment.&lt;br&gt;
&lt;strong&gt;Vendor Lock-in:&lt;/strong&gt; Dependence on specific cloud providers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Future of Serverlesspresso&lt;/strong&gt;&lt;br&gt;
The serverless world is rapidly evolving, with increasing focus on reducing cold starts, enhancing monitoring tools, and mitigating vendor lock-in issues. The future looks promising, with serverless becoming more adaptable and suited for a broader range of applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Serverlesspresso symbolizes an efficient, scalable, and cost-effective approach to building serverless systems. It's an approach where the architecture is as streamlined and satisfying as your morning espresso. As we move forward, serverless will continue to revolutionize how we think about and deploy applications in the cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9qga72m3rhq456vh33z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9qga72m3rhq456vh33z.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Serverlesspresso isn't just a concept; it's the future of cloud computing, brewing one efficient, scalable solution at a time. Let's embrace this change and watch as our digital world becomes as efficient as our favorite coffee shop!&lt;/p&gt;

</description>
      <category>cloudcomputing</category>
      <category>microservices</category>
      <category>serverlesspresso</category>
      <category>devops</category>
    </item>
    <item>
      <title>AWS Cost Management: Strategies and Examples</title>
      <dc:creator>Abdullah Paracha</dc:creator>
      <pubDate>Sat, 20 Jan 2024 13:27:40 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-cost-management-strategies-and-examples-5gma</link>
      <guid>https://dev.to/aws-builders/aws-cost-management-strategies-and-examples-5gma</guid>
      <description>&lt;p&gt;Managing costs effectively in Amazon Web Services (AWS) is crucial for maximizing the benefits of cloud computing while maintaining budget control. AWS offers a range of tools and features designed to help users monitor and optimize their spending. In this blog, we'll explore some of these tools and discuss strategies for cost management, including real-world examples and visual aids.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding AWS Pricing&lt;/strong&gt;&lt;br&gt;
AWS follows a pay-as-you-go pricing model, meaning you pay only for the resources you use. This model offers flexibility but also requires careful monitoring and management to avoid unexpected costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Components of AWS Pricing:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Compute Resources:&lt;/strong&gt; Charges for EC2 instances depend on the type, size, and region.&lt;br&gt;
&lt;strong&gt;Storage Costs:&lt;/strong&gt; S3 and other storage services charge based on the amount of data stored and accessed.&lt;br&gt;
&lt;strong&gt;Data Transfer:&lt;/strong&gt; AWS charges for data transferred in and out of its services, especially across regions.&lt;br&gt;
&lt;strong&gt;Additional Services:&lt;/strong&gt; Services such as RDS and Lambda have their own pricing structures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;
Consider a scenario where you're running an EC2 instance for web hosting, using S3 for storage, and RDS for database services. The costs will include EC2 instance hours, data storage, and database instance charges.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16h1g134v77ww46nlqss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16h1g134v77ww46nlqss.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost Management Tools in AWS&lt;/strong&gt;&lt;br&gt;
AWS provides several tools to help manage and track your spending:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Cost Explorer:&lt;/strong&gt; Allows you to visualize and manage AWS spending and usage over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Budgets:&lt;/strong&gt; Enables setting custom budgets to track service costs and usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Trusted Advisor:&lt;/strong&gt; Offers recommendations for cost optimization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F917lzvnas4cdgx9yxtue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F917lzvnas4cdgx9yxtue.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Effective Cost Management Strategies&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Right-Sizing:&lt;/strong&gt; Regularly review and adjust your AWS resources to match your actual needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reserved Instances and Savings Plans:&lt;/strong&gt; Purchase reserved instances or savings plans for services you use continuously to get significant discounts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitor and Optimize Usage:&lt;/strong&gt; Use AWS Cost Explorer and Trusted Advisor to identify underutilized resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implement Cost Allocation Tags:&lt;/strong&gt; Use tags to organize and track costs by project, department, or environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automate to Save:&lt;/strong&gt; Utilize AWS Lambda and other automation tools to shut down idle resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-World Example: E-commerce Platform Cost Optimization&lt;/strong&gt;&lt;br&gt;
An e-commerce company initially faced high AWS costs due to over-provisioned resources and inefficient data storage. By implementing the following steps, they reduced their AWS bill by 30%:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switched to Reserved Instances for their stable EC2 needs.&lt;/li&gt;
&lt;li&gt;Employed Amazon S3 lifecycle policies to archive older data to less expensive storage classes.&lt;/li&gt;
&lt;li&gt;Set up AWS Budgets to monitor and alert for overages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach demonstrates the impact of strategic cost management in AWS.&lt;/p&gt;
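&lt;p&gt;The lifecycle step above could be expressed with a policy along these lines, in the JSON shape accepted by the S3 lifecycle configuration API (the prefix, day counts, and storage class here are illustrative):&lt;/p&gt;

```json
{
  "Rules": [
    {
      "ID": "archive-old-data",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

A rule like this moves objects under the given prefix to archival storage after 90 days and deletes them after a year, which is often where the largest storage savings come from.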

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrxz4xh0f6whdcso08kl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrxz4xh0f6whdcso08kl.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Effective cost management in AWS is a dynamic and ongoing process. By leveraging AWS's cost management tools and adopting best practices, businesses can optimize their cloud spending without sacrificing performance or scalability. Regular reviews and adjustments, aligned with business needs, are key to maximizing the benefits of AWS while controlling costs.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>costmanagment</category>
      <category>awscloud</category>
    </item>
    <item>
      <title>AWS Pinpoint with Lambda Function</title>
      <dc:creator>Abdullah Paracha</dc:creator>
      <pubDate>Tue, 07 Mar 2023 08:55:01 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-pinpoint-with-lambda-n01</link>
      <guid>https://dev.to/aws-builders/aws-pinpoint-with-lambda-n01</guid>
      <description>&lt;p&gt;AWS Pinpoint is a cloud-based marketing automation service offered by Amazon Web Services (AWS). It enables businesses to engage with their customers across multiple channels including email, SMS, push notifications, and voice messages. With Pinpoint, businesses can create targeted campaigns, track user engagement, and analyze campaign performance.&lt;/p&gt;

&lt;p&gt;Overall, AWS Pinpoint is a powerful marketing automation tool that can help businesses to engage with their customers effectively and drive business growth.&lt;/p&gt;

&lt;p&gt;Pinpoint offers various features such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Audience segmentation:&lt;/strong&gt; Pinpoint enables businesses to segment their audience based on various criteria such as demographics, location, behavior, and user attributes. This helps businesses to create targeted campaigns that are personalized and relevant to their audience.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-channel messaging:&lt;/strong&gt; Pinpoint supports multiple messaging channels such as email, SMS, push notifications, and voice messages. This allows businesses to reach their customers on their preferred channels.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Campaign management:&lt;/strong&gt; Pinpoint provides a user-friendly interface for creating, scheduling, and managing campaigns. It also offers A/B testing and analytics to help businesses optimize their campaigns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analytics and reporting:&lt;/strong&gt; Pinpoint provides detailed analytics and reporting on campaign performance, user engagement, and customer behavior. This helps businesses to understand their customers better and make data-driven decisions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrations:&lt;/strong&gt; Pinpoint integrates with other AWS services such as Amazon S3, Amazon Kinesis, and Amazon Redshift. It also supports third-party integrations with popular marketing automation tools such as Salesforce, Marketo, and Hubspot.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Set Up Email&lt;/strong&gt;&lt;br&gt;
Enter an email address that you will use to send email, for example your personal or work email address. Click the Verify button and keep this page open.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6haz888nk6kakcaolyru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6haz888nk6kakcaolyru.png" alt=" " width="800" height="771"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensure you have one verified email address by following the process in the Configure Email section&lt;/li&gt;
&lt;li&gt;Download the &lt;a href="https://static.us-east-1.prod.workshops.aws/public/fced5058-639b-4d68-97c5-e07ce1c5141e/static/import_customers/console_upload_segment_csv.csv" rel="noopener noreferrer"&gt;sample csv&lt;/a&gt; file and open it in a text editor on your computer.&lt;/li&gt;
&lt;li&gt;Add/edit users using the wildcard local parts notation you saw when you sent a test message.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Edit the Address column in the CSV file to reflect one of the email addresses you verified in the Configure Email section, for example &lt;a href="mailto:abdullah.qadoos@systemsltd.com"&gt;abdullah.qadoos@systemsltd.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Import a Segment&lt;/strong&gt;&lt;br&gt;
To import a segment, you define the endpoints or user IDs that belong to your segment in a comma-separated values (CSV) or JSON file. Then you import the file into Amazon Pinpoint to create the segment.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Choose Create a segment &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodl5r51g86ef9rjjg4vi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodl5r51g86ef9rjjg4vi.png" alt=" " width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Import a segment&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4d2s5pppqyhx6sz8z8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4d2s5pppqyhx6sz8z8o.png" alt=" " width="800" height="132"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Upload files from your computer and drop the CSV file you just saved in the last step into the Drop files here area, or click Choose files and browse to your CSV import file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide a descriptive name for your segment. Click Create segment to complete the import.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z6ogyp2t4p7ji2kw1en.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z6ogyp2t4p7ji2kw1en.png" alt=" " width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After a couple of seconds your segment will be imported. Click on the segment name to view its details.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To view the new endpoint created, you can either &lt;a href="https://docs.aws.amazon.com/pinpoint/latest/userguide/segments-exporting.html" rel="noopener noreferrer"&gt;export the segment&lt;/a&gt; or navigate to the AWS CloudShell console and wait until the environment is created. Replace the Amazon Pinpoint application id in the command below, paste it into the AWS CloudShell terminal, and press Enter. This should return a response containing all the information about this endpoint&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws pinpoint get-endpoint --application-id &amp;lt;place Pinpoint application_id here&amp;gt; --endpoint-id 111
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyolr73f7s80xyqfqlp46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyolr73f7s80xyqfqlp46.png" alt=" " width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create &amp;amp; execute an AWS Lambda Function&lt;/strong&gt;&lt;br&gt;
You can create or update an Amazon Pinpoint endpoint via the Amazon Pinpoint REST API, Amplify, or one of the AWS SDKs. For this example you will use the AWS SDK for Python (Boto3) and an AWS Lambda function to host and execute the code.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a new AWS Lambda Function&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8vphg1e3om53dyqyxsq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8vphg1e3om53dyqyxsq.png" alt=" " width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Author from scratch, give a Function name, for runtime select Python 3.8 and click on Create function&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gv81srio4vztwwt30v3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gv81srio4vztwwt30v3.png" alt=" " width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Code Source &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Far2k8akfcblckweph7rq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Far2k8akfcblckweph7rq.png" alt=" " width="664" height="731"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

client = boto3.client('pinpoint')

def lambda_handler(event, context):

    application_id = event['application_id']
    first_name = event['first_name']
    email_address = event['email_address']
    endpoint_id = event['endpoint_id']
    user_id = event['user_id']  
    age = event['age']
    interests = event['interests']

    response = client.update_endpoint(
        ApplicationId=application_id,
        EndpointId=endpoint_id,
        EndpointRequest={
            'Address': email_address,
            'ChannelType': 'EMAIL',
            'Metrics': {
                'age': age
            },
            'OptOut': 'NONE',
            'User': {
                'UserAttributes': {
                    'FirstName': [
                        first_name
                    ],
                    'interests': [
                        interests
                    ]
                },
                'UserId': user_id
            }
        }
    )

    return response

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Now that the code is ready, you need to allow the AWS Lambda function to perform the update_endpoint operation on your Amazon Pinpoint project. To do this, you will add an &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html" rel="noopener noreferrer"&gt;IAM policy&lt;/a&gt; to the AWS Lambda function role. Navigate to the tab Configuration &amp;gt; Permissions and click on the Role name link. This will take you to the Identity and Access Management (IAM) console, specifically to the Summary page of that role&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9b61y6rle4ms6476tj44.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9b61y6rle4ms6476tj44.png" alt=" " width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Permissions tab select Add permissions and Create inline policy&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02dn33rfw7hiejjulima.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02dn33rfw7hiejjulima.png" alt=" " width="800" height="223"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the JSON tab and paste the JSON seen below. Replace AWS-REGION with the AWS region in which you created the Amazon Pinpoint project and AWS-ACCOUNT-ID with your account ID, then select Review policy. To find your AWS-ACCOUNT-ID &lt;a href="https://docs.aws.amazon.com/general/latest/gr/acct-identifiers.html" rel="noopener noreferrer"&gt;visit this page&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "UpdateEndpoint",
            "Effect": "Allow",
            "Action": [
                "mobiletargeting:UpdateEndpoint*"
             ],
            "Resource": "arn:aws:mobiletargeting:AWS-REGION:AWS-ACCOUNT-ID:*"
        }
        ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Enter workshop_lambda_policy as the policy name and click Create policy. Once the new policy is created, you should be able to see it under the Permissions policies list in the IAM role summary page&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygyh9g1pk9hmps2p5lfb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygyh9g1pk9hmps2p5lfb.png" alt=" " width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Return to the AWS Lambda function Code tab, select the dropdown next to the orange Test button, and choose Configure test event. Using this AWS Lambda feature you can create a dummy payload to test your Lambda function. Copy the JSON below and paste it under the Event JSON section. Select hello-world as the Event template and type test as the Event name&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Replace the --- for application_id with your Amazon Pinpoint project id, which you can find by navigating to the Amazon Pinpoint console under the column Project ID&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszpdeiqtwkm7hxwvyjnc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszpdeiqtwkm7hxwvyjnc.png" alt=" " width="800" height="132"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Replace the --- for email_address with the verified email address from the lab Configure email channel, for example &lt;a href="mailto:abdullah.qadoos@systemsltd.com"&gt;abdullah.qadoos@systemsltd.com&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll down and select Save&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "application_id" : "---",
    "first_name" : "Jake",
    "email_address" : "---",
    "endpoint_id" : "222",
    "user_id" : "userid2",
    "age": 35,
    "interests": "shirts"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujvr28gsoudaer5825gt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujvr28gsoudaer5825gt.png" alt=" " width="800" height="793"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Return to the AWS Lambda function Code tab and select Deploy, which is located next to the Test button.&lt;br&gt;
To execute the code, click on the Test button. Once the code is executed it will return the response from Amazon Pinpoint, which will contain information about the update_endpoint request.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1svgw0hetemh50j2ohz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1svgw0hetemh50j2ohz.png" alt=" " width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To view the new endpoint created, navigate to the AWS CloudShell console and wait until the environment is created. Replace the Amazon Pinpoint application-id in the command below, copy and paste it into the AWS CloudShell terminal, and press Enter. This should return a response containing all the information about this endpoint&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws pinpoint get-endpoint --application-id &amp;lt;place Pinpoint application_id here&amp;gt; --endpoint-id 222

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6h59jyagudusnetje67t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6h59jyagudusnetje67t.png" alt=" " width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>watercooler</category>
    </item>
    <item>
      <title>AWS 5G: Powering Next-Generation Applications</title>
      <dc:creator>Abdullah Paracha</dc:creator>
      <pubDate>Thu, 02 Feb 2023 11:36:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-5g-powering-next-generation-applications-3oh4</link>
      <guid>https://dev.to/aws-builders/aws-5g-powering-next-generation-applications-3oh4</guid>
      <description>&lt;p&gt;The arrival of 5G networks is revolutionizing the way we interact with technology, bringing faster speeds, lower latency, and greater connectivity to a wide range of use cases, from video and audio streaming to gaming and the Internet of Things (IoT). To support the growing demand for 5G services, AWS has introduced AWS Wavelength, a fully managed service that enables developers to build and deploy 5G applications with low latency and high throughput.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sccyzt8327z0myfll25.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sccyzt8327z0myfll25.jpg" alt="Image description" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features of AWS 5G&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Low Latency: AWS Wavelength provides a high-speed, low-latency connection to the AWS cloud, enabling real-time data processing for use cases that require fast and responsive connections.&lt;/p&gt;

&lt;p&gt;Integration with AWS Services: AWS Wavelength integrates with other AWS services, such as Amazon EC2 and Amazon S3, to provide a complete solution for 5G application development. This enables developers to build and deploy 5G applications with the benefits of the AWS cloud, including security, scalability, and reliability.&lt;/p&gt;
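&lt;p&gt;As a rough sketch of what this looks like in practice (the zone names, VPC, subnet and AMI IDs below are placeholders), deploying an EC2 instance into a Wavelength Zone from the AWS CLI involves opting in to the zone, creating a subnet there, and launching into it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List available Wavelength Zones (names vary by Region and carrier)
aws ec2 describe-availability-zones --all-availability-zones --filters Name=zone-type,Values=wavelength-zone

# Opt in to a Wavelength Zone group, then create a subnet in the zone (placeholder IDs)
aws ec2 modify-availability-zone-group --group-name us-east-1-wl1 --opt-in-status opted-in
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24 --availability-zone us-east-1-wl1-bos-wlz-1

# Launch an instance in the Wavelength subnet
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.medium --subnet-id subnet-0123456789abcdef0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;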

&lt;p&gt;Support for 5G Use Cases: AWS Wavelength is designed to support a wide range of 5G use cases, including augmented and virtual reality, gaming, and autonomous vehicles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of AWS 5G&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Fast and Responsive Applications: With its low latency and high throughput, AWS 5G enables the development of fast and responsive applications that can deliver a seamless user experience.&lt;/p&gt;

&lt;p&gt;Cost-Effective: AWS Wavelength is a fully managed service that runs on AWS infrastructure, providing a cost-effective solution for 5G application development.&lt;/p&gt;

&lt;p&gt;Scalable and Reliable: AWS 5G is designed to be scalable and reliable, making it possible to build and deploy applications that can grow and change with the needs of the business.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In conclusion&lt;/strong&gt;, AWS 5G is a powerful tool for developers and organizations looking to take advantage of the benefits of 5G networks and the cloud. With its low latency, high throughput, and integration with other AWS services, AWS Wavelength provides a complete solution for 5G application development, enabling organizations to quickly and easily build and deploy 5G applications.&lt;/p&gt;

</description>
      <category>crypto</category>
      <category>web3</category>
      <category>cryptocurrency</category>
      <category>offers</category>
    </item>
    <item>
      <title>Create and Assume Roles in AWS</title>
      <dc:creator>Abdullah Paracha</dc:creator>
      <pubDate>Tue, 17 Jan 2023 13:19:31 +0000</pubDate>
      <link>https://dev.to/aws-builders/create-and-assume-roles-in-aws-2419</link>
      <guid>https://dev.to/aws-builders/create-and-assume-roles-in-aws-2419</guid>
      <description>&lt;p&gt;In this blog the objective in the AWS environment, utilizing policies and roles in the IAM console to restrict access to AWS resources and conclude by assuming a role and ensuring our policies are correct and that we have completed all objectives.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m7uZxpZ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i6vtzcoqqb61e6ea8f7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m7uZxpZ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i6vtzcoqqb61e6ea8f7y.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;br&gt;
Image: &lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i6vtzcoqqb61e6ea8f7y.png"&gt;ACloudGuru&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create the Correct S3 Restricted Policies and Roles&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the S3RestrictedPolicy IAM policy. Ensure only the appconfig buckets are accessible.

&lt;ul&gt;
&lt;li&gt;Select the S3 service and all S3 actions&lt;/li&gt;
&lt;li&gt;Select all resources except the bucket resource type&lt;/li&gt;
&lt;li&gt;Add the appconfig bucket names to the policy&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Create the S3RestrictedRole IAM role.

&lt;ul&gt;
&lt;li&gt;Set the trusted entity to another AWS account&lt;/li&gt;
&lt;li&gt;Add your account ID&lt;/li&gt;
&lt;li&gt;For permissions, select the S3RestrictedPolicy&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Revoke the AmazonS3FullAccess access policy from the developergroup.
&lt;/li&gt;
&lt;li&gt;Attach the S3RestrictedPolicy to the dev1 user.&lt;/li&gt;
&lt;/ol&gt;
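&lt;p&gt;The same policy and role can also be created from the AWS CLI. This is a sketch only: the appconfig bucket name and the account ID 111111111111 are placeholders, and the policy mirrors the console steps above (all S3 actions, restricted to the appconfig buckets):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the S3RestrictedPolicy (replace the appconfig bucket name)
aws iam create-policy --policy-name S3RestrictedPolicy --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::appconfig-bucket",
      "arn:aws:s3:::appconfig-bucket/*"
    ]
  }]
}'

# Create the S3RestrictedRole, trusting your own account (replace 111111111111)
aws iam create-role --role-name S3RestrictedRole --assume-role-policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
    "Action": "sts:AssumeRole"
  }]
}'

# Attach the policy to the role (replace the account ID in the ARN)
aws iam attach-role-policy --role-name S3RestrictedRole --policy-arn arn:aws:iam::111111111111:policy/S3RestrictedPolicy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;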

&lt;p&gt;&lt;strong&gt;Configure IAM So the dev3 User Can Assume the Role&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the AssumeS3Policy IAM policy.

&lt;ul&gt;
&lt;li&gt;Select the STS service&lt;/li&gt;
&lt;li&gt;Select AssumeRole under the write options&lt;/li&gt;
&lt;li&gt;Add the S3RestrictedRole&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Attach the AssumeS3Policy to the dev3 user.&lt;/li&gt;
&lt;li&gt;Assume the S3RestrictedRole as the dev3 user.

&lt;ul&gt;
&lt;li&gt;Log in as the dev3 user&lt;/li&gt;
&lt;li&gt;Switch roles to the S3RestrictedRole&lt;/li&gt;
&lt;li&gt;Verify access in S3&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
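&lt;p&gt;From the CLI, the dev3 user would assume the role via STS. A sketch, with a placeholder account ID, session name and bucket name:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Assume the S3RestrictedRole (returns temporary credentials)
aws sts assume-role --role-arn arn:aws:iam::111111111111:role/S3RestrictedRole --role-session-name dev3-s3-session

# After exporting the returned temporary credentials, verify access in S3
aws s3 ls s3://appconfig-bucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;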

</description>
    </item>
    <item>
      <title>Docker and Kubernetes</title>
      <dc:creator>Abdullah Paracha</dc:creator>
      <pubDate>Wed, 27 Apr 2022 12:01:53 +0000</pubDate>
      <link>https://dev.to/aws-builders/docker-and-kubernetes-4443</link>
      <guid>https://dev.to/aws-builders/docker-and-kubernetes-4443</guid>
      <description>&lt;p&gt;Docker is a technology for creating and running containers, while Kubernetes is a container orchestration technology. Let's explore how Docker and Kubernetes align and how they support cloud-native computing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Docker is a containerization platform that is used to create and run software containers. A container is a collection of one or more processes, organized under a single name and identifier. A container is isolated from the other processes running within a computing environment, be it a physical computer or a virtual machine. Basically, it’s a toolkit that makes it easier, safer and faster for developers to build, deploy and manage containers. &lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Docker Engine:&lt;/strong&gt; The runtime environment that allows developers to build and run containers.&lt;br&gt;
• &lt;strong&gt;Dockerfile:&lt;/strong&gt; A simple text file that defines everything needed to build a Docker container image, such as the OS, network specifications and file locations. It’s essentially a list of commands that Docker Engine will run in order to assemble the image.&lt;br&gt;
• &lt;strong&gt;Docker Compose:&lt;/strong&gt; A tool for defining and running multi-container applications. It uses a YAML file to specify which services are included in the application and can deploy and run containers with a single command via the Docker CLI.&lt;/p&gt;
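&lt;p&gt;To make this concrete, here is a minimal, hypothetical example: a Dockerfile for a small Python app, and the Docker CLI commands to build it into an image and run it as a container (the image name and app file are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Dockerfile (example)
FROM python:3.12-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]

# Build the image and run a container from it
docker build -t my-app:latest .
docker run --rm my-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;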

&lt;p&gt;Other Docker API features include the ability to automatically track and roll back container images, use existing containers as base images for building new containers and build containers based on application source code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes is an open-source container orchestration platform for scheduling and automating the deployment, management and scaling of containerized applications. Containers operate in a multi-container architecture called a cluster. A Kubernetes cluster includes a node designated as the master node, which schedules workloads for the worker nodes in the cluster. The master node determines where to host applications (or Docker containers), decides how to put them together and manages their orchestration. By grouping the containers that make up an application into clusters, Kubernetes facilitates service discovery and enables management of high volumes of containers throughout their lifecycles.&lt;br&gt;
Key Kubernetes functions include the following:&lt;br&gt;&lt;br&gt;
• &lt;strong&gt;Deployment:&lt;/strong&gt; Schedules and automates container deployment across multiple compute nodes, which can be VMs&lt;br&gt;
• &lt;strong&gt;Service Discovery and Load Balancing:&lt;/strong&gt; Exposes a container on the internet and employs load balancing when traffic spikes occur to maintain stability.&lt;br&gt;
• &lt;strong&gt;Auto-scaling features:&lt;/strong&gt; Automatically starts up new containers to handle heavy loads, whether based on CPU usage, memory thresholds or custom metrics.&lt;br&gt;
• &lt;strong&gt;Self-healing capabilities:&lt;/strong&gt; Restarts, replaces or reschedules containers when they fail or when nodes die, and kills containers that don’t respond to user-defined health checks.&lt;br&gt;
• &lt;strong&gt;Automated rollouts and rollbacks:&lt;/strong&gt; Rolls out application changes and monitors application health for issues, rolling back changes if something goes wrong.&lt;/p&gt;
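&lt;p&gt;The functions above map directly onto kubectl commands. As an illustrative sketch (the deployment name and container image are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Deployment: schedule 3 replicas across the cluster's worker nodes
kubectl create deployment web --image=nginx --replicas=3

# Service discovery and load balancing: expose the deployment
kubectl expose deployment web --type=LoadBalancer --port=80

# Auto-scaling: scale between 3 and 10 replicas based on CPU usage
kubectl autoscale deployment web --cpu-percent=80 --min=3 --max=10

# Automated rollout of a new image, and rollback if something goes wrong
kubectl set image deployment/web nginx=nginx:1.27
kubectl rollout undo deployment/web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;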

&lt;p&gt;Although Kubernetes and Docker are distinct technologies, they are highly complementary and make a powerful combination. Docker provides the containerization piece, enabling developers to easily package applications into small, isolated containers via the command line. Developers can then run those applications across their IT environment, without having to worry about compatibility issues. If an application runs on a single node during testing, it will run anywhere.&lt;br&gt;
When demand surges, Kubernetes provides orchestration of Docker containers, scheduling and automatically deploying them across IT environments to ensure high availability. In addition to running containers, Kubernetes provides the benefits of load balancing, self-healing and automated rollouts and rollbacks. Plus, it has a graphical user interface for ease of use.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>dataengineering</category>
      <category>softwareengineer</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
