<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Praveen Sambu</title>
    <description>The latest articles on DEV Community by Praveen Sambu (@praveensambu).</description>
    <link>https://dev.to/praveensambu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F634221%2F93197636-5c5f-4a04-bb1c-9a611f60177e.png</url>
      <title>DEV Community: Praveen Sambu</title>
      <link>https://dev.to/praveensambu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/praveensambu"/>
    <language>en</language>
    <item>
      <title>How to Deal with Compromised Access in AWS</title>
      <dc:creator>Praveen Sambu</dc:creator>
      <pubDate>Thu, 26 Jan 2023 01:50:15 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-deal-with-compromised-access-in-aws-1i8</link>
      <guid>https://dev.to/aws-builders/how-to-deal-with-compromised-access-in-aws-1i8</guid>
      <description>&lt;p&gt;Today, I would like to share an interesting situation that happened in my personal AWS account; it should be a good learning experience too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw89e96yzcn4zu141ooet.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw89e96yzcn4zu141ooet.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
This is a very common scenario, but we should be careful.&lt;/p&gt;

&lt;p&gt;My AWS account had only the root user and one IAM user. The root user had MFA enabled, but the regular IAM user did not.&lt;/p&gt;

&lt;p&gt;I use my regular IAM user for programmatic access. Since this is a personal account, I gave it administrator privileges (not a good practice; learn to be specific and grant least-privilege access). I generally use aws configure to set up my machine to connect to AWS services, and I was experimenting to see how to respond if those credentials were compromised by an attacker.&lt;/p&gt;

&lt;p&gt;If these credentials are ever stolen from my local machine, my entire AWS account is exposed: an attacker can read, write, and access any personal data, which would be even worse in an organization account.&lt;br&gt;
So I switched to temporary credentials using &lt;strong&gt;aws sts get-session-token&lt;/strong&gt;.&lt;br&gt;
This way, access to the AWS account is granted only for a limited amount of time, and once the credentials expire you need to log in again to access the account. Then I imagined: if these temporary credentials are compromised, how do I protect the AWS account?&lt;/p&gt;
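&lt;p&gt;A sketch of what that looks like from the CLI (the account ID, MFA device name, and token code below are placeholders):&lt;/p&gt;

```shell
# Request temporary credentials valid for one hour, tied to an MFA device.
aws sts get-session-token \
    --duration-seconds 3600 \
    --serial-number arn:aws:iam::123456789012:mfa/my-user \
    --token-code 123456
```

&lt;p&gt;The response contains an access key, secret key, and session token that stop working after the requested duration.&lt;/p&gt;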

&lt;p&gt;There is an option to disable or delete the user, but if this user is account-specific and has access to production, then disabling or deleting it takes the production system down. I also found that after disabling the user, its temporary credentials can still access AWS services until the token expires. So we should be aware of which policy is assigned to that user account (never grant permissions that are not required).&lt;br&gt;
In this case, I updated the administrator's policy to "&lt;strong&gt;DENY&lt;/strong&gt;" all access, and found that the account was then protected.&lt;/p&gt;
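&lt;p&gt;A sketch of that lockdown with the CLI (the user and policy names here are hypothetical). Unlike disabling the user, an explicit Deny is evaluated immediately, even for existing temporary credentials:&lt;/p&gt;

```shell
# Attach an inline policy that explicitly denies every action on every resource.
aws iam put-user-policy \
    --user-name compromised-user \
    --policy-name DenyAll \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{"Effect": "Deny", "Action": "*", "Resource": "*"}]
    }'
```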

&lt;p&gt;I wanted to narrate this story, which I ran as an exercise, so that if you see this issue in real time you do not delete or disable the user in a rush to contain the incident. Review the roles involved, and use CloudTrail to analyze what kind of changes were made.&lt;/p&gt;

</description>
      <category>resources</category>
      <category>guide</category>
      <category>research</category>
      <category>help</category>
    </item>
    <item>
      <title>AWS Ledger Database and Its Use Case</title>
      <dc:creator>Praveen Sambu</dc:creator>
      <pubDate>Sun, 14 Nov 2021 01:59:39 +0000</pubDate>
      <link>https://dev.to/praveensambu/aws-ledger-database-and-its-usecase-2ai4</link>
      <guid>https://dev.to/praveensambu/aws-ledger-database-and-its-usecase-2ai4</guid>
      <description>&lt;p&gt;A Ledger Database for Financial Applications&lt;br&gt;
I would like to explain how we can use Amazon QLDB for applications built on financial transactions. It is a great fit for a secure data store.&lt;/p&gt;

&lt;p&gt;Amazon Quantum Ledger Database, also known as Amazon QLDB, was released in September 2019.&lt;br&gt;
So what actually is Amazon QLDB? It is yet another fully managed, serverless database service, this one designed as a ledger database. It has a whole host of use cases; one quick example is recording financial data over a period of time. QLDB allows you to maintain a complete history of accounting and transactional data between multiple parties in an immutable, transparent, and cryptographically verifiable way, using the SHA-256 hash algorithm, which makes it highly secure.&lt;br&gt;
This means you can rest assured that nothing has changed, or can be changed, thanks to the database journal, which is configured as append-only: essentially an immutable transaction log that records all entries in sequence over time. This service therefore removes the need for an organization to develop and implement its own ledger applications.&lt;br&gt;
This may sound similar to blockchain technology where a ledger is also used. However, in blockchain, that ledger is distributed across multiple hosts in a decentralized environment, whereas QLDB is owned and managed by a central and trusted authority. This removes the requirement of a consensus of everyone across the network, which is required with blockchain.&lt;br&gt;
Often, ledger functionality is bolted onto relational databases to meet these requirements, and this quickly becomes difficult to manage: relational tables are not immutable, which makes errors hard to trace, especially during audits.&lt;br&gt;
I mentioned earlier that QLDB is serverless. So again, the administration of having to maintain the underlying infrastructure is removed and all scaling is managed by AWS, which includes any read and write limitations of the database.&lt;br&gt;
Amazon QLDB is great for scenarios where you need to maintain an accurate record of changes with the utmost integrity assurance. To help you understand how QLDB is used across different industries, let me look at a couple of example use cases for the service.&lt;br&gt;
QLDB would be a great fit in the insurance industry, where a claim, by its nature, can be a long-winded and extensive process involving many different parties and operations over a long period of time. You could implement different processes, systems, and applications to track, audit, and record the claim history via relational databases and custom auditing mechanisms that verify the validity of the records. However, all of this could be replaced with Amazon QLDB: its immutable, append-only design prevents previous data entries from being manipulated, which helps prevent fraudulent activity.&lt;br&gt;
So we can see that Amazon QLDB is really about maintaining an immutable ledger with cryptographic abilities to enable the verifiable tracking of changes over time. There are many different use cases where this can be used, and I’ve just highlighted a couple here to provide more of an understanding of how this would be used.&lt;br&gt;
Let’s now take a look at some of the concepts and components that make up the service.&lt;br&gt;
Data in your QLDB database is placed into tables of Amazon Ion documents. Amazon Ion was created internally at Amazon and is an open-source, self-describing data serialization format that is a superset of JSON (JavaScript Object Notation). This means that any JSON document is also a valid Amazon Ion document. Ion documents can store both structured and unstructured data.&lt;br&gt;
Going back to the tables: each one effectively comprises a group of Amazon Ion documents and their revisions. As with most documents, a revision usually signifies a change, an update, or a replacement; something in the document changes, creating a revision. As we know, QLDB by design maintains an audit history of all changes, so each revision is saved in addition to all previous versions of the same Ion document. This journal of transactional changes allows you to easily query document history across all document iterations.&lt;br&gt;
Changes to these documents are made via database transactions. During a transaction, Amazon QLDB reads data from its ledger, performs the update as required, and then saves the changes to the journal. To be clear, the journal acts as an append-only transaction log and maintains the source of truth for each document and its entire history of changes, ensuring that it remains immutable.&lt;/p&gt;

&lt;p&gt;Version changes are stored and cryptographically verified for tracking&lt;br&gt;
Each time a change is committed to the journal, a sequence number is added to identify its place in the change history. In addition, a SHA-256 hash is used for verification purposes, producing a cryptographic digest file of the journal. This digest acts as a signature of the changes made to your document, and of your document's entire history at that point in time. It can then be used to verify the integrity of the changes against the digest file, helping to ensure that the data in your document has not been altered in any way since it was first written to QLDB.&lt;br&gt;
For a deeper understanding of how this whole process works, please review the following: &lt;a href="https://docs.aws.amazon.com/qldb/latest/developerguide/verification.html"&gt;https://docs.aws.amazon.com/qldb/latest/developerguide/verification.html&lt;/a&gt;&lt;br&gt;
Now that we have a base understanding of the ledger itself, its tables, documents, and journal, let me look at how storage is used in QLDB.&lt;br&gt;
Amazon QLDB uses two different methods of storage, each for very different uses, these being journal storage and index storage.&lt;br&gt;
Journal storage is the storage that is used to hold the history of changes made within the ledger database. So this will hold all of the immutable changes and history to the ion documents within your table.&lt;br&gt;
Index storage on the other hand is the storage that is used to provision the tables and indexes within the ledger database and it’s optimized for querying.&lt;br&gt;
With QLDB being a fully managed serverless service, this storage is managed for you and there are no specifications to select or make during the creation of your ledger database. In the next lecture, I will show you how simple it is to create a ledger and load some sample data into the database.&lt;br&gt;
The final point I want to cover with Amazon QLDB is its integration with Amazon Kinesis through the use of QLDB streams.&lt;br&gt;
Amazon Kinesis makes it easy to collect, process, and analyze real-time streaming data so you can get timely insights and react quickly to new information. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, your data lakes and data warehouses, or build your own real-time applications using this data. Kinesis enables you to process and analyze data as it arrives and respond in real-time instead of having to wait until all your data is collected before the processing can begin.&lt;br&gt;
Using QLDB streams, you are able to capture all changes that are made to the journal and feed this information into an Amazon Kinesis data stream in near real-time. This allows you to architect solutions whereby other AWS services could process this data from Kinesis to provide additional benefit. For example, this is a great way to implement event-driven architectures. Event-driven architectures are triggered by events that occur within the infrastructure.&lt;br&gt;
So in this case, suppose your ledger contains financial accounts recording transactions. An event could then be a Lambda function that triggers an SNS notification to the account owner when an account balance drops below a certain threshold, following an update made to the journal.&lt;/p&gt;
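&lt;p&gt;As a preview of how simple creating a ledger is, the CLI call might look like this (the ledger name is a placeholder):&lt;/p&gt;

```shell
# Create a QLDB ledger; STANDARD is the recommended permissions mode.
aws qldb create-ledger \
    --name demo-ledger \
    --permissions-mode STANDARD
```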

</description>
      <category>aws</category>
      <category>database</category>
      <category>serverless</category>
      <category>programming</category>
    </item>
    <item>
      <title>9 Types of AWS Databases and Their Purposes</title>
      <dc:creator>Praveen Sambu</dc:creator>
      <pubDate>Sat, 06 Nov 2021 15:01:14 +0000</pubDate>
      <link>https://dev.to/praveensambu/9-types-of-aws-databases-and-its-purposes-3pp1</link>
      <guid>https://dev.to/praveensambu/9-types-of-aws-databases-and-its-purposes-3pp1</guid>
      <description>&lt;p&gt;AWS offers 9 types of managed database services. They fall into two primary categories:&lt;br&gt;
 1) relational &lt;br&gt;
 2) NoSQL databases.&lt;/p&gt;

&lt;p&gt;The Amazon Relational Database Service is the managed service providing relational databases.&lt;br&gt;&lt;br&gt;
The engines include Amazon Aurora, MySQL, MariaDB, Postgres, Microsoft SQL Server, and Oracle.&lt;/p&gt;

&lt;p&gt;The managed NoSQL database offerings include Key-Value stores, Document stores, In-Memory Databases, Graph stores, Time Series stores, Ledger databases, and Search databases.&lt;/p&gt;

&lt;p&gt;Amazon DynamoDB is a key-value store.  Data is accessed using a key that retrieves a value.  It’s a binary operation.  Data is returned or it isn’t.&lt;/p&gt;
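&lt;p&gt;For illustration, a single key lookup with the CLI might look like this (the table and key names are hypothetical):&lt;/p&gt;

```shell
# Fetch one item by its key: either the item comes back, or nothing does.
aws dynamodb get-item \
    --table-name Users \
    --key '{"UserId": {"S": "u-123"}}'
```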

&lt;p&gt;Amazon DocumentDB is a document database.  Document databases store semi-structured data, and the data structure is embedded in the document itself.  Data is accessed using a key, but because the value can have a structure, it can also be queried to return specific information.&lt;/p&gt;

&lt;p&gt;Amazon ElastiCache is an in-memory store.  The primary use case for an in-memory store is caching. A cache improves database performance by serving often requested data from memory instead of from a disk or from a memory-intensive calculation. &lt;/p&gt;

&lt;p&gt;Amazon Neptune is a graph database.  Graph databases store and analyze the relationships between things.  Graph databases can visualize people in terms of a social network but they can also be used to see how systems and processes are connected.&lt;/p&gt;

&lt;p&gt;Amazon Timestream is a Time Series database.  Time series databases answer questions about trends and events.  While it is a type of key-value store with the time as the key, a time series database looks at ranges of data points to calculate answers.&lt;/p&gt;

&lt;p&gt;Amazon Quantum Ledger Database is a ledger database.  A ledger database uses cryptographic controls to ensure that the data stored is immutable.  Records are not edited.  Instead, when information changes, a new version of the record is created.  It also uses hash chaining, much like a blockchain, to ensure data integrity: the hash created to verify each record combines that record's data with the hash of the previous record.  If any record is tampered with, the chain is broken.&lt;/p&gt;
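&lt;p&gt;The hash chaining described above can be sketched with ordinary command-line tools (a toy illustration, not QLDB's actual digest format):&lt;/p&gt;

```shell
# Hash of the first record is computed from its data alone.
h1=$(printf 'record-1-data' | sha256sum | cut -d' ' -f1)

# Hash of the next record combines the previous hash with the new data.
h2=$(printf '%s%s' "$h1" 'record-2-data' | sha256sum | cut -d' ' -f1)

# Altering record 1 would change h1, which in turn changes h2: the broken
# chain reveals the tampering.
echo "$h2"
```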

&lt;p&gt;Amazon Elasticsearch Service is a search database.  Search databases create indexes to help people find important information.  Web searching is a common application, but searching is also done in product catalogs, enterprise documentation, and content management systems.&lt;/p&gt;

&lt;p&gt;Relational databases are best for transactional workloads that have highly structured data and require ACID compliance.  ACID compliance (atomicity, consistency, isolation, durability) means that transactions take a database from one consistent state to another consistent state.&lt;/p&gt;

&lt;p&gt;Relational databases can scale but, when they do, it is done vertically.  They’re made bigger by adding more CPU, memory, or expanding existing storage.&lt;/p&gt;

&lt;p&gt;Relational databases are often used for online transactional processing applications.  These types of applications usually work on small amounts of data per transaction to record an exchange of goods or services.  &lt;/p&gt;

&lt;p&gt;Relational databases use a schema to define the structure of the data stored.  Schemas are built based on reporting, data validation, and compliance requirements.  The database cannot be used until the design has been completed and implemented.&lt;/p&gt;

&lt;p&gt;Relational databases report on and manage known processes.&lt;/p&gt;

&lt;p&gt;NoSQL databases are a family of databases that share certain characteristics.  They usually scale horizontally, by adding compute nodes.  They do not generally require a schema to define data.  Data is usually semi-structured or unstructured.&lt;/p&gt;

&lt;p&gt;The term NoSQL originally meant that a programming language other than SQL was used to access data.  This has been expanded to mean “Not Only SQL” because some databases in the NoSQL family can use a modified version of SQL to access data.&lt;/p&gt;

&lt;p&gt;NoSQL databases are often used for online analytical processing (OLAP) workloads.  OLAP workloads answer questions that are not known in advance.  That is, where a relational database report might show the number of items sold in a given month, an analytic application might reveal the trends behind why certain items sold while others didn’t.&lt;/p&gt;

&lt;p&gt;NoSQL databases use unstructured or semi-structured data.  This means that developers can write code using a NoSQL database without having to wait until the design has been completed.&lt;/p&gt;

&lt;p&gt;Data drives business.  Data drives innovation.  Data can be as unique as a fingerprint or as ubiquitous as water.&lt;/p&gt;

&lt;p&gt;The cloud has the promise of agility, scalability, and elasticity.  Agility is about changing to meet needs, scalability means that growth can happen when needed, and elasticity is about turning the lights off when leaving a room.&lt;/p&gt;

&lt;p&gt;Picking the correct database to manage data will take some effort.  It could be that several database types are needed.  &lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>cloud</category>
    </item>
    <item>
      <title>What is AWS Step Function?</title>
      <dc:creator>Praveen Sambu</dc:creator>
      <pubDate>Fri, 22 Oct 2021 18:43:49 +0000</pubDate>
      <link>https://dev.to/praveensambu/what-is-aws-step-function-267f</link>
      <guid>https://dev.to/praveensambu/what-is-aws-step-function-267f</guid>
      <description>&lt;p&gt;AWS Step Functions provides serverless orchestration for modern applications. Orchestration centrally manages a workflow by breaking it into multiple steps, adding flow logic, and tracking the inputs and outputs between the steps. As your applications run, Step Functions maintains the application state, tracking exactly which workflow step your application is in, and stores an event log of the data passed between application components. That means if the workflow is interrupted for any reason, your application can pick up right where it left off.&lt;/p&gt;

&lt;p&gt;What is a state?&lt;/p&gt;

&lt;p&gt;Step Functions is based on state machines and tasks. A state machine is a workflow. A task is a state in a workflow that represents a single unit of work that another AWS service performs. Each step in a workflow is a state.&lt;/p&gt;

&lt;p&gt;States are elements in your state machine. A state is referred to by its name, which can be any string but it must be unique within the scope of the entire state machine. Individual states can make decisions based on their input, perform actions, and pass output to other states.&lt;/p&gt;

&lt;p&gt;States can provide a variety of functions in your state machine: &lt;/p&gt;

&lt;p&gt;Perform some work in your state machine.&lt;br&gt;
Make a choice between branches of activity.&lt;br&gt;
Stop an activity with a failure or success.&lt;br&gt;
Simply pass its input to its output or inject some fixed data.&lt;br&gt;
Provide a delay for a certain amount of time or until a specified time or date.&lt;br&gt;
Begin parallel branches of activity.&lt;br&gt;
Dynamically iterate steps.&lt;/p&gt;
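&lt;p&gt;These state types are declared in a JSON state machine definition written in the Amazon States Language. A minimal sketch (the Lambda ARN is a placeholder) might look like this:&lt;/p&gt;

```json
{
  "Comment": "Minimal example: one Task state, then succeed",
  "StartAt": "DoWork",
  "States": {
    "DoWork": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:DoWork",
      "Next": "Done"
    },
    "Done": { "Type": "Succeed" }
  }
}
```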

&lt;p&gt;Different types of states&lt;/p&gt;

&lt;p&gt;AWS Step Functions supports eight different types of states: Task, Pass, Choice, Wait, Succeed, Fail, Parallel, and Map.&lt;/p&gt;

&lt;p&gt;Why Use AWS Step Functions?&lt;br&gt;
AWS Step Functions helps with any computational problem or business process that can be subdivided into a series of steps. Application development is faster and more intuitive with Step Functions, because you can define and manage the workflow of your application independently from its business logic. Making changes to one does not affect the other. You can easily update and modify workflows in one place, without having to struggle with managing, monitoring, and maintaining multiple point-to-point integrations. Step Functions frees your functions and containers from excess code, so you can write your applications faster and make them more resilient and easier to maintain.&lt;/p&gt;

&lt;p&gt;Step Functions features&lt;/p&gt;

&lt;p&gt;Step Functions is an AWS-managed serverless service, with the usual serverless characteristics: automatic scaling, high availability, pay-per-use pricing, and built-in security and compliance.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How to Set Up EFS Mount Helper</title>
      <dc:creator>Praveen Sambu</dc:creator>
      <pubDate>Thu, 01 Jul 2021 03:25:41 +0000</pubDate>
      <link>https://dev.to/praveensambu/how-to-set-up-efs-mount-helper-3adk</link>
      <guid>https://dev.to/praveensambu/how-to-set-up-efs-mount-helper-3adk</guid>
      <description>&lt;p&gt;EFS offers two methods to connect your Linux-based EC2 instance to your EFS file system. Both use a process called mounting, whereby you mount the EFS file system on your instance through a mount target. The original method used the standard Linux NFS client to perform the mount. Since then, a newer method has been developed and is now the preferred option: the EFS mount helper.&lt;br&gt;
The EFS mount helper is a utility that has to be installed on your EC2 instance. It is designed to simplify the entire mount process by using predefined, recommended mount options commonly used with the NFS client. It also provides built-in logging to help with any troubleshooting that might be required; the logs are stored in the following location:&lt;br&gt;
/var/log/amazon/efs&lt;br&gt;
In addition to mounting an EFS file system on running instances, you can also use the EFS mount helper to automatically mount EFS during the boot process by editing the /etc/fstab configuration file. Before using the EFS mount helper to connect to your EFS file system from your EC2 instances, a couple of prerequisites must be in place. First and foremost, you need to have created and configured your EFS file system, in addition to your EFS mount targets. You must have an EC2 instance running with the EFS mount helper installed; this instance will be used to connect to the EFS file system.&lt;br&gt;
The instance must also be in the VPC and configured to use the Amazon DNS servers, with DNS hostnames enabled. You must have a security group configured that allows NFS access from your Linux instance to the file system, and you must be able to connect to your Linux instance. I now want to provide a quick demonstration of how to create an EFS file system from within the AWS Management Console.&lt;/p&gt;
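&lt;p&gt;For the automatic mount at boot mentioned above, the /etc/fstab entry using the mount helper might look like this (the file system ID is a placeholder):&lt;/p&gt;

```
# device        mount point  type  options      dump pass
fs-12345678:/   /mnt/efs     efs   _netdev,tls  0    0
```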

&lt;p&gt;Set up&lt;br&gt;
Navigate to EC2 and go to Security Groups.&lt;br&gt;
Create a security group, select the VPC, and add an inbound rule for NFS, as NFS will need to be accessed by the EC2 instances.&lt;/p&gt;

&lt;p&gt;Configuring the security group&lt;br&gt;
Now let’s search for EFS in the AWS console and select Create Elastic File System.&lt;br&gt;
There are 3 steps to follow here:&lt;br&gt;
Select the VPC with which the EFS should be associated.&lt;br&gt;
Select the mount targets, which are in each AZ, and select the public subnet and the security group created in the previous step.&lt;br&gt;
Add tags, lifecycle management (to move files from Standard to Infrequent Access), throughput mode, performance mode, and encryption. These options are predefined and optional, and can be changed per your requirements.&lt;br&gt;
Once the file system is created, you can see its DNS name when you open its EFS details page.&lt;br&gt;
Now let’s mount the EFS file system on the EC2 instances.&lt;/p&gt;

&lt;p&gt;This mount information is also available on your EFS screen when you select how to configure the mount.&lt;/p&gt;

&lt;p&gt;Now SSH into your EC2 instances and run the following bash commands:&lt;br&gt;
This command installs the EFS utilities on the EC2 instance: sudo yum install -y amazon-efs-utils.&lt;br&gt;
This command creates the mount directory on the EC2 instance: sudo mkdir efs.&lt;br&gt;
This command mounts the EFS file system on the EC2 instance: sudo mount -t efs fs-obc672co:/ efs.&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
  </channel>
</rss>
