<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Neeraj Iyer</title>
    <description>The latest articles on DEV Community by Neeraj Iyer (@neeraj_iyer_980804515a5da).</description>
    <link>https://dev.to/neeraj_iyer_980804515a5da</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1035138%2F6efa9dab-1349-45fb-809f-15658ec4a84d.jpeg</url>
      <title>DEV Community: Neeraj Iyer</title>
      <link>https://dev.to/neeraj_iyer_980804515a5da</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/neeraj_iyer_980804515a5da"/>
    <language>en</language>
    <item>
      <title>Amazon Quicksight - Best practices Part 2</title>
      <dc:creator>Neeraj Iyer</dc:creator>
      <pubDate>Sat, 27 Dec 2025 17:41:27 +0000</pubDate>
      <link>https://dev.to/neeraj_iyer_980804515a5da/amazon-quicksight-best-practices-part-2-295c</link>
      <guid>https://dev.to/neeraj_iyer_980804515a5da/amazon-quicksight-best-practices-part-2-295c</guid>
      <description>&lt;p&gt;This is a part 2 of Amazon Quicksight best practices that a organization should follow to be consistent across all teams and follow a same process of building and deploying Quicksight assets&lt;/p&gt;

&lt;p&gt;Link to part 1 - &lt;a href="https://builder.aws.com/content/2rfpwl1gNpQIMhmTGuJxq0E9sl2/amazon-quicksight-best-practices-part-1" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data preparation&lt;/strong&gt;&lt;br&gt;
Data preparation is a critical step in creating accurate and insightful analyses in QuickSight.&lt;br&gt;
Some best practices to follow while preparing your data for analysis are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Query mode&lt;/strong&gt; -&lt;br&gt;
Ensure data arrives from your data source through either direct query or SPICE query mode. If you need to view your data instantly, consider direct query. If you use complex calculations and are comfortable caching your data, consider SPICE mode; you will then need to refresh your data manually or set up an automated trigger to refresh it at regular intervals.&lt;/p&gt;
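&lt;p&gt;To sketch what an automated SPICE refresh could look like, the helper below builds the request payload that boto3's QuickSight create_refresh_schedule call expects. The account ID, dataset ID, and times are hypothetical placeholders.&lt;/p&gt;

```python
# Sketch: build the request payload for QuickSight's create_refresh_schedule
# API (e.g. via boto3). Account/dataset IDs and times are hypothetical.
def build_refresh_schedule(account_id, dataset_id, schedule_id, time_of_day):
    return {
        "AwsAccountId": account_id,
        "DataSetId": dataset_id,
        "Schedule": {
            "ScheduleId": schedule_id,
            "ScheduleFrequency": {
                "Interval": "DAILY",          # also MINUTE15, MINUTE30, HOURLY, WEEKLY, MONTHLY
                "TimeOfTheDay": time_of_day,  # "HH:MM"
            },
            "RefreshType": "FULL_REFRESH",    # or "INCREMENTAL_REFRESH"
        },
    }

params = build_refresh_schedule("123456789012", "sales-dataset", "daily-refresh", "02:00")
# A real call would then be: boto3.client("quicksight").create_refresh_schedule(**params)
```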

&lt;p&gt;&lt;strong&gt;Have an understanding of your data and optimize it&lt;/strong&gt;:&lt;br&gt;
Understand your data well and optimize it for performance where needed. If you do not need all the data, filter it at the source and bring in only what is required. Consider pre-aggregating (sums, averages, max, min) at the source to reduce data volume.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Joining tables and establishing relationships&lt;/strong&gt; –&lt;br&gt;
If you have to join multiple tables, establish appropriate relationships between them. Use the correct join type (inner, full, left, right) based on the relationship between the datasets. Prefer joins over Cartesian products, which lead to performance issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating data dictionaries and process flow&lt;/strong&gt;:&lt;br&gt;
It’s good practice to document the entire process flow using tools like Lucidchart, and to document all the fields, sources, relationships, and transformations applied in your dataset. This makes it easier for people new to the project to understand the data.&lt;br&gt;
&lt;strong&gt;QuickSight One Big Table (OBT) data model&lt;/strong&gt; – QuickSight follows a One Big Table data model rather than a star schema. QuickSight’s roadmap includes features for joining multiple fact tables.&lt;br&gt;
&lt;strong&gt;Ensure data is clean and normalized&lt;/strong&gt; – Standardize date and address formats, remove duplicates, and handle missing values where needed.&lt;br&gt;
&lt;strong&gt;Apply row-level or column-level security to your data&lt;/strong&gt; – If you have to restrict data by user, you can apply row-level security either while preparing the data or after the dataset is created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apply your data transformations in QuickSight or in your data layer&lt;/strong&gt; - the best practice is to decide, based on your use case, whether your data requires complex or basic transformations. Performance is an important consideration here: some transformations will not perform as well in QuickSight as they would in the data layer. For complex ETL tasks, consider using AWS Glue before bringing the data into QuickSight. If you are comfortable with SQL and your transformations are simple, you can use custom SQL queries within QuickSight to filter, join, and transform data as it is imported. So choose where a transformation should run based on your data and performance considerations.&lt;/p&gt;
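&lt;p&gt;As a hedged sketch of the custom SQL route: when defining a dataset programmatically (for example with boto3's create_data_set), the custom SQL lives in a PhysicalTableMap entry shaped like the one below. The data source ARN, query, and columns are hypothetical placeholders.&lt;/p&gt;

```python
# Sketch: a PhysicalTableMap entry using custom SQL, in the shape accepted by
# QuickSight's create_data_set API. ARN, query, and columns are hypothetical.
def custom_sql_table(data_source_arn, name, sql, columns):
    return {
        "CustomSql": {
            "DataSourceArn": data_source_arn,
            "Name": name,
            "SqlQuery": sql,
            "Columns": [{"Name": c, "Type": t} for c, t in columns],
        }
    }

table = custom_sql_table(
    "arn:aws:quicksight:us-east-1:123456789012:datasource/sales-db",
    "regional_sales",
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region",
    [("region", "STRING"), ("total", "DECIMAL")],
)
```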

&lt;p&gt;&lt;strong&gt;Sharing analyses/dashboards/datasets&lt;/strong&gt;:&lt;br&gt;
Sharing analyses, dashboards &amp;amp; datasets in Amazon QuickSight is straightforward, and QuickSight helps ensure you share your resources only with the people who should have access to that specific data, maintaining data security.&lt;br&gt;
Some best practices to follow while sharing analyses/dashboards are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Place your artifacts in folders&lt;/strong&gt;&lt;br&gt;
Organize artifacts like analyses, dashboards, and datasets into folders or shared folders so that it is easy to share them with other users working with the same artifacts. Name your folders appropriately so they are easily recognizable by other team members.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provide appropriate access permissions&lt;/strong&gt;&lt;br&gt;
You can grant viewer or co-owner permission to your users; choose based on the use case and user role. Grant co-owner permission to as few users as possible so that not everyone can modify the artifacts.&lt;/p&gt;
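&lt;p&gt;As a sketch, viewer and co-owner roles map to action lists passed to QuickSight's update_dashboard_permissions API (for example via boto3). The principal ARNs below are hypothetical placeholders, and the action lists follow the documented dashboard permission sets.&lt;/p&gt;

```python
# Sketch: viewer vs co-owner action sets for QuickSight
# update_dashboard_permissions. Principal ARNs are hypothetical placeholders.
VIEWER_ACTIONS = [
    "quicksight:DescribeDashboard",
    "quicksight:ListDashboardVersions",
    "quicksight:QueryDashboard",
]
CO_OWNER_ACTIONS = VIEWER_ACTIONS + [
    "quicksight:DescribeDashboardPermissions",
    "quicksight:UpdateDashboard",
    "quicksight:UpdateDashboardPermissions",
    "quicksight:UpdateDashboardPublishedVersion",
    "quicksight:DeleteDashboard",
]

def grant(principal_arn, co_owner=False):
    # One entry of the GrantPermissions list for update_dashboard_permissions.
    return {
        "Principal": principal_arn,
        "Actions": CO_OWNER_ACTIONS if co_owner else VIEWER_ACTIONS,
    }

viewer = grant("arn:aws:quicksight:us-east-1:123456789012:user/default/analyst")
```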

&lt;p&gt;&lt;strong&gt;Utilize groups to share access&lt;/strong&gt;&lt;br&gt;
Instead of sharing artifacts with individual users, share them with a group, which simplifies user management. Once users are grouped, you can grant the group access to your resources easily.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Utilize QuickSight embedding feature&lt;/strong&gt;&lt;br&gt;
If you want to give users access within your own application, you can embed your dashboard in your web application or portal. This is secure and follows security best practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Follow naming conventions&lt;/strong&gt;&lt;br&gt;
As mentioned previously, follow naming conventions for folders, groups and file naming for all artifacts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sharing dashboards using links&lt;/strong&gt;&lt;br&gt;
If you are sharing dashboards with external users via a link, make sure you are sharing with the right people. As readers of the dashboard they will not have access to the underlying data, but you may still want to restrict who can see your dashboards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Utilize email reporting&lt;/strong&gt;&lt;br&gt;
QuickSight has an email reporting feature where you can send dashboards automatically or schedule snapshots of the dashboards. You can share these with all users who are intended to view the dashboard or underlying data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Utilize QuickSight’s recent and favorites options&lt;/strong&gt;&lt;br&gt;
Users can add their dashboards/analyses to favorites, from where they can easily access those resources again. However, it is still better to use folders/shared folders to share resources among different users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit resource permissions periodically&lt;/strong&gt;&lt;br&gt;
Make sure your resources are audited frequently. There may be stale analyses/dashboards that are no longer used or were created for testing purposes; this is why naming resources appropriately is encouraged, so it is easy to decide whether to keep or purge them. If any users in the account have left the organization, make sure their resources are shared with other users before those user accounts are deleted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security best practices&lt;/strong&gt;&lt;br&gt;
Amazon Quicksight provides several security features that will not only protect your data but also maintain confidentiality and integrity while accessing different AWS resources and external resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contact your security administrator to get access to QuickSight&lt;/strong&gt;&lt;br&gt;
Organizations often have multiple AWS accounts, each with its own QuickSight account. Make sure you know which project you are working on and gain access to the appropriate account. Once you have access through your identity provider (for example, Microsoft Entra), you should be able to see your AWS accounts and navigate to the one that hosts your QuickSight account. Determine which permissions you need as a QuickSight user; most users will not need administrator access, so request only the permissions your role on the project requires. Your administrator will assign the appropriate IAM users/roles/policies for you to access different resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remember the principle of least privilege&lt;/strong&gt;&lt;br&gt;
AWS follows the principle of least privilege: give each user the minimum access needed for different resources. In QuickSight, you should be granted the least privilege your role requires. Admin access should be limited to a few users; most builders should be authors (those who create analyses/dashboards), and users who will only consume and view dashboards should be readers. Sharing of resources should follow the same principle of least privilege.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitor access using CloudTrail&lt;/strong&gt;&lt;br&gt;
You can enable AWS CloudTrail to log all API calls made by or on behalf of QuickSight users, which helps audit user activity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use IAM roles to secure data sources&lt;/strong&gt;&lt;br&gt;
Use IAM roles to access other AWS services like S3, RDS, Redshift, Aurora, and Athena. This way you grant users only the specific permissions they need rather than broad access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rotate credentials regularly&lt;/strong&gt;&lt;br&gt;
You can use AWS Secrets Manager to securely store and rotate credentials. Regularly rotate access credentials such as IAM keys, database credentials, and API keys used in QuickSight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implement row-level security&lt;/strong&gt;&lt;br&gt;
If you want to restrict access to your dataset to specific users or groups, you can apply row-level security. With this restriction, only users authorized to view the data can see it.&lt;/p&gt;
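&lt;p&gt;As a minimal sketch of how this is wired up programmatically: when creating or updating a dataset via the API (for example with boto3), row-level security is attached through a RowLevelPermissionDataSet structure that points at a separate rules dataset. The rules-dataset ARN below is a hypothetical placeholder.&lt;/p&gt;

```python
# Sketch: the RowLevelPermissionDataSet structure passed to QuickSight's
# create_data_set / update_data_set. The rules-dataset ARN is hypothetical.
def row_level_permissions(rules_dataset_arn):
    return {
        "Arn": rules_dataset_arn,
        "PermissionPolicy": "GRANT_ACCESS",  # or "DENY_ACCESS"
        "FormatVersion": "VERSION_2",
        "Status": "ENABLED",
    }

rls = row_level_permissions(
    "arn:aws:quicksight:us-east-1:123456789012:dataset/rls-rules"
)
```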

&lt;p&gt;&lt;strong&gt;Enable multi-factor authentication&lt;/strong&gt;&lt;br&gt;
Have your users use multi-factor authentication for an additional layer of authentication or security when trying to access Quicksight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitoring using CloudWatch&lt;/strong&gt;&lt;br&gt;
Use Amazon CloudWatch to monitor the performance, health and usage of Quicksight. If you need to set up some metrics or alarms you can do so with CloudWatch to detect any anomalies or security breaches.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Private connections to your database through VPC&lt;/strong&gt;&lt;br&gt;
You can use Virtual Private Cloud (VPC) to access private data sources from Quicksight without exposing your data to public internet. To enhance security, you can establish VPC endpoints to connect to data sources like RDS/Redshift in your VPC.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implement security best practices for embedding&lt;/strong&gt;&lt;br&gt;
While embedding QuickSight dashboards or the authoring experience in applications, make sure you use a secure session by generating secure session URLs so that there is no unauthorized access to your embedded dashboards. It is always advisable to use IAM roles and session-based permissions for embedding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enable data encryption&lt;/strong&gt;&lt;br&gt;
If you have mission-critical data that should not be exposed publicly, encrypt your data at rest using AWS Key Management Service (KMS) and ensure data in transit is protected with SSL/TLS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Networking best practices&lt;/strong&gt;&lt;br&gt;
While setting up QuickSight or any dataset in it, networking is an important aspect that cannot be ignored if you want secure, efficient, and reliable access. Some best practices to follow are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Direct connectivity to data sources&lt;/strong&gt;&lt;br&gt;
If you have data sources like RDS, Redshift, or on-prem databases within a Virtual Private Cloud (VPC), configure QuickSight to access these resources via VPC connections.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Private VPC connections&lt;/strong&gt;&lt;br&gt;
As noted under the security best practices, VPC connections let QuickSight reach private data sources without exposing your data to the public internet; establish VPC endpoints to connect to data sources like RDS/Redshift in your VPC.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network latency and performance&lt;/strong&gt;&lt;br&gt;
It is good practice to connect QuickSight to data sources in the same AWS Region as QuickSight to improve performance and reduce latency. You can use SPICE to store your data in memory, which reduces network congestion and improves performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Auditing and monitoring&lt;/strong&gt;&lt;br&gt;
You can enable VPC flow logs to monitor the traffic between Quicksight and your data sources, which will help in identifying network issues. To track your Quicksight API calls you can use AWS CloudTrail and CloudWatch for monitoring and setting up alarms based on metrics you define.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IP addressing&lt;/strong&gt;&lt;br&gt;
If you would like to use fixed IP addresses for firewall configurations, refer to this document, which provides the AWS IP ranges - &lt;a href="https://docs.aws.amazon.com/quicksight/latest/user/regions.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/quicksight/latest/user/regions.html&lt;/a&gt;. Update your firewall rules regularly in case the ranges change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security groups&lt;/strong&gt;&lt;br&gt;
As a rule of thumb, follow the principle of least privilege: allow only the necessary inbound and outbound traffic to and from QuickSight, configured in your security groups. You must configure security groups to allow QuickSight access to your data sources. For example, to let QuickSight reach your database, open port 5432 for PostgreSQL, 5439 for Redshift, or 3306 for MySQL.&lt;/p&gt;
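&lt;p&gt;As a sketch of the port mapping above, this builds an ingress rule in the shape EC2's authorize_security_group_ingress expects, allowing traffic only from the security group attached to QuickSight's VPC connection (the group ID is a hypothetical placeholder):&lt;/p&gt;

```python
# Sketch: build an IpPermissions entry for ec2 authorize_security_group_ingress,
# opening the database port to QuickSight's VPC connection security group.
DB_PORTS = {"postgres": 5432, "redshift": 5439, "mysql": 3306}

def ingress_rule(db, quicksight_sg_id):
    port = DB_PORTS[db]
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        # Allow only QuickSight's security group, not 0.0.0.0/0
        "UserIdGroupPairs": [{"GroupId": quicksight_sg_id}],
    }

rule = ingress_rule("postgres", "sg-0123456789abcdef0")
```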

&lt;p&gt;&lt;strong&gt;Folders/shared folders for groups of users&lt;/strong&gt;&lt;br&gt;
Using folders or shared folders in Amazon QuickSight is crucial, as it helps you organize resources and share them with multiple users or groups. This feature lets you collaborate with your team.&lt;br&gt;
A folder in QuickSight is for users to organize their own resources, while shared folders are used to collaborate with multiple users.&lt;/p&gt;

&lt;p&gt;Some best practices to follow while using folder or shared folders are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shared folder naming conventions&lt;/strong&gt;&lt;br&gt;
As mentioned earlier, follow a folder naming convention that is descriptive enough for other users to follow when you use shared folders. Some examples are Sales_dashboards, Marketing_resources, HR_resources, and Insurance_resources.&lt;/p&gt;

&lt;p&gt;Another use case for folders is segregating resources by environment. If you have dev, test, and prod environments, you should have three folders, one for each environment’s resources. If you do not have the ability to create shared folders, reach out to your admin to get one created. By segregating by environment, you will not accidentally push development updates directly to prod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Permissions to folders&lt;/strong&gt;&lt;br&gt;
Again, follow the principle of least privilege by granting folder access only to those who require it. Give users viewer or edit permissions based on their role.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Folder organization&lt;/strong&gt;&lt;br&gt;
Organize your folders in a clear hierarchy based on your needs, for example by date and time, department, project, or data source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managing a shared folder&lt;/strong&gt;&lt;br&gt;
Assign an administrator for your shared folder so they can manage access permissions for all users of the resources in that folder. Also audit the shared folders periodically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Content organization within folders&lt;/strong&gt;&lt;br&gt;
You can organize datasets, dashboards, and analyses in separate folders so that it is easy for users to access and work with them.&lt;br&gt;
&lt;strong&gt;Archive or purge old content&lt;/strong&gt;&lt;br&gt;
Periodically check that all resources in the folders are being used; otherwise archive them or delete them after a set time frame of 60/90 days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Notify team on any updates in shared folders&lt;/strong&gt;&lt;br&gt;
It is good practice to notify your team members when you make updates to any QuickSight artifacts and to ensure that multiple changes are not made at the same time. This is another reason to use separate folders for different environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating calculated fields&lt;/strong&gt;&lt;br&gt;
When creating calculated fields in QuickSight, follow some best practices so that your calculations provide accurate insights.&lt;/p&gt;

&lt;p&gt;Some best practices to follow while creating calculated fields are-&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Naming conventions for calculated fields&lt;/strong&gt;&lt;br&gt;
Use descriptive names for calculated fields that reflect the purpose of the calculation. For example, if you calculate sales for the current year to date, name it Sales_CYTD; other examples are total_sales, total_marketingcost, etc. Once created, a calculated field is identified in your field list by an “=” prefix.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Define a purpose for your calculation&lt;/strong&gt;&lt;br&gt;
Before diving into the actual calculation, define its purpose: are you creating a new field, a field used to filter data, or one that transforms existing data? This will make it easier to build the calculation without errors. If you want to learn the syntax or get a basic understanding of how to create a calculated field, refer to this link - &lt;a href="https://docs.aws.amazon.com/quicksight/latest/user/adding-a-calculated-field-analysis.html" rel="noopener noreferrer"&gt;calculated fields&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Improve performance&lt;/strong&gt;&lt;br&gt;
Keep your calculations as simple as possible by breaking them down into smaller steps or multiple calculated fields. You can also utilize SPICE (Super-fast, Parallel, In-memory Calculation Engine) for complex calculations, which will improve performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding your data&lt;/strong&gt;&lt;br&gt;
Before creating a calculated field, understand your data, relationships with different tables, data types of different fields. This will help you clean your data if something does not look correct or even help you to find duplicates or standardize date formats etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reuse calculations&lt;/strong&gt;&lt;br&gt;
If you have a complex calculation, break it into multiple calculated fields that can be reused in other calculations, so you do not have to create a new calculated field each time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choose your data type&lt;/strong&gt;&lt;br&gt;
Make sure to choose the correct data type based on your existing data. If you need to parse existing data to a number, string, or date, you can use functions such as toInt(), toString(), and parseDate().&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Handling null values&lt;/strong&gt;&lt;br&gt;
Be very careful when creating calculated fields over data that has null values. If you want to display 0 where there is a null, build the calculated field with coalesce() or ifelse() so that you do not get an error.&lt;/p&gt;
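&lt;p&gt;For example, a null-safe field could be written in either of these equivalent ways using QuickSight’s expression syntax (the {sales} field name is a hypothetical placeholder):&lt;/p&gt;

```
coalesce({sales}, 0)
ifelse(isNull({sales}), 0, {sales})
```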

&lt;p&gt;&lt;strong&gt;Validate your calculations&lt;/strong&gt;&lt;br&gt;
It is a good idea to validate your calculations by applying the calculated fields on a small subset of data before you finalize your calculations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version control&lt;/strong&gt;&lt;br&gt;
If you decide to update your calculations, it is good practice to rename the existing calculations with a version number at the end before you finalize, so that you do not have to re-create them in case you need to go back. Examples are total_sales_v1 and total_sales_v2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation&lt;/strong&gt;&lt;br&gt;
It is good practice to document all your calculations in your data dictionary so that you have a reference for every calculated field, which also helps when new members join your team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CI/CD capabilities - BIOps capabilities&lt;/strong&gt;&lt;br&gt;
Amazon QuickSight does not provide its own version control feature, but version control and BI operations (BIOps) can be achieved by using the QuickSight CLI commands to manage datasets, analyses, and dashboards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Manual version control&lt;/strong&gt;&lt;br&gt;
You can use CLI commands to back up QuickSight artifacts like datasets, analyses, and dashboards, executing them from a terminal with the AWS CLI or a command prompt on your local machine.&lt;br&gt;
The QuickSight artifacts are exported as JSON files that can be stored locally or in an S3 bucket for backup or version control. You can also automate the CLI calls from Python, PHP, or another language using the appropriate SDK libraries and export the results as JSON. Here is a reference to the QuickSight CLI commands - &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/quicksight/" rel="noopener noreferrer"&gt;CLI commands reference&lt;/a&gt;.&lt;br&gt;
You can also move your resources from one account to another if you have different environments like Dev, Test &amp;amp; Prod, and you can use Git to store historical QuickSight resources in case you need to refer to them in the future.&lt;br&gt;
The best practice here is to back up all QuickSight artifacts locally or in an S3 bucket periodically so that you always have a backup to roll back to.&lt;/p&gt;
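&lt;p&gt;As a sketch of such a backup step, assuming QuickSight’s describe_dashboard_definition API (e.g. via boto3): the function below serializes a dashboard definition to JSON, and the client is passed in so the flow can be exercised with a stub. The account and dashboard IDs are hypothetical placeholders.&lt;/p&gt;

```python
import json

# Sketch: export a dashboard definition as JSON for backup/version control,
# using QuickSight's describe_dashboard_definition API (e.g. via boto3).
def export_dashboard_json(client, account_id, dashboard_id):
    resp = client.describe_dashboard_definition(
        AwsAccountId=account_id, DashboardId=dashboard_id
    )
    # The resulting string can be written to a file, S3, or committed to Git.
    return json.dumps(resp.get("Definition", {}), indent=2, default=str)

# Stub standing in for boto3.client("quicksight") so the sketch runs locally.
class StubClient:
    def describe_dashboard_definition(self, AwsAccountId, DashboardId):
        return {"Definition": {"Sheets": [], "DataSetIdentifierDeclarations": []}}

backup = export_dashboard_json(StubClient(), "123456789012", "sales-dashboard")
```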

&lt;p&gt;&lt;strong&gt;Naming conventions for manual version control&lt;/strong&gt;&lt;br&gt;
Use consistent naming conventions for analyses, dashboards, and datasets so that they are easier to track. Append the version, date, and environment to the end of the file name.&lt;/p&gt;

&lt;p&gt;Example – filename_VERSION_DATE_ENVIRONMENT&lt;/p&gt;
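&lt;p&gt;As a small sketch, a helper like this keeps names consistent with that convention (the base name, version, date, and environment values below are hypothetical):&lt;/p&gt;

```python
# Sketch: build a versioned artifact name following the
# name_VERSION_DATE_ENVIRONMENT convention.
def artifact_name(base, version, date, environment):
    return "_".join([base, f"v{version}", date, environment.upper()])

name = artifact_name("sales_dashboard", 2, "2025-01-15", "prod")
# name is "sales_dashboard_v2_2025-01-15_PROD"
```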

&lt;p&gt;Maintain a change log that tracks every update along with the owner of that change. If you have an older version of a QuickSight analysis/dashboard, you can add it to the _Archive folder. If in the future you want to roll back to an older version, you can do so using the archived copy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automating using CI/CD pipelines&lt;/strong&gt;&lt;br&gt;
If you have a large project with multiple QuickSight accounts and resources, you can use AWS services like Lambda, CloudFormation, CodePipeline, and S3 to automate the deployment of QuickSight resources and version them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SPICE dataset version control&lt;/strong&gt;&lt;br&gt;
You can take a point-in-time snapshot of your SPICE dataset before a manual or scheduled refresh if you need to back up your SPICE data for a particular time frame. You can then use this snapshot to show data in your dashboard for that time frame if you have such a use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User acceptance testing&lt;/strong&gt;&lt;br&gt;
If you have implemented a deployment pipeline for your resources, validate them before they are pushed to a different environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Different environments and access controls&lt;/strong&gt;&lt;br&gt;
If you have separate QuickSight accounts for dev, test, and prod environments, not every user needs access to all of them. Restrict user access to specific environments using role-based access; in the prod environment, users who are not developers of a dashboard can have read-only access to view it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation&lt;/strong&gt;&lt;br&gt;
Make sure the process your team follows is documented and communicated to all team members.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disaster recovery plan&lt;/strong&gt;&lt;br&gt;
Always back up your resources periodically so that, in case of a disaster (accidental deletion or otherwise), you are ready to restore them from your backup, whether that is S3, Git, or any other version control you use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Peer reviews&lt;/strong&gt;&lt;br&gt;
Implement a peer review process for critical changes so that multiple people have reviewed them before they are pushed to the production environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating visuals in Amazon QuickSight&lt;/strong&gt;&lt;br&gt;
Creating visuals is the core purpose of any dashboard: they are what deliver useful insights to your users or audience. Some best practices to follow while creating visuals in QuickSight are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Know the purpose of your user&lt;/strong&gt;&lt;br&gt;
Different users require data insights at different granularities, so understand your users and select appropriate visuals. An executive will seek only high-level insights, while an analyst or developer will look for more detail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pick the correct visual type&lt;/strong&gt;&lt;br&gt;
After understanding your data, pick the correct visual type for it. QuickSight offers different visuals, which can be found here - &lt;a href="https://docs.aws.amazon.com/quicksight/latest/user/working-with-visual-types.html" rel="noopener noreferrer"&gt;Quicksight visuals&lt;/a&gt;&lt;br&gt;
You can even use AutoGraph, where QuickSight recommends a graph for you based on your data. After plotting a graph, make sure your data is clearly visible and supports the insights you want to derive.&lt;br&gt;
You can choose your graphs based on your requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To compare different categories of data then you can use bar or column charts.&lt;/li&gt;
&lt;li&gt;To show correlation between your data points you can use scatter plots.&lt;/li&gt;
&lt;li&gt;To show geographic data you can use geospatial maps.&lt;/li&gt;
&lt;li&gt;To show data density you can use heat maps.&lt;/li&gt;
&lt;li&gt;To show trends over time you can use line charts.&lt;/li&gt;
&lt;li&gt;To show a fraction of the whole data you can use donut or pie charts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Title of graph&lt;/strong&gt;&lt;br&gt;
Use descriptive, clear titles and labels for each graph. Data labels should be clearly visible, so make sure the data colors you choose contrast with the text color and data labels. The title and labels should not overlap or look clunky; everything should be easily readable to the user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filters&lt;/strong&gt;&lt;br&gt;
If you apply filters to your data, make sure they are placed at the top and are clearly visible; they should not be overlooked because of poor color contrast. For more best practices on filters, refer to the section describing best practices for efficient filtering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use spacing wisely&lt;/strong&gt;&lt;br&gt;
After your graph is plotted, eliminate unnecessary labels, text, and titles. This saves space and keeps the graph from looking cluttered. Once you have all your graphs created in one tab, rearrange them in a logical way so it is easier for the reader to follow what you are trying to convey. Use themes to apply consistent data colors, text colors, and font styles, and make sure your borders and layout are styled appropriately. Highlight important data or text to draw users’ attention. Place the most important visuals at the top left, proceeding toward the right.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Improve performance&lt;/strong&gt;&lt;br&gt;
Keep calculations simple to get quick response times, limit the amount of data loaded for quick load times, and filter and pre-aggregate data before it reaches the dashboard to reduce the volume refreshed each time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use of colors&lt;/strong&gt;&lt;br&gt;
Apply data colors appropriately so the visual is readable and not cluttered by text or poor color contrast. If you have KPIs in your graph, color them green to show positive trends and red for negative trends. As mentioned in the themes section, use colors suitable for all users, including those who may be color blind. Do not use so many colors that users become confused and lose focus on the data insights.&lt;/p&gt;

&lt;p&gt;More best practices coming soon!&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>data</category>
      <category>visualization</category>
      <category>quicksight</category>
    </item>
    <item>
      <title>Unleashing Data Insights: Harnessing Amazon QuickSight Q's Generative BI for Transformative Analytics</title>
      <dc:creator>Neeraj Iyer</dc:creator>
      <pubDate>Sun, 29 Dec 2024 16:48:25 +0000</pubDate>
      <link>https://dev.to/neeraj_iyer_980804515a5da/unleashing-data-insights-harnessing-amazon-quicksight-qs-generative-bi-for-transformative-18lo</link>
      <guid>https://dev.to/neeraj_iyer_980804515a5da/unleashing-data-insights-harnessing-amazon-quicksight-qs-generative-bi-for-transformative-18lo</guid>
      <description>&lt;p&gt;My company Baker Tilly Digital has leveraged Amazon’s QuickSight solution for a variety of internal and external analytics use cases. With continuous advancements in technology, Amazon recently made their Amazon Q product, a Generative AI solution, generally available in Amazon QuickSight.&lt;/p&gt;

&lt;p&gt;Amazon Q in QuickSight brings together the generative AI strengths of large language models (LLMs) from Amazon Bedrock with the proven models from QuickSight to create Generative BI experiences, reducing time to insights and accelerating data analysis.&lt;br&gt;
Some of the capabilities include:&lt;/p&gt;

&lt;p&gt;1) Contextual answers with multi-visual Q&amp;amp;A&lt;br&gt;
2) Insights with Executive Summaries&lt;br&gt;
3) Ability to build visuals and calculations quickly using natural language&lt;br&gt;
4) Ability to build simple-to-share documents or presentations to articulate key insights.&lt;/p&gt;

&lt;p&gt;The project deliverables were:&lt;br&gt;
Research findings on how to create an analytics solution leveraging Amazon QuickSight and Amazon Q’s Generative AI capabilities, including the technologies used, benefits/value, potential solution designs and approaches, and use cases.&lt;br&gt;
A functioning proof-of-concept analytics solution leveraging Amazon QuickSight and Amazon Q.&lt;br&gt;
A final presentation that highlights key research findings, an overview/demo of the solution, and how my team can leverage these technologies and solutions across the organization.&lt;br&gt;
An understanding of Amazon QuickSight and Amazon Q technologies and solutions, their potential benefits/value, and use cases.&lt;br&gt;
An understanding of how Generative AI technologies and solutions can be leveraged internally at my company and provide value for our clients.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project approach&lt;/strong&gt;&lt;br&gt;
1. Conduct research and discovery on how to create an Amazon QuickSight solution leveraging Amazon Q, including the technologies used, potential solution designs and approaches, and potential use cases.&lt;br&gt;
2. Define the requirements and approach to create the Amazon QuickSight solution leveraging Amazon Q, and design a high-level solution architecture.&lt;br&gt;
3. Implement and test the QuickSight solution, including configuring/building the solution with Amazon Q, loading data into the solution, and testing/validating the solution.&lt;br&gt;
4. Demo the QuickSight and Amazon Q solution to Baker Tilly employees, articulate the value of the solution both internally and externally, and present research findings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Objectives&lt;/strong&gt;&lt;br&gt;
Conduct research, define and implement a functioning proof of concept analytics solution leveraging Amazon QuickSight, Amazon Q’s Generative AI capabilities and data from our proprietary models to better understand how these technologies and capabilities can be leveraged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expected outcomes&lt;/strong&gt;&lt;br&gt;
1. Understand how to create an analytics solution leveraging Amazon QuickSight and Amazon Q’s Generative AI capabilities, including the technologies used, benefits/value, potential solution designs and approaches, and use cases.&lt;br&gt;
2. Develop a functioning proof-of-concept analytics solution leveraging Amazon QuickSight and Amazon Q.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92wh20hxfnny0nx3qd28.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92wh20hxfnny0nx3qd28.JPG" alt="Image description" width="800" height="327"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" alt="Image description" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generative AI overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl55a35tjxtct1zecn9zr.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl55a35tjxtct1zecn9zr.JPG" alt="Image description" width="800" height="351"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" alt="Image description" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Q Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8320j675fyp4i9aedi0i.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8320j675fyp4i9aedi0i.JPG" alt="Image description" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" alt="Image description" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technologies used&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are the different AWS services that we used to build this solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2byumxx0ctvdudlgc4y9.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2byumxx0ctvdudlgc4y9.JPG" alt="Image description" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" alt="Image description" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High-level Solution architecture&lt;/strong&gt;&lt;br&gt;
Here is the architecture flow of our entire solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhncfzuptrlynzz9zygs7.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhncfzuptrlynzz9zygs7.JPG" alt="Image description" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" alt="Image description" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommendations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The data needs to be preprocessed before you start working with it, as preprocessing can improve your insights.&lt;br&gt;
Increased usage of Q will lead to the development of best-practice techniques for more efficient generation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" alt="Image description" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next Steps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are the next steps that we think we should focus on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh9kj8xvo933qwfw9ak0.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh9kj8xvo933qwfw9ak0.JPG" alt="Image description" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyp8ux8vr5m9y3vwsf0d.JPG" alt="Image description" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>amazonquicksight</category>
      <category>analytics</category>
      <category>businessintelligence</category>
    </item>
    <item>
      <title>Amazon Quicksight vs Microsoft PowerBI</title>
      <dc:creator>Neeraj Iyer</dc:creator>
      <pubDate>Mon, 15 Jan 2024 15:03:58 +0000</pubDate>
      <link>https://dev.to/neeraj_iyer_980804515a5da/amazon-quicksight-vs-microsoft-powerbi-32pj</link>
      <guid>https://dev.to/neeraj_iyer_980804515a5da/amazon-quicksight-vs-microsoft-powerbi-32pj</guid>
      <description>&lt;p&gt;&lt;strong&gt;Requirement of data visualization&lt;/strong&gt;&lt;br&gt;
•In the face of a growing volume of data, it is essential to focus on the pertinent information.&lt;br&gt;
•Prioritize the examination of relevant data.&lt;br&gt;
•Opt for a format that allows for quick and easy digestion of information.&lt;br&gt;
•Helps understand the data well and communicate the insight.&lt;br&gt;
•Identify errors and inaccuracies in data quickly.&lt;br&gt;
•Quick decision making and better decisions.&lt;br&gt;
•Promotes storytelling to the audience.&lt;br&gt;
•Optimize and instantly retrieve data.&lt;br&gt;
•Achieve business goals by exploring business insights.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gartner Magic Quadrant for analytics and BI platforms&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxb1sf29qmyrh1rle0jn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxb1sf29qmyrh1rle0jn.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Gartner quadrant it is clearly visible how QuickSight has moved from Niche Players in 2022 to Challengers in 2023. QuickSight is making a lot of impact in the BI world with its new features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Amazon Quicksight?&lt;/strong&gt;&lt;br&gt;
Amazon QuickSight is a unified BI service designed to replace legacy solutions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7cmryaixs9wrvigdfln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7cmryaixs9wrvigdfln.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Quicksight is a fast, easy-to-use cloud-based, serverless BI service designed for easy deployment to hundreds of thousands of users. As a fully-managed serverless application, there’s no need to buy, manage and scale servers and no software to deploy and upgrade. QS is deeply integrated with AWS data sources, allowing companies to quickly deploy secure and scalable BI with their data on AWS. QS also provides connectivity to an ever-growing list of non-AWS sources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Power BI?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20ygq1dqi1ya4r4vfczq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20ygq1dqi1ya4r4vfczq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Power BI is Microsoft’s self-service BI platform for both on premise and cloud-based data.&lt;br&gt;
•Fast &amp;amp; easy access to data&lt;br&gt;
•Data discovery and exploration&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Power BI features&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get data&lt;/strong&gt;&lt;br&gt;
•Connect to multiple data sources both on-premise and cloud&lt;br&gt;
•Transform and clean data for analysis&lt;br&gt;
•Live connections as well as imported&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analyze&lt;/strong&gt;&lt;br&gt;
•Create a data model&lt;br&gt;
•In-memory engine&lt;br&gt;
•Create measures using DAX&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visualize&lt;/strong&gt;&lt;br&gt;
•Create reports using countless visuals with a drag and drop canvas&lt;br&gt;
•Look at data across multiple interactive visualizations&lt;br&gt;
•Create bookmarks and custom navigation to tell your data story&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Publish&lt;/strong&gt;&lt;br&gt;
•Publish to Power BI cloud service&lt;br&gt;
•Auto data refresh&lt;br&gt;
•Package multiple reports in apps&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Collaborate&lt;/strong&gt;&lt;br&gt;
•Grant access on multiple devices&lt;br&gt;
•Import dashboards and visuals directly to services like PowerPoint and Teams&lt;br&gt;
•Connect to your data models via Excel and Power BI desktop&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does Amazon Quicksight compare to Microsoft PowerBI?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxisuti697mscofbn1j3h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxisuti697mscofbn1j3h.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2hqd3j44m56pu0903xn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2hqd3j44m56pu0903xn.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gthgq4ko3mmqr6gbfdm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gthgq4ko3mmqr6gbfdm.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlfoovm6m3d3bxsuxa2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlfoovm6m3d3bxsuxa2q.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3v03rfnqnqed9wc1x0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3v03rfnqnqed9wc1x0p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generative AI in Quicksight&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Generative AI in Quicksight using Amazon Bedrock&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Natural language visual creation&lt;/strong&gt;&lt;br&gt;
Use vague or precise language to generate visuals&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visual fine-tuning&lt;/strong&gt;&lt;br&gt;
Tailor visuals by describing the formatting changes in natural language&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick calculations&lt;/strong&gt;&lt;br&gt;
Accelerate analysis by creating calculations without looking up or learning specific syntax.&lt;/p&gt;

&lt;p&gt;Below are images showing how Generative BI capabilities can be used in QuickSight. You can ask QuickSight a question and it can answer with a visual. You can fine-tune the visual and add it to your analysis. You can even create calculated fields the same way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04wkresuxhol2nsf85w7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04wkresuxhol2nsf85w7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8opxlmyme0mh9y1mh78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8opxlmyme0mh9y1mh78.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsf6v20mm477zazoqpt3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsf6v20mm477zazoqpt3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>quicksight</category>
      <category>dataengineering</category>
      <category>analytics</category>
      <category>developer</category>
    </item>
    <item>
      <title>Amazon Quicksight- Automating asset deployment between environments using Python</title>
      <dc:creator>Neeraj Iyer</dc:creator>
      <pubDate>Thu, 28 Dec 2023 21:09:03 +0000</pubDate>
      <link>https://dev.to/neeraj_iyer_980804515a5da/amazon-quicksight-automating-asset-deployment-between-environments-using-python-1fbj</link>
      <guid>https://dev.to/neeraj_iyer_980804515a5da/amazon-quicksight-automating-asset-deployment-between-environments-using-python-1fbj</guid>
      <description>&lt;p&gt;Quicksight previously had offered variety of APIs to move assets between different accounts for example between dev, test and prod AWS accounts. To make things quicker and easier they have introduced new API's using a concept called Asset as Bundle. So all our Quicksight assets namely Datasets, Analysis &amp;amp; Dashboards and this also includes themes, Vpc configurations are packed together as a bundle and that makes it easy to move those assets between different accounts without having to move every asset separately which was the case previously.&lt;/p&gt;

&lt;p&gt;You can use any programming language to automate the &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/quicksight/"&gt;QuickSight APIs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The new APIs allow you to interact with all the assets in a lift-and-shift manner for deployment across QuickSight accounts. These APIs can also be used for backing up assets and restoring them when needed. BIOps teams will benefit from this capability, as it allows them to automate QuickSight asset backup and deployment.&lt;/p&gt;

&lt;p&gt;The two types of APIs are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Export APIs&lt;/li&gt;
&lt;li&gt;Import APIs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Export APIs&lt;/strong&gt; - These can be used to initiate, track, and describe export jobs. When you initiate an export job, it creates a bundle file as a zip file (.qs extension) that contains all of your QuickSight assets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;StartAssetBundleExportJob&lt;/strong&gt; - Asynchronous API to export an asset bundle file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DescribeAssetBundleExportJob&lt;/strong&gt; -  Synchronous API to get the status of your export job. When successful, this API call response will have a presigned URL to fetch the asset bundle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ListAssetBundleExportJobs&lt;/strong&gt; -  Synchronous API to list past export jobs. The list will contain both finished and running jobs from the past 15 days.&lt;/p&gt;
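&lt;p&gt;Since StartAssetBundleExportJob is asynchronous, one option is to poll DescribeAssetBundleExportJob until the job reaches a terminal state rather than sleeping for a fixed interval. A minimal sketch, with the describe call passed in as a function so the loop can be exercised without an AWS account; the status strings mirror the JobStatus field the API returns:&lt;/p&gt;

```python
import time

# Sketch: poll an asset-bundle export job until it finishes, instead of a
# fixed time.sleep(). Terminal states follow the JobStatus field returned
# by describe_asset_bundle_export_job (SUCCESSFUL or FAILED).
TERMINAL = {"SUCCESSFUL", "FAILED"}

def wait_for_export(describe_fn, delay=5.0, max_attempts=60):
    """describe_fn() should return a describe_asset_bundle_export_job response."""
    for _ in range(max_attempts):
        res = describe_fn()
        if res.get("JobStatus") in TERMINAL:
            return res
        time.sleep(delay)
    raise TimeoutError("export job did not finish in time")

# Stubbed usage (no AWS call): the job reports IN_PROGRESS once, then SUCCESSFUL.
responses = iter([
    {"JobStatus": "IN_PROGRESS"},
    {"JobStatus": "SUCCESSFUL", "DownloadUrl": "https://example.com/job-1.qs"},
])
result = wait_for_export(lambda: next(responses), delay=0)
print(result["JobStatus"])  # SUCCESSFUL
```

&lt;p&gt;Against a real account, describe_fn would wrap client.describe_asset_bundle_export_job with your AwsAccountId and AssetBundleExportJobId.&lt;/p&gt;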

&lt;p&gt;&lt;strong&gt;Import APIs&lt;/strong&gt; - These can be used to initiate, track, and describe import jobs. These APIs are used on the target QuickSight account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;StartAssetBundleImportJob&lt;/strong&gt; - Asynchronous API to import an asset bundle file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DescribeAssetBundleImportJob&lt;/strong&gt; - Synchronous API to get the status of your import job.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ListAssetBundleImportJobs&lt;/strong&gt; - Synchronous API to list past import jobs. The list will contain both finished and running jobs from the past 15 days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps for Export job:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use the StartAssetBundleExportJob API to export assets into a bundle file.&lt;/li&gt;
&lt;li&gt;Use the DescribeAssetBundleExportJob API to view the status and the presigned URL from which you will download the bundle.&lt;/li&gt;
&lt;li&gt;Place the bundle file in an S3 bucket.&lt;/li&gt;
&lt;li&gt;Use the ListAssetBundleExportJobs API to list which assets have been exported.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Steps for Import job:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use the StartAssetBundleImportJob API to get the assets from the S3 bucket, overriding the source details.&lt;/li&gt;
&lt;li&gt;Use the DescribeAssetBundleImportJob API to view the status of the import.&lt;/li&gt;
&lt;li&gt;Use the ListAssetBundleImportJobs API to list which assets have been imported.&lt;/li&gt;
&lt;/ol&gt;
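&lt;p&gt;The import job reads the bundle from S3, so the S3Uri passed to StartAssetBundleImportJob has to match wherever the exported bundle was uploaded. A small helper following the assetbundle-{account}-{region} bucket-naming convention used in this post (the account ID and key below are illustrative):&lt;/p&gt;

```python
# Sketch: build the S3Uri for start_asset_bundle_import_job, assuming the
# bucket-naming convention assetbundle-{account_id}-{region} from this post.
def bundle_s3_uri(account_id, region, key="Imported/job-1.zip"):
    return "s3://assetbundle-" + account_id + "-" + region + "/" + key

uri = bundle_s3_uri("123456789012", "us-east-1")
print(uri)  # s3://assetbundle-123456789012-us-east-1/Imported/job-1.zip
```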

&lt;p&gt;&lt;strong&gt;Python code sample to automate assets deployments:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import re
import json
import time
import requests

# Source account details - fill these in for your environment
SourceAccountID = ''
SourceDashboardName = ''
SourceDashboardId = ''

SourceAnalysisName = ''
SourceAnalysisId = ''
SourceRoleName = 'QuickSightFullAccess'
SourceRegion = ''
IncludeDependencies = True

# ARNs of the assets to export
Resource_Arns_Dashboard = []
Resource_Arns_Analysis = []

client = boto3.client('quicksight', SourceRegion)
client_s3 = boto3.client('s3', SourceRegion)

try:
    # start the export job for the analysis and its dependencies
    response = client.start_asset_bundle_export_job(
        AwsAccountId=SourceAccountID,
        AssetBundleExportJobId='job-1',
        ResourceArns=Resource_Arns_Analysis,
        ExportFormat='QUICKSIGHT_JSON',
        IncludeAllDependencies=IncludeDependencies
    )
    print(response)

    time.sleep(60)

    # check the export job status; the response contains a presigned DownloadUrl
    res = client.describe_asset_bundle_export_job(
        AwsAccountId=SourceAccountID,
        AssetBundleExportJobId='job-1'
    )
    print(res)

    # save the exported asset bundle file locally
    for key, value in res.items():
        if key == 'DownloadUrl':
            download_url = value
            print(download_url)

    r = requests.get(download_url, allow_redirects=True)
    open('job-1.zip', 'wb').write(r.content)

    bucket_name = "assetbundle-" + SourceAccountID + "-" + SourceRegion

    # upload the bundle file to an S3 bucket
    client_s3.upload_file(
        Filename='job-1.zip',
        Bucket="qsbackup",
        Key='Imported/job-1.zip'
    )

    # import the bundle back into the target account
    res_import = client.start_asset_bundle_import_job(
        AwsAccountId=SourceAccountID,
        AssetBundleImportJobId='Test_Import',
        AssetBundleImportSource={"S3Uri": "s3://assetbundle-" + SourceAccountID + "-" + SourceRegion + "/Imported/job-1.zip"},
        OverrideParameters={
            'Analyses': [
                {
                    'AnalysisId': &amp;lt;Analysis ID&amp;gt;,
                    'Name': 'NEWLY_IMPORTED_USING_API'
                },
            ]
        }
    )
    print(res_import)

    time.sleep(60)

    # check the import job status
    res_import_describe = client.describe_asset_bundle_import_job(
        AwsAccountId=SourceAccountID,
        AssetBundleImportJobId='Test_Import'
    )
    print(res_import_describe)

    # update permissions on the imported analysis
    res_update_permissions = client.update_analysis_permissions(
        AwsAccountId=SourceAccountID,
        AnalysisId=SourceAnalysisId,
        GrantPermissions=[
            {
                'Principal': &amp;lt;principal ARN&amp;gt;,
                'Actions': [
                    "quicksight:RestoreAnalysis",
                    "quicksight:UpdateAnalysisPermissions",
                    "quicksight:DeleteAnalysis",
                    "quicksight:QueryAnalysis",
                    "quicksight:DescribeAnalysisPermissions",
                    "quicksight:DescribeAnalysis",
                    "quicksight:UpdateAnalysis"
                ]
            }
        ]
    )
    print(res_update_permissions)

    #analysis_name = response['Analysis']['Name']
    dashboard = response
    print(dashboard)

except TypeError as e:
    print('Caught TypeError:', e)

print('End')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>automation</category>
      <category>amazonquicksight</category>
      <category>visualization</category>
      <category>deployment</category>
    </item>
    <item>
      <title>Glue Data Brew- Data Profiling &amp; Data Quality</title>
      <dc:creator>Neeraj Iyer</dc:creator>
      <pubDate>Wed, 27 Dec 2023 22:13:23 +0000</pubDate>
      <link>https://dev.to/neeraj_iyer_980804515a5da/glue-data-brew-data-profiling-data-quality-1dio</link>
      <guid>https://dev.to/neeraj_iyer_980804515a5da/glue-data-brew-data-profiling-data-quality-1dio</guid>
      <description>&lt;p&gt;With the evolution of technology, we can see data is growing exponentially, new sources of data, diverse data, and is accessed by many applications. But 80% of the time is spent in preparing data today. &lt;/p&gt;

&lt;p&gt;Data analysts, data engineers, ETL developers, and data scientists need the right tool for the right job, so that they do not have to spend hours in a tool they do not use every day.&lt;/p&gt;

&lt;p&gt;Glue DataBrew is a serverless, no-code data preparation tool for data analysts and data scientists. DataBrew can be accessed from the AWS Management Console, or using plugins for Jupyter notebooks and SageMaker Studio.&lt;/p&gt;

&lt;p&gt;Glue DataBrew is a tool that can be used for data transformation and data munging.&lt;/p&gt;

&lt;p&gt;With Glue DataBrew, data analysts and data scientists can understand data quality and detect anomalies, clean and normalize data using over 250 built-in transformations, understand the steps the data has been through using visual data lineage, and save transformations as recipes to apply again when new data comes in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Following are the Glue DataBrew functionalities:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdb9ixulqzugz0vmn3hsd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdb9ixulqzugz0vmn3hsd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does it work?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can have your data sources in a data lake, local files, the Glue Data Catalog, Redshift, or JDBC, with permissions applied to each of these data sources. Glue DataBrew can pull data from these sources, join data across them, apply transformations, build recipes based on your transformations, and schedule your Glue jobs. This helps you clean data, re-use recipes, profile data, and maintain data quality and lineage. The transformed data can be consumed by a variety of targets such as visualization/reporting tools, notebooks, SageMaker models, and ETL pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some use cases for Glue DataBrew are:&lt;/strong&gt;&lt;br&gt;
One-time data analysis for business reporting&lt;br&gt;
Setting up data quality rules with AWS Lambda&lt;br&gt;
Data preprocessing for machine learning&lt;br&gt;
Orchestrating data preparation in workflows&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Profiling&lt;/strong&gt;&lt;br&gt;
DataBrew creates a report called a data profile when you profile your data. It describes the existing shape of your data, including the context of the content, the structure of the data, and its relationships. A data profile can be created for any dataset by running a data profile job.&lt;/p&gt;

&lt;p&gt;Using DataBrew you can evaluate the quality of your data by profiling it to understand data patterns and detect anomalies. You can also examine and collect statistical summaries about the data in the data profile section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzakjcet7hwvflxu9hrw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzakjcet7hwvflxu9hrw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating data profile&lt;/strong&gt;&lt;br&gt;
Once you have loaded your data, navigate to Datasets, select the dataset you loaded, and click Run data profile. If this is your first time creating a profile job, a prompt will guide you to create a new profile job with a name. Select the job output details specifying where the profile output should be written. Under data profile configurations you will find a variety of options; for example, you can select Enable PII statistics and choose all categories. Apply the default permissions, then create and run the job.&lt;/p&gt;
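&lt;p&gt;The same profile job can be created programmatically. This is a minimal sketch assuming boto3's &lt;code&gt;databrew&lt;/code&gt; client, with placeholder names and a PII configuration mirroring the console's Enable PII statistics toggle; the exact entity-type values should be checked against the DataBrew documentation:&lt;/p&gt;

```python
# Sketch of a DataBrew profile job request with PII statistics enabled.
# Names, bucket, and role ARN are illustrative placeholders.

def build_profile_job_request(job_name, dataset_name, output_bucket):
    """Assemble the request body for databrew.create_profile_job()."""
    return {
        "Name": job_name,
        "DatasetName": dataset_name,
        "OutputLocation": {"Bucket": output_bucket, "Key": "profiles/"},
        "RoleArn": "arn:aws:iam::123456789012:role/DataBrewRole",  # placeholder
        # Mirrors "Enable PII statistics" in the console; "USA_ALL" is an
        # example entity-type group and may need adjusting for your data.
        "Configuration": {
            "EntityDetectorConfiguration": {"EntityTypes": ["USA_ALL"]}
        },
    }

request = build_profile_job_request(
    "sales-profile-job", "sales-dataset", "my-output-bucket"
)
# With credentials configured:
#   databrew = boto3.client("databrew")
#   databrew.create_profile_job(**request)
#   databrew.start_job_run(Name="sales-profile-job")
```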

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F139or54a2rv9gewkex5h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F139or54a2rv9gewkex5h.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It gives you a summary of identified PII columns mapped to PII categories. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data quality Check&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can create data quality rules over your dataset. Provide a name for the data quality ruleset and add rules. You can either create rules on your own or use the recommendations provided by Glue DataBrew. When creating a new rule, you must provide a data quality check scope and the rule success criteria; if a rule has multiple data quality checks, combine them with a condition. Once the rules are defined, click Create ruleset. Then provide a name for the validation job and create and run it.&lt;/p&gt;
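&lt;p&gt;A ruleset like this can also be defined through the API. The sketch below assumes boto3's &lt;code&gt;databrew&lt;/code&gt; client and its &lt;code&gt;create_ruleset&lt;/code&gt; call; the rule names, the duplicate-rows aggregate, and the check-expression grammar are illustrative and should be verified against the DataBrew data quality reference:&lt;/p&gt;

```python
# Sketch of a DataBrew data quality ruleset request. The dataset ARN,
# column names, and check expressions are illustrative placeholders.

def build_ruleset_request(name, dataset_arn):
    """Assemble the request body for databrew.create_ruleset()."""
    return {
        "Name": name,
        "TargetArn": dataset_arn,  # ARN of the DataBrew dataset to validate
        "Rules": [
            {
                # Example aggregate check: the dataset has no duplicate rows.
                "Name": "no-duplicate-rows",
                "CheckExpression": "AGG(DUPLICATE_ROWS_COUNT) == :val1",
                "SubstitutionMap": {":val1": "0"},
            },
            {
                # Example row-level check: quantity and total_sales both positive,
                # combined with AND as in the console's "add check by condition".
                "Name": "positive-sales",
                "CheckExpression": ":col1 > :val1 AND :col2 > :val1",
                "SubstitutionMap": {
                    ":col1": "`quantity`",
                    ":col2": "`total_sales`",
                    ":val1": "0",
                },
            },
        ],
    }

ruleset = build_ruleset_request(
    "sales-quality-rules",
    "arn:aws:databrew:us-east-1:123456789012:dataset/sales-dataset",
)
# databrew = boto3.client("databrew"); databrew.create_ruleset(**ruleset)
```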

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmcddsiyxui0ba5sy5pyr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmcddsiyxui0ba5sy5pyr.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldv4jgcrrrtcrkoe0ged.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldv4jgcrrrtcrkoe0ged.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the job run finishes, you can open the data profile section to view the summary, check the column statistics tab, and see your data quality rules in action under the data quality rules tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1k7kzp5s4frgszk9iyqc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1k7kzp5s4frgszk9iyqc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl168w572b3on339414h2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl168w572b3on339414h2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see below that the data quality checks have failed: there are duplicate rows, and the quantity and total sales &amp;gt; 0 rule fails. You can filter out such rows using advanced transforms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhy8xv81xywdcmki46y9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhy8xv81xywdcmki46y9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next blog, we will see how to apply transformations to the same dataset and resolve the data quality checks that have failed.&lt;/p&gt;

</description>
      <category>glue</category>
      <category>databrew</category>
      <category>dataengineering</category>
      <category>analyst</category>
    </item>
    <item>
      <title>Exploring Partyrock, an Amazon Bedrock playground to build generative AI applications quickly</title>
      <dc:creator>Neeraj Iyer</dc:creator>
      <pubDate>Tue, 26 Dec 2023 22:20:38 +0000</pubDate>
      <link>https://dev.to/neeraj_iyer_980804515a5da/exploring-partyrock-an-amazon-bedrock-playground-to-build-generative-ai-applications-quickly-4j0c</link>
      <guid>https://dev.to/neeraj_iyer_980804515a5da/exploring-partyrock-an-amazon-bedrock-playground-to-build-generative-ai-applications-quickly-4j0c</guid>
      <description>&lt;p&gt;PartyRock, an Amazon Bedrock Playground is for educating users on how to build with generative AI and gain intuition on prompt engineering. It is easy to experiment hands-on with prompt engineering in an intuitive and fun way.&lt;br&gt;
It is making building with generative AI accessible to everyone.&lt;br&gt;
you can initially go to this link - &lt;a href="https://partyrock.aws/"&gt;https://partyrock.aws/&lt;/a&gt; and get logged in or signed up if its your first time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PYwRkbBi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n20ic2powq2nbydc5mr1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PYwRkbBi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n20ic2powq2nbydc5mr1.png" alt="Image description" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Access to PartyRock is not through the AWS Management Console but through a web-based UI, so you do not need an AWS account. You can take a quick tour of the website to see everything it can do for you. If you click &lt;strong&gt;Build your own app&lt;/strong&gt;, you will be able to start creating your first app using PartyRock.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aWCw70h9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6yqw9bnfswlq9qlfkfpw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aWCw70h9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6yqw9bnfswlq9qlfkfpw.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Describe what you would like to build here and click Generate app to build the app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ohFO3Rdx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5adj44xmfskfzukpor27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ohFO3Rdx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5adj44xmfskfzukpor27.png" alt="Image description" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I built a 5-day gym routine recommender that recommends a warm up, main exercises &amp;amp; cool down for each day. I was able to customize my application to my needs within a few seconds. How cool is that!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GxZo__Nu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/622zcxq3did8gb9nzhvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GxZo__Nu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/622zcxq3did8gb9nzhvq.png" alt="Image description" width="800" height="385"&gt;&lt;/a&gt;&lt;br&gt;
I can now also tweak my app using Edit, or remix it using other apps. PartyRock will edit the app and rebuild it for us.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aT-2ZMzN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ynrpefc8spg2qci626e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aT-2ZMzN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ynrpefc8spg2qci626e.png" alt="Image description" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rVNewcBt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c6vtpa7dffms7ii5wf29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rVNewcBt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c6vtpa7dffms7ii5wf29.png" alt="Image description" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once I entered my goal and experience level, PartyRock recommended a 5-day split routine, which is super useful. I can now tweak the app based on my needs, and once the app is ready to be published, I can share it with the link provided. &lt;br&gt;
PartyRock is a very easy-to-use Amazon Bedrock playground. The best part is that it does not require any coding experience. PartyRock uses Amazon Bedrock's generative AI foundation models (FMs), and its name is derived from Amazon Bedrock.&lt;br&gt;
PartyRock is available free for a limited time, with no credit card required. Everyone can build AI apps with PartyRock.&lt;br&gt;
Here is a link to the generated app - &lt;a href="https://partyrock.aws/u/neerajiyer/s8mIMnGNq/5-Day-Gym-Routine-Recommender"&gt;https://partyrock.aws/u/neerajiyer/s8mIMnGNq/5-Day-Gym-Routine-Recommender&lt;/a&gt;&lt;/p&gt;

</description>
      <category>partyrock</category>
      <category>generativeai</category>
      <category>amazonbedrock</category>
      <category>promptengineering</category>
    </item>
    <item>
      <title>AWS Certified Data Analytics - Specialty (DAS-C01) My Way of Exam preparation</title>
      <dc:creator>Neeraj Iyer</dc:creator>
      <pubDate>Sun, 12 Mar 2023 04:41:14 +0000</pubDate>
      <link>https://dev.to/neeraj_iyer_980804515a5da/aws-certified-data-analytics-specialty-das-c01-exam-preparation-2beh</link>
      <guid>https://dev.to/neeraj_iyer_980804515a5da/aws-certified-data-analytics-specialty-das-c01-exam-preparation-2beh</guid>
      <description>&lt;p&gt;The AWS Certified Data Analytics - Specialty (DAS-C01) exam is definitely a difficult and challenging certification exam!&lt;br&gt;
It has 65 questions that you must finish within 3 hours. Be aware that its scenario questions are lengthy: if you have already taken the Solutions Architect Associate or Cloud Practitioner exam, you will notice a marked difference in question length, and some Data Analytics Specialty questions run to a full paragraph. Make sure you have good practice reading long questions and answering them quickly.&lt;/p&gt;

&lt;p&gt;According to AWS’s exam guide, the target candidate should have a minimum of 5 years of experience with common data analytics technologies, plus at least 2 years of hands-on experience and expertise working with AWS services to design, build, secure, and maintain analytics solutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Below is the breakdown of the course by sections:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AfdhU_XZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nn75y01toojr37egnexy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AfdhU_XZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nn75y01toojr37egnexy.png" alt="Image description" width="624" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the table above, the domains to focus on most are Processing and Analysis and Visualization. The Processing domain covers EMR, which focuses on big data concepts, but I was not asked many EMR questions; I did get a lot of questions on Glue and its integration with other services. In my view, equal importance should be given to all domains, as the questions mix domains and expect you to know how different services integrate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My way of exam preparation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I used the Udemy course &lt;a href="https://www.udemy.com/course/aws-data-analytics/learn/quiz/4609131/test#overview"&gt;AWS Certified Data Analytics Specialty 2023- Hands on&lt;/a&gt; for my preparation and took almost 6-8 weeks to finish it, making sure I completed one domain before moving to the next. The course is designed to be easy to learn from, as the domains and many of the topics are related to each other. The practice questions at the end of each domain were helpful for revising the topics, and the labs/exercises provided useful hands-on practice. I made notes on each topic and revised them regularly, as there is a lot to remember for this exam. &lt;/p&gt;

&lt;p&gt;My next step after this course was to take the &lt;a href="https://www.whizlabs.com/aws-certified-data-analytics-specialty/"&gt;Whizlabs practice tests&lt;/a&gt;. These questions were overly difficult and detailed, and in my experience not close to the actual exam; I did not feel the real exam questions were as hard.&lt;/p&gt;

&lt;p&gt;I then took the &lt;a href="https://portal.tutorialsdojo.com/courses/aws-certified-data-analytics-specialty-practice-exams/"&gt;Tutorials Dojo practice tests&lt;/a&gt;. These were a game changer for me: I started gaining confidence to take the exam only after working through them, and they were very close to the actual exam. The explanations after each question and the links to related topics were outstanding; I followed the references below each explanation and learned a lot more from them. The only shortcoming was that they had just a couple of questions on Elasticsearch and MSK. &lt;/p&gt;

&lt;p&gt;I identified the topics I should review and the services I lacked knowledge of, then read the FAQs for each of those services and made notes from them. Most of the FAQ content was a repeat for me but served as good revision. I prepared a solid set of last-minute topics to revise, and that helped me a lot in the day or two before the exam. &lt;/p&gt;

&lt;p&gt;One important thing I would mention: please get a good night's sleep. It is super important to stay fresh for all 3 hours of the exam, so sleep early and have some energy bars before taking it. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Topics to focus on:&lt;/strong&gt;&lt;br&gt;
AWS Glue – Glue and its integrations with other services.&lt;br&gt;
Redshift – Make sure you scan the FAQs before taking the exam, as this service has a lot of questions.&lt;br&gt;
Amazon EMR – Not many questions were asked from this topic.&lt;br&gt;
Amazon Kinesis – A super important topic, as there are a lot of confusing questions from it.&lt;br&gt;
Amazon Athena – A lot of questions from this topic, along with its integration with Redshift and Glue.&lt;br&gt;
AWS Lake Formation – A couple of questions from this topic, specifically related to permissions and cross-account access.&lt;br&gt;
Amazon Quicksight – A couple of questions from this topic, but they were easy enough to answer if you have studied the material well. &lt;/p&gt;

&lt;p&gt;Here are some words and phrases that you will often see within the questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;minimize administrative tasks&lt;/li&gt;
&lt;li&gt;minimal coding effort&lt;/li&gt;
&lt;li&gt;most cost-effective&lt;/li&gt;
&lt;li&gt;most efficient way&lt;/li&gt;
&lt;li&gt;minimal development effort&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;List of whitepapers good to refer before taking the exam:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="http://d0.awsstatic.com/whitepapers/Big_Data_Analytics_Options_on_AWS.pdf"&gt;Big data Analytics on AWS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/whitepapers/?audit=2019q1&amp;amp;whitepapers-main.sort-by=item.additionalFields.sortDate&amp;amp;whitepapers-main.sort-order=desc&amp;amp;awsf.whitepapers-content-type=*all&amp;amp;awsf.whitepapers-global-methodology=*all&amp;amp;awsf.whitepapers-tech-category=*all&amp;amp;awsf.whitepapers-industries=*all&amp;amp;awsf.whitepapers-business-category=*all"&gt;AWS Whitepapers and guides&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://d1.awsstatic.com/whitepapers/Migration/migrating-applications-to-aws.pdf"&gt;https://d1.awsstatic.com/whitepapers/Migration/migrating-applications-to-aws.pdf&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://d1.awsstatic.com/whitepapers/RDS/AWS_Database_Migration_Service_Best_Practices.pdf"&gt;Database Migration Service Best Practice&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-data-analytics-specialty/AWS-Certified-Data-Analytics-Specialty-Exam-Guide_v1.0_08-23-2019_FINAL.pdf"&gt;Exam Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-data-analytics-specialty/AWS-Certified-Data-Analytics-Specialty_Sample-Questions_v.1.1_FINAL.pdf"&gt;AWS Certified Data Analytics sample questions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/big-data/optimize-memory-management-in-aws-glue/"&gt;Optimize memory in AWS Glue&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/big-data/top-10-performance-tuning-techniques-for-amazon-redshift/"&gt;Top 10 performance tuning techniques for Amazon Redshift&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;FAQ’s that I would recommend reading-&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/lake-formation/faqs/"&gt;AWS Lake Formation&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/glue/faqs/"&gt;AWS Glue&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/redshift/faqs/"&gt;AWS Redshift&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/emr/faqs/"&gt;Amazon EMR&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/athena/faqs/"&gt;Amazon Athena&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/quicksight/resources/faqs/"&gt;Amazon Quicksight&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.amazonaws.cn/en/dms/faqs/"&gt;AWS DMS&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/msk/faqs/"&gt;Amazon MSK&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/opensearch-service/faqs/"&gt;Amazon Opensearch&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/kinesis/data-streams/faqs/"&gt;Kinesis Data Streams&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/kinesis/data-firehose/faqs/"&gt;Kinesis Data Firehose&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/kinesis/data-analytics/faqs/"&gt;Kinesis Data Analytics&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/s3/faqs/"&gt;Amazon S3&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/dynamodb/faqs/?trk=94bf4df1-96e1-4046-a020-b07a2be0d712&amp;amp;sc_channel=ps&amp;amp;s_kwcid=AL!4422!3!610000101510!p!!g!!dynamodb&amp;amp;ef_id=Cj0KCQiA6rCgBhDVARIsAK1kGPIEIECNG7KO3rp5GZyeJqqMZD7-NrxCaQ4iHS96iuzM4j2V729oQcYaAnsPEALw_wcB:G:s&amp;amp;s_kwcid=AL!4422!3!610000101510!p!!g!!dynamodb"&gt;Amazon Dynamodb&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Good Luck on your exam preparation!!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>data</category>
      <category>analytics</category>
      <category>exam</category>
    </item>
  </channel>
</rss>
