<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: vivek atwal</title>
    <description>The latest articles on DEV Community by vivek atwal (@vivekatwal).</description>
    <link>https://dev.to/vivekatwal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F659346%2F699c3093-c6b8-4a15-aea9-121c336a3501.jpeg</url>
      <title>DEV Community: vivek atwal</title>
      <link>https://dev.to/vivekatwal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vivekatwal"/>
    <language>en</language>
    <item>
      <title>Path to Cypher Query tuning</title>
      <dc:creator>vivek atwal</dc:creator>
      <pubDate>Thu, 27 Oct 2022 11:08:04 +0000</pubDate>
      <link>https://dev.to/vivekatwal/path-to-cypher-query-tuning-5hho</link>
      <guid>https://dev.to/vivekatwal/path-to-cypher-query-tuning-5hho</guid>
      <description>&lt;p&gt;These are the things to learn in order to be able to tune Cypher queries:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Preparing for Query Tuning&lt;/li&gt;
&lt;li&gt;How queries work in Neo4j &lt;a href="https://neo4j.com/graphacademy/training-cqt-40/01-cqt-40-how-queries-work-in-neo4j/"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Controlling Row Cardinality&lt;/li&gt;
&lt;li&gt;Neo4j Behind the Scenes&lt;/li&gt;
&lt;li&gt;Optimizing Property Access&lt;/li&gt;
&lt;li&gt;Node Degree Shortcuts&lt;/li&gt;
&lt;li&gt;Monitoring Running Queries&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After working through them, you should be able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define the terms row and DB hit in the context of Cypher querying &lt;a href="https://neo4j.com/developer/kb/understanding-cypher-cardinality/"&gt;cypher-cardinality&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Use EXPLAIN and PROFILE to identify weaknesses in a query plan&lt;/li&gt;
&lt;li&gt;Use Cypher tools to minimize the number of rows processed in a query&lt;/li&gt;
&lt;li&gt;Use best practices for minimizing property access&lt;/li&gt;
&lt;li&gt;Use monitoring tools to identify the underlying causes of a long-running query &lt;a href="https://neo4j.com/labs/halin/"&gt;1&lt;/a&gt;  &lt;a href="https://github.com/moxious/halin"&gt;2&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
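&lt;p&gt;As a hedged sketch (the label, property, and relationship names here are hypothetical), this is roughly how EXPLAIN and PROFILE are used to inspect a plan, and how aggregating early can cut row cardinality:&lt;/p&gt;

```cypher
// Show the query plan without running the query
EXPLAIN MATCH (p:Person {name: 'Alice'})-[:KNOWS]-(f) RETURN f.name

// Run the query and report rows and DB hits per operator
PROFILE MATCH (p:Person {name: 'Alice'})-[:KNOWS]-(f) RETURN f.name

// Reduce cardinality early: aggregate before doing further work
PROFILE MATCH (p:Person {name: 'Alice'})-[:KNOWS]-(f)
WITH p, count(f) AS friends
RETURN p.name, friends
```

Comparing the rows and DB hits columns of the two PROFILE outputs is the basic workflow the topics above build on.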

&lt;p&gt;Resources to learn the above topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=BN5T8IimB78"&gt;Lesser Known Features in Cypher
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=QnozzFP_fPo"&gt;Tuning Cypher
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://community.neo4j.com/t5/neo4j-graph-platform/best-practices-for-queries-that-can-take-hours-to-complete/m-p/27305"&gt;Best practices for queries that can take hours to complete
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://neo4j.com/blog/neo4j-2-2-query-tuning/"&gt;5 Secrets to More Effective Neo4j 2.2 Query Tuning
&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>neo4j</category>
      <category>cypher</category>
    </item>
    <item>
      <title>Logging in CloudWatch</title>
      <dc:creator>vivek atwal</dc:creator>
      <pubDate>Mon, 04 Jul 2022 10:13:29 +0000</pubDate>
      <link>https://dev.to/vivekatwal/logging-in-cloudwatch-2961</link>
      <guid>https://dev.to/vivekatwal/logging-in-cloudwatch-2961</guid>
      <description>&lt;p&gt;Setting up the CloudWatch client on the server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo aws configure
sudo yum install amazon-cloudwatch-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the configuration using the commands below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat ~/.aws/credentials
cat ~/.aws/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the CloudWatch agent status using the commands below, or see &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/ReportCWLAgentStatus.html"&gt;here&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a stop
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a start

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Setting up the Python logging environment&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
#pip install python-json-logger
#pip install boto3
#pip install watchtower

import boto3
from pythonjsonlogger import jsonlogger
import watchtower

credentials = boto3.Session().get_credentials()
access_key = credentials.access_key
secret_key = credentials.secret_key
region = ""

cloudwatch_client = boto3.client(
    "logs",
    region_name=region,
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
)



def get_logger(log_group, logger_name):    
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)

    formatter = jsonlogger.JsonFormatter(fmt='%(asctime)s :: %(lineno)s :: %(levelname)-8s :: %(name)s ::  %(message)s')

    handler = watchtower.CloudWatchLogHandler(log_group_name=log_group, log_stream_name=logger_name, boto3_client=cloudwatch_client)

    handler.setFormatter(formatter)
    logger.addHandler(handler)
    return logger


log_group = "app_name"
login_logger = get_logger(log_group, "login")


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
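&lt;p&gt;To see what this logging setup produces without touching AWS, here is a minimal local sketch: the same handler wiring as &lt;code&gt;get_logger&lt;/code&gt; above, but with a stdlib &lt;code&gt;Formatter&lt;/code&gt; and an in-memory stream standing in for the watchtower CloudWatch handler (the logger name is hypothetical):&lt;/p&gt;

```python
import io
import logging

# Local sketch: same handler wiring as get_logger() above, but with a
# stdlib Formatter and an in-memory stream instead of the watchtower
# CloudWatch handler, so it runs without AWS credentials.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(
    fmt='%(asctime)s :: %(lineno)s :: %(levelname)-8s :: %(name)s ::  %(message)s'
))

logger = logging.getLogger("login_demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.info("user logged in")
print(stream.getvalue())
```

With the real watchtower handler, the same record would land in the configured log group/stream instead of the local stream.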



&lt;p&gt;Reference:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-commandline-fleet.html"&gt;https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-commandline-fleet.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.exasol.com/db/latest/administration/aws/monitoring/aws_cloudwatch_agent.htm"&gt;https://docs.exasol.com/db/latest/administration/aws/monitoring/aws_cloudwatch_agent.htm&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html"&gt;Cloudwatch setting with IAM role&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>python</category>
      <category>aws</category>
      <category>cloudwatch</category>
    </item>
    <item>
      <title>Enabling virtualenv on jupyter notebook</title>
      <dc:creator>vivek atwal</dc:creator>
      <pubDate>Thu, 30 Jun 2022 09:26:32 +0000</pubDate>
      <link>https://dev.to/vivekatwal/enabling-virtualenv-on-jupyter-notebook-3h09</link>
      <guid>https://dev.to/vivekatwal/enabling-virtualenv-on-jupyter-notebook-3h09</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Jupyter notebooks make life easier for&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Beginners&lt;/strong&gt; to learn and understand step by step&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Experts&lt;/strong&gt; to explore data and visualizations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mentors&lt;/strong&gt; to create courses&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speakers&lt;/strong&gt; to give demonstrations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While exploring, the same library often has several versions that you need to test separately; each version can be installed in its own virtualenv.&lt;/p&gt;

&lt;p&gt;To make these environments quickly accessible, we add each one to Jupyter Notebook as a kernel.&lt;/p&gt;

&lt;p&gt;Below are the steps:&lt;/p&gt;

&lt;h2&gt;
  
  
  Jupyter
&lt;/h2&gt;

&lt;p&gt;Installation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install jupyter-core
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
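&lt;p&gt;If you don't have a virtualenv yet, one way to create and activate one (the path and name below are just placeholders):&lt;/p&gt;

```shell
# Create a virtualenv for the library version under test
# ("/tmp/envname_pkg_v1" is a placeholder path/name)
python3 -m venv /tmp/envname_pkg_v1
. /tmp/envname_pkg_v1/bin/activate
python -c 'import sys; print(sys.prefix)'
```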



&lt;p&gt;Activate the env and then install &lt;code&gt;ipykernel&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install ipykernel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In order to tell the environments apart you have to name them; include the package version in the name&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ipython kernel install --user --name=envname_pkg_v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To view all the registered environments&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jupyter kernelspec list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To remove an environment from Jupyter&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jupyter kernelspec uninstall envname
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>python</category>
      <category>jupyter</category>
    </item>
    <item>
      <title>Aws Elasticsearch Backup</title>
      <dc:creator>vivek atwal</dc:creator>
      <pubDate>Tue, 28 Jun 2022 20:37:58 +0000</pubDate>
      <link>https://dev.to/vivekatwal/aws-elasticsearch-backup-4p91</link>
      <guid>https://dev.to/vivekatwal/aws-elasticsearch-backup-4p91</guid>
      <description>&lt;h2&gt;
  
  
  Understanding basics
&lt;/h2&gt;

&lt;p&gt;Elasticsearch backups work differently from databases like MySQL and MongoDB, where you can specify the backup path at runtime.&lt;/p&gt;

&lt;p&gt;Elasticsearch expects the user to specify the backup directory (repository) in advance; specifying the backup directory in advance is called &lt;code&gt;registering&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So the first step is to &lt;strong&gt;register a repository&lt;/strong&gt;, which is done using the &lt;code&gt;_snapshot&lt;/code&gt; API.&lt;/p&gt;

&lt;p&gt;Syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_snapshot/&amp;lt;repository_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Backup from one server and restore on the same server&lt;/li&gt;
&lt;li&gt;Backup from one server and restore to another server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will have to register the repository on every server where you want to restore.&lt;/p&gt;
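&lt;p&gt;For AWS Elasticsearch/OpenSearch the repository is typically an S3 bucket. As a hedged sketch (the repository, bucket, region, and role names are placeholders, and on AWS this request must be signed with credentials that are allowed to pass the IAM role), registration looks roughly like:&lt;/p&gt;

```
PUT _snapshot/my-backup-repo
{
  "type": "s3",
  "settings": {
    "bucket": "my-backup-bucket",
    "region": "us-east-1",
    "role_arn": "arn:aws:iam::123456789012:role/SnapshotRole"
  }
}
```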

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-snapshots.html"&gt;Official Documentation for snapshots&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://medium.com/docsapp-product-and-technology/aws-elasticsearch-manual-snapshot-and-restore-on-aws-s3-7e9783cdaecb"&gt;Some Visual Explanation on process&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>elasticsearch</category>
    </item>
    <item>
      <title>_stats in elasticsearch</title>
      <dc:creator>vivek atwal</dc:creator>
      <pubDate>Mon, 03 Jan 2022 20:02:42 +0000</pubDate>
      <link>https://dev.to/vivekatwal/stats-in-elasticsearch-2bci</link>
      <guid>https://dev.to/vivekatwal/stats-in-elasticsearch-2bci</guid>
      <description>&lt;p&gt;Interpreting the right data gives you a lot of insight, and knowing how to fetch that data plays an important role. Here we will see how to use the _stats API to retrieve index statistics.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET my-index/_stats/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This API presents you with a lot of data; interpreting it and extracting meaning from it comes with the right learning.&lt;/p&gt;

&lt;p&gt;We are mainly going to focus on getting specific data from _stats.&lt;/p&gt;

&lt;p&gt;Objects in the _stats output can be divided into 3 groups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Indexing ==&amp;gt; indexing, merge, refresh, flush, fielddata, segments, translog&lt;/li&gt;
&lt;li&gt;Searching ==&amp;gt; get, search, query_cache, request_cache&lt;/li&gt;
&lt;li&gt;Others ==&amp;gt; docs, store, warmer, completion, recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of blindly querying &lt;code&gt;&amp;lt;my-index&amp;gt;/_stats&lt;/code&gt;, it is very important to request specific, limited data so you can focus on interpretation.&lt;/p&gt;

&lt;p&gt;To know about indexing, try &lt;code&gt;GET my-index/_stats/indexing&lt;/code&gt;&lt;/p&gt;
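&lt;p&gt;Once you have the JSON response, pulling out just the counters you care about is straightforward. A sketch (the sample response below is abbreviated and hypothetical, keeping only the fields we touch):&lt;/p&gt;

```python
# Abbreviated, hypothetical _stats response; a real one has many more fields.
stats = {
    "_all": {
        "primaries": {
            "indexing": {"index_total": 1200, "index_time_in_millis": 3400},
            "refresh": {"total": 57},
        }
    }
}

primaries = stats["_all"]["primaries"]
print(primaries["indexing"]["index_total"])   # 1200
print(primaries["refresh"]["total"])          # 57
```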

</description>
      <category>elasticsearch</category>
    </item>
    <item>
      <title>How _refresh work in ES in practice</title>
      <dc:creator>vivek atwal</dc:creator>
      <pubDate>Mon, 03 Jan 2022 17:43:42 +0000</pubDate>
      <link>https://dev.to/vivekatwal/how-refresh-work-in-es-in-practice-20da</link>
      <guid>https://dev.to/vivekatwal/how-refresh-work-in-es-in-practice-20da</guid>
      <description>&lt;h2&gt;
  
  
  Learning By doing
&lt;/h2&gt;

&lt;p&gt;Many processes in Elasticsearch are automated, which helps a new developer start the journey faster and more easily. But later, instead of digging deeper, we rely on the default settings; this keeps us from exploring and gaining a deeper understanding, limits our mental model of how things flow, and makes us poor at troubleshooting.&lt;/p&gt;

&lt;p&gt;Create an index with 2 shards and set &lt;code&gt;refresh_interval&lt;/code&gt; to 1 minute. We set up the environment with these settings&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PUT my-index
{
  "settings": {
    "index": {
      "number_of_shards": "2",
      "refresh_interval": "1m"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Execute the following to see that no documents exist&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET my-index/_search
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run _stats&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET my-index/_stats/refresh,flush
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before moving ahead, it is important to understand the structure of the stats output. Look at the &lt;code&gt;primaries.refresh.total&lt;/code&gt; and &lt;code&gt;primaries.flush.total&lt;/code&gt; values; these are the ones of use to us.&lt;/p&gt;

&lt;p&gt;Now let's insert a document&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST my-index/_doc/1
{
  "description": "inspecting index stats"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and let's query for a document with &lt;code&gt;GET my-index/_search&lt;/code&gt;; you will not find any document, because documents only become visible after a &lt;strong&gt;refresh&lt;/strong&gt; operation takes place. (The default refresh interval is 1s; we set it to 1 minute for the purposes of this experiment.)&lt;/p&gt;

&lt;p&gt;Let's refresh the index manually using the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST my-index/_refresh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now check the refresh count using &lt;code&gt;GET my-index/_stats/refresh,flush&lt;/code&gt; and the document using &lt;code&gt;GET my-index/_search&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let's repeat the same steps for document deletion.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;DELETE my-index/_doc/1&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Check the refresh count using &lt;code&gt;GET my-index/_stats/refresh,flush&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;Perform manual refresh for operation to reflect &lt;code&gt;POST my-index/_refresh&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;After the refresh, go to step 2.&lt;/li&gt;
&lt;/ol&gt;
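&lt;p&gt;The visibility rule above can be sketched as a toy model (plain Python, not Elasticsearch code): writes land in a buffer and only become searchable after a refresh moves them into the searchable set.&lt;/p&gt;

```python
class ToyIndex:
    """Toy model of refresh semantics; not real Elasticsearch code."""

    def __init__(self):
        self.buffer = {}      # indexed but not yet searchable
        self.searchable = {}  # visible to searches
        self.refresh_total = 0

    def index(self, doc_id, doc):
        self.buffer[doc_id] = doc

    def refresh(self):
        # Make buffered writes visible, like POST my-index/_refresh
        self.searchable.update(self.buffer)
        self.buffer.clear()
        self.refresh_total += 1

    def search(self):
        return list(self.searchable.values())

idx = ToyIndex()
idx.index(1, {"description": "inspecting index stats"})
print(idx.search())        # [] - not visible before refresh
idx.refresh()
print(idx.search())        # visible after refresh
print(idx.refresh_total)   # 1
```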

&lt;p&gt;What we saw in this article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Changes are reflected only after a _refresh takes place.&lt;/li&gt;
&lt;li&gt;How to check the total number of refresh operations using the _stats API.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The same steps can be performed for &lt;code&gt;_flush&lt;/code&gt; and &lt;code&gt;_forcemerge&lt;/code&gt;.&lt;/p&gt;

</description>
      <category>elasticsearch</category>
    </item>
    <item>
      <title>Resume Parser</title>
      <dc:creator>vivek atwal</dc:creator>
      <pubDate>Thu, 09 Sep 2021 20:01:11 +0000</pubDate>
      <link>https://dev.to/vivekatwal/resume-parser-1o9o</link>
      <guid>https://dev.to/vivekatwal/resume-parser-1o9o</guid>
      <description>&lt;p&gt;Parsing resumes is not an easy task. It comes with many challenges, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resumes from different domains (IT, commerce, etc.) have different parsing challenges&lt;/li&gt;
&lt;li&gt;Dealing with different file formats (docx, pdf, images)&lt;/li&gt;
&lt;li&gt;Dealing with different resume formats (the structure of resumes)&lt;/li&gt;
&lt;li&gt;Identifying sections within resumes (Education, Work Experience, personal details, etc.)&lt;/li&gt;
&lt;li&gt;Developing an ontology for categorization of domains, skills, designations, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Hybrid approach
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Rule Based Approach&lt;/li&gt;
&lt;li&gt;Statistical Approach&lt;/li&gt;
&lt;li&gt;Machine Learning Based Approach&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Rules Based Approach&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Write rules to parse a resume and detect the different sections of resumes using headings.&lt;/li&gt;
&lt;li&gt;Then write separate rules for each section

&lt;ul&gt;
&lt;li&gt;Work experience: parse company name, company location, duration (from-to-end date)&lt;/li&gt;
&lt;li&gt;Education: parse institution name, year, etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Statistical Approach&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use statistics to identify the common skills in a particular domain; a very basic way is to count the number of times a skill is mentioned.&lt;/li&gt;
&lt;li&gt;This method also helps identify when a new skill has arisen in a particular industry: as more and more candidates start mentioning it, the parser increments that skill's count in the database, and a threshold helps qualify the skill.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Machine Learning Based Approach&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once you have enough data from the above two approaches, train a model to classify the sections&lt;/li&gt;
&lt;li&gt;Train a model to detect named entities (locations, dates, etc.)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
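&lt;p&gt;The counting-and-threshold step of the statistical approach can be sketched in a few lines (the skill lists, the corpus, and the threshold are all hypothetical):&lt;/p&gt;

```python
from collections import Counter

# Hypothetical corpus of already-extracted skill mentions, one list per resume
resumes = [
    ["python", "sql", "airflow"],
    ["python", "spark"],
    ["python", "sql"],
]

THRESHOLD = 2  # hypothetical: a skill "qualifies" once mentioned this often

counts = Counter(skill for resume in resumes for skill in resume)
qualified = {skill for skill, n in counts.items() if n >= THRESHOLD}
print(sorted(qualified))  # ['python', 'sql']
```

A newly emerging skill starts below the threshold and crosses it as more candidates mention it.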

&lt;p&gt;You will always feel your parser lacks perfection, so the correct approach is to set thresholds around your parser and avoid getting overwhelmed by all the problems at the same time. You will also experience a chicken-and-egg problem at the start.&lt;/p&gt;

&lt;p&gt;Paid tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.daxtra.com/resume-database-software/resume-parsing-software/"&gt;Daxtra&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://turbohire.co/"&gt;Turbohire&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.burning-glass.com/products/lens-suite/"&gt;Burning Glass&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I will keep updating this post. Please let me know if you are looking for more depth on any specific part of resume parsing.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>nlp</category>
      <category>ai</category>
      <category>datascience</category>
    </item>
    <item>
      <title>DynamoDB Expressions</title>
      <dc:creator>vivek atwal</dc:creator>
      <pubDate>Thu, 09 Sep 2021 19:28:00 +0000</pubDate>
      <link>https://dev.to/vivekatwal/dynamodb-expressions-28jo</link>
      <guid>https://dev.to/vivekatwal/dynamodb-expressions-28jo</guid>
      <description>&lt;p&gt;Almost every developer has worked with relational (MySQL, Postgres, etc.) and NoSQL (MongoDB) databases, and over time has developed an intuition for writing and executing CRUD queries against them. DynamoDB breaks this intuition in many ways.&lt;/p&gt;

&lt;p&gt;Lately, I have been working on a project that has DynamoDB as its database. And like most developers, I searched for insert, update, and delete queries and started writing DynamoDB queries with the intuition of MySQL, Postgres, and MongoDB.&lt;/p&gt;

&lt;p&gt;I was able to write basic queries, but slowly I started to realise that I was struggling to understand how queries execute. I then found the need to go back to the basics of query writing in DynamoDB, and that meant &lt;strong&gt;Expressions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Before moving to expressions, let's understand the jargon of DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Item&lt;/strong&gt;: refers to a row, record, or document in other databases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Attribute&lt;/strong&gt;: refers to a column in MySQL and Postgres, and a field in MongoDB and Elasticsearch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.dynamodbguide.com/expression-basics/#condition-expressions"&gt;Expressions&lt;/a&gt; are an integral part of using DynamoDB&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Expressions are rules for simple and complex CRUD operations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You have to follow strict syntax to write expressions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This strict nature of expression enables DynamoDB for faster execution of queries even on millions of records/Items.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The size of the table doesn't matter; latency remains constant, i.e. &amp;lt;10 ms, for small (1 GB) or big (100 GB) tables&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Types Of Expression&lt;/th&gt;
&lt;th&gt;Read&lt;/th&gt;
&lt;th&gt;Insert&lt;/th&gt;
&lt;th&gt;Update&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Projection expressions&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Key condition expressions&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Condition expressions&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Update expressions&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Filter expressions&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This table shows which expressions are needed for the different database operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Condition expressions
&lt;/h2&gt;

&lt;p&gt;DynamoDB write operations are unconditional by default: each operation overwrites any existing item with the same primary key.&lt;/p&gt;

&lt;p&gt;The need often arises to &lt;em&gt;insert a new item (row/document) only if it does not already exist in the table&lt;/em&gt;. That means you need some condition to be satisfied before the write operation takes place; &lt;code&gt;ConditionExpression&lt;/code&gt; enables you to write such conditions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sequence of query Execution&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;DynamoDB retrieves the item for the given primary key. At most one item is retrieved, since specifying the primary key is mandatory.&lt;/li&gt;
&lt;li&gt;It then verifies whether the retrieved item satisfies the user condition (specified in ConditionExpression).&lt;/li&gt;
&lt;li&gt;If the condition is satisfied, the write operation is executed; otherwise the execution fails.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Condition expressions&lt;/strong&gt; always operate on an &lt;strong&gt;individual item&lt;/strong&gt; (&lt;del&gt;not multiple items&lt;/del&gt;), allowing the write only &lt;strong&gt;when certain conditions are true&lt;/strong&gt;.&lt;/p&gt;
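&lt;p&gt;As a hedged sketch, this is roughly the request shape for such a conditional insert (the table and attribute names are hypothetical; the dict mirrors the kwargs you would pass to boto3's &lt;code&gt;put_item&lt;/code&gt;):&lt;/p&gt;

```python
# Kwargs for a conditional PutItem: only succeed if no item with this
# primary key exists yet. "users" / "username" are hypothetical names.
put_item_kwargs = {
    "TableName": "users",
    "Item": {"username": {"S": "alice"}, "plan": {"S": "free"}},
    # Fails with ConditionalCheckFailedException if the key already exists
    "ConditionExpression": "attribute_not_exists(username)",
}
print(put_item_kwargs["ConditionExpression"])
# With a real client: boto3.client("dynamodb").put_item(**put_item_kwargs)
```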

&lt;p&gt;On first run, this Item is inserted successfully. If you try inserting the same Item again, you'll get an error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;An error occurred (ConditionalCheckFailedException) when calling the PutItem operation: The conditional request failed

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://www.alexdebrie.com/posts/dynamodb-condition-expressions/"&gt;Alex&lt;/a&gt; has explained ConditionExpression in detail, For example you can also refer &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ConditionExpressions.html"&gt;AWS Documentation&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.ConditionalUpdate"&gt;WorkingWithItems.ConditionalUpdate&lt;/a&gt;, &lt;a href="https://amazon-dynamodb-labs.workshop.aws/hands-on-labs/explore-cli/cli-writing-data.html"&gt;ConditionExpresion PutItem and UpdateItem example&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Update expression
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://www.dynamodbguide.com/expression-basics/updating-deleting-items#updating-items"&gt;Update expressions&lt;/a&gt;&lt;/strong&gt; are used to update a particular attribute in an existing Item.&lt;/li&gt;
&lt;li&gt;This is the same as &lt;code&gt;UPDATE ... SET column_name='value'&lt;/code&gt; in MySQL.&lt;/li&gt;
&lt;li&gt;DynamoDB Syntax
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;UpdateExpression: 'SET attributeToEdit = :newValue REMOVE attributeToDelete'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Projection expressions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Projection expressions are used to specify the subset of attributes you want to receive when reading items.&lt;/li&gt;
&lt;li&gt;They are used in GetItem calls.&lt;/li&gt;
&lt;li&gt;GetItem calls are analogous to select queries in MySQL.&lt;/li&gt;
&lt;li&gt;Just as we specify column names in select queries, such as &lt;code&gt;SELECT column1, column2, column3 FROM tablename&lt;/code&gt;, we use ProjectionExpressions to specify the attributes we want to retrieve in a query.&lt;/li&gt;
&lt;/ul&gt;
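&lt;p&gt;A hedged sketch of a GetItem request using a projection (the table and attribute names are hypothetical):&lt;/p&gt;

```python
# Kwargs for GetItem returning only two attributes, analogous to
# SELECT email, city FROM users. Names here are hypothetical.
get_item_kwargs = {
    "TableName": "users",
    "Key": {"username": {"S": "alice"}},
    "ProjectionExpression": "email, city",
}
print(get_item_kwargs["ProjectionExpression"])
# With a real client: boto3.client("dynamodb").get_item(**get_item_kwargs)
```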

&lt;h2&gt;
  
  
  Key condition expressions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.dynamodbguide.com/expression-basics/querying#using-key-expressions"&gt;&lt;strong&gt;Key condition expressions&lt;/strong&gt;&lt;/a&gt; are used when querying a table with a composite primary key to limit the items selected.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Filter expressions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.dynamodbguide.com/expression-basics/filtering"&gt;&lt;strong&gt;Filter expressions&lt;/strong&gt;&lt;/a&gt; allow you to filter the results of queries and scans to allow for more efficient responses.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;DynamoDB limitations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html#limits-items"&gt;400 KB maximum item size&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html#limits-api"&gt;1 MB limit per request response&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>dynamodb</category>
      <category>expression</category>
      <category>database</category>
      <category>aws</category>
    </item>
    <item>
      <title>SQLite Installation</title>
      <dc:creator>vivek atwal</dc:creator>
      <pubDate>Sat, 03 Jul 2021 10:01:03 +0000</pubDate>
      <link>https://dev.to/vivekatwal/sqlite-installation-4j6k</link>
      <guid>https://dev.to/vivekatwal/sqlite-installation-4j6k</guid>
      <description>&lt;p&gt;I have been working on a project that needed a serverless mini database. I decided to go with SQLite; it has its own advantages and limitations.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges
&lt;/h2&gt;

&lt;p&gt;Many times you will run into errors that are SQLite version/release specific. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The feature you are looking for is available only in a newer or specific release&lt;/li&gt;
&lt;li&gt;Your application uses an old release&lt;/li&gt;
&lt;li&gt;You may want to try some deprecated feature for an experiment&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Example
&lt;/h5&gt;

&lt;p&gt;I was struggling with an &lt;strong&gt;upsert&lt;/strong&gt; query&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;upsert_table&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
                             &lt;span class="n"&gt;folder&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                             &lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                             &lt;span class="k"&gt;count&lt;/span&gt;
                         &lt;span class="p"&gt;)&lt;/span&gt;
                         &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
                             &lt;span class="s1"&gt;'2021-01'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                             &lt;span class="s1"&gt;'abc.json'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                             &lt;span class="mi"&gt;1&lt;/span&gt;
                         &lt;span class="p"&gt;)&lt;/span&gt;
                         &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;CONFLICT&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
                             &lt;span class="n"&gt;filename&lt;/span&gt;
                         &lt;span class="p"&gt;)&lt;/span&gt;
                         &lt;span class="k"&gt;DO&lt;/span&gt; &lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="k"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;count&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;filename&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'abc.json'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On my local system, &lt;a href="https://www.sqlite.org/src/info/884b4b7e502b4e99"&gt;SQLite version 3.28.0&lt;/a&gt; worked fine, while on the server I ran into an error with &lt;a href="https://www.sqlite.org/src/info/0c55d179733b46d8"&gt;SQLite version 3.22.0&lt;/a&gt;: &lt;code&gt;Error: near "ON CONFLICT": syntax error&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;After some googling I found that &lt;code&gt;ON CONFLICT&lt;/code&gt; (upsert) support was added in version 3.24.0&lt;/p&gt;
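&lt;p&gt;The version dependence is easy to check from Python's &lt;code&gt;sqlite3&lt;/code&gt; module, which bundles its own SQLite build; the upsert above runs only when the bundled library is new enough:&lt;/p&gt;

```python
import sqlite3

# The bundled SQLite must be 3.24.0+ for ON CONFLICT ... DO UPDATE
print(sqlite3.sqlite_version)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE upsert_table (folder TEXT, filename TEXT PRIMARY KEY, count INTEGER)"
)
upsert = """
    INSERT INTO upsert_table (folder, filename, count) VALUES (?, ?, 1)
    ON CONFLICT (filename) DO UPDATE SET count = count + 1
"""
conn.execute(upsert, ("2021-01", "abc.json"))
conn.execute(upsert, ("2021-01", "abc.json"))  # second run takes the UPDATE branch
row = conn.execute("SELECT count FROM upsert_table WHERE filename = 'abc.json'").fetchone()
print(row[0])  # 2
```

On SQLite older than 3.24.0 the second statement raises the same &lt;code&gt;near "ON CONFLICT": syntax error&lt;/code&gt; shown above.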

&lt;p&gt;The takeaway is that you may come across many release-specific challenges, so it becomes important to learn to download, compile, and build binaries from source.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Download
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Once you have identified the need for a specific release, you can download it from the &lt;a href="https://www.sqlite.org/src/timeline?t=release"&gt;sqlite releases&lt;/a&gt; page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll to your version and click the hash id listed as the check-in&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L6jooLcR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9g3ro3q9itfj76k8ob30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L6jooLcR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9g3ro3q9itfj76k8ob30.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You will be redirected to a page that contains the details of that specific release&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--chyybCNm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tih218car42jvjfo66w1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--chyybCNm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tih218car42jvjfo66w1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Instead of downloading the tar.gz by clicking on it, copy its link and fetch it with wget; the same command can then be used on the server as well.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  wget https://www.sqlite.org/src/tarball/884b4b7e/SQLite-884b4b7e.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;Use the commands below to build the binary&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt;                                  &lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="c"&gt;#  Takes you to Home directory&lt;/span&gt;
&lt;span class="nb"&gt;tar &lt;/span&gt;xzf SQLite-884b4b7e.tar.gz      &lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="c"&gt;#  Unpack the source tree into "sqlite"&lt;/span&gt;
&lt;span class="nb"&gt;mkdir &lt;/span&gt;bld                           &lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="c"&gt;#  Build will occur in a sibling directory&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;bld                              &lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="c"&gt;#  Change to the build directory&lt;/span&gt;
../SQLite-884b4b7e/configure        &lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="c"&gt;#  Run the configure script&lt;/span&gt;
make                                &lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="c"&gt;#  Run the makefile.&lt;/span&gt;
make sqlite3.c                      &lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="c"&gt;#  Build the "amalgamation" source file&lt;/span&gt;
make &lt;span class="nb"&gt;test&lt;/span&gt;                           &lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="c"&gt;#  Run some tests (requires Tcl)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also find these steps in the README.md inside the downloaded SQLite-884b4b7e/ folder.&lt;/p&gt;

&lt;p&gt;After installation, append the binary's path to the Linux &lt;code&gt;$PATH&lt;/code&gt; variable so that sqlite can be run from anywhere on your system. If you skip the command below, you will have to run sqlite from the folder where its binary is stored.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export PATH=$HOME/bld:$PATH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
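Note that export only affects the current shell session. To make the change permanent, append the same line to your shell profile (assuming bash and ~/.bashrc here; adjust for your shell):

```shell
# Persist the PATH change so new shells pick it up (assumes bash).
echo 'export PATH=$HOME/bld:$PATH' >> "$HOME/.bashrc"
```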



&lt;p&gt;Note: if you hit the error below while building the binaries&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exec:  tclsh:  not found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;then install Tcl&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install --reinstall tcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Upgrading python's SQLite
&lt;/h2&gt;

&lt;p&gt;Now you have an upgraded sqlite version on your OS, but you won't be able to access the new version from Python, because Python never uses the &lt;code&gt;sqlite3&lt;/code&gt; binary directly. It always uses a module that is linked against the &lt;code&gt;sqlite3&lt;/code&gt; shared library. &lt;/p&gt;

&lt;p&gt;So when you print the sqlite version from Python, it will still show the old one&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;sqlite3&lt;/span&gt;
&lt;span class="n"&gt;sqlite3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sqlite_version&lt;/span&gt; &lt;span class="c1"&gt;#sqlite_version - sqlite version
&lt;/span&gt;&lt;span class="n"&gt;sqlite3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;version&lt;/span&gt; &lt;span class="c1"&gt;# version - pysqlite version
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make Python use the new sqlite version, we have to update the .so file on Linux.&lt;br&gt;
Go to the directory where you built sqlite from source; you will find &lt;code&gt;libsqlite3.so.0.8.6&lt;/code&gt; in the .libs folder.&lt;br&gt;
Move &lt;code&gt;libsqlite3.so.0.8.6&lt;/code&gt; to &lt;code&gt;/usr/lib/x86_64-linux-gnu/&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/bld/.libs
&lt;span class="nb"&gt;sudo mv &lt;/span&gt;libsqlite3.so.0.8.6 /usr/lib/x86_64-linux-gnu/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
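After replacing the shared library, you can confirm from Python which library version is actually loaded. A small sketch of such a check:

```python
import sqlite3

# sqlite_version reports the C library Python is linked against, so after
# swapping the .so it should show the freshly built release.
version = tuple(int(part) for part in sqlite3.sqlite_version.split("."))
upsert_supported = version >= (3, 24, 0)
```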



&lt;p&gt;Now you are ready to use the new sqlite version from Python.&lt;br&gt;
 &lt;/p&gt;

&lt;h2&gt;
  
  
  Why should you consider using SQLite
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.sqlite.org/index.html"&gt;SQLite&lt;/a&gt;&lt;/strong&gt; is a lightweight, small, and self-contained &lt;strong&gt;RDBMS&lt;/strong&gt; delivered as a C library. Popular databases like &lt;strong&gt;MySQL&lt;/strong&gt;, &lt;strong&gt;PostgreSQL&lt;/strong&gt;, etc. work in a client-server model: they have a dedicated process running and controlling all aspects of database operation.&lt;/p&gt;

&lt;p&gt;But &lt;strong&gt;SQLite&lt;/strong&gt; has no running process and no client-server model. A SQLite DB is simply a file, usually with a &lt;strong&gt;.sqlite3/.sqlite/.db&lt;/strong&gt; extension, and every major programming language has a library to support it.&lt;/p&gt;
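Because the database is just a file, creating one needs nothing but a path. A rough illustration in Python:

```python
import os
import sqlite3
import tempfile

# A SQLite database is a single ordinary file; connect() creates it on demand.
path = os.path.join(tempfile.mkdtemp(), "demo.sqlite3")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE notes (body TEXT)")
conn.commit()
conn.close()
# The entire database now lives in this one file.
db_is_a_file = os.path.isfile(path)
```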

&lt;p&gt;You can find SQLite being used in&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Web browsers(Chrome, Safari, Firefox).&lt;/li&gt;
&lt;li&gt;MP3 players, set-top boxes, and electronic gadgets.&lt;/li&gt;
&lt;li&gt;Internet of Things (IoT).&lt;/li&gt;
&lt;li&gt;Android, Mac, Windows, iOS, and iPhone devices.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are many more areas where &lt;strong&gt;SQLite&lt;/strong&gt; is used. Every smartphone in the world holds hundreds of &lt;strong&gt;SQLite&lt;/strong&gt; database files, and there are over one trillion databases in active use, which is an enormous number.&lt;/p&gt;

</description>
      <category>database</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Undoing Changes in git</title>
      <dc:creator>vivek atwal</dc:creator>
      <pubDate>Wed, 30 Jun 2021 18:46:26 +0000</pubDate>
      <link>https://dev.to/vivekatwal/undoing-changes-in-git-19f0</link>
      <guid>https://dev.to/vivekatwal/undoing-changes-in-git-19f0</guid>
      <description>&lt;h2&gt;
  
  
  Undoing a Commit
&lt;/h2&gt;

&lt;p&gt;Similar to unstaging, we occasionally may want to undo an entire commit. To clarify, when we say "undo a commit", we mean remove that commit from our history, and revert to the point at which the files were staged (with the changes captured in the commit).&lt;br&gt;
    The command used to undo a commit is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git reset --soft HEAD^
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To break this down, &lt;code&gt;HEAD&lt;/code&gt; refers to the current branch, and &lt;code&gt;HEAD^&lt;/code&gt; means one commit back, aka the "parent commit". The &lt;code&gt;--soft&lt;/code&gt; here specifies that we should reset the branch (to point at that parent commit), but otherwise leave the files in the working directory and the index untouched.&lt;/p&gt;

&lt;p&gt;This has the effect of undoing the commit, taking us back to just before we made it. Note that it leaves our changes staged in the index.&lt;/p&gt;
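The effect is easy to see in a throwaway repository (the user name and email below are placeholders):

```shell
# Two commits, then undo the latest one with a soft reset.
git init demo
cd demo
git config user.email you@example.com
git config user.name you
echo one > a.txt
git add a.txt
git commit -m "first"
echo two >> a.txt
git add a.txt
git commit -m "second"
git reset --soft HEAD^   # branch now points back at "first"
git log --oneline        # only "first" remains in history
git status --short       # a.txt is still staged, ready to re-commit
```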

&lt;p&gt;While useful, this is a complex command to remember, so once again we can create an alias, mapping this operation to the intuitive "uncommit" alias name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git config --global alias.uncommit 'reset --soft HEAD^'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;&lt;br&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Unstaging Files
&lt;/h2&gt;

&lt;p&gt;Occasionally, after making a round of changes, we realize that some of them are entirely unrelated to the others. &lt;br&gt;
Perhaps we've run &lt;code&gt;git add .&lt;/code&gt;, but we then realize that we only want to commit two of the three files that are now staged.&lt;br&gt;
Once again, we can use a simple Git command to undo this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git reset filename
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;&lt;br&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Checkout to Undo Changes
&lt;/h2&gt;

&lt;p&gt;To discard the changes to a particular file in the working directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout filename
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To discard all the changes in the working directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; This is one of the few dangerous operations in Git. If you run &lt;code&gt;checkout&lt;/code&gt; without staging or committing your changes, Git will destroy your work and you will not be able to get it back. Be sure to use caution with &lt;code&gt;git checkout .&lt;/code&gt;!&lt;/p&gt;
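To see the danger concretely, in a scratch repository (placeholder identity again) an uncommitted edit vanishes the moment checkout runs:

```shell
git init scratch
cd scratch
git config user.email you@example.com
git config user.name you
echo original > note.txt
git add note.txt
git commit -m "save"
echo scratch-work >> note.txt   # an uncommitted, unstaged edit
git checkout note.txt           # the edit is destroyed, with no way back
cat note.txt                    # the file is back to its committed content
```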

</description>
      <category>github</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
