<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: anilkulkarni87</title>
    <description>The latest articles on DEV Community by anilkulkarni87 (@anilkulkarni87).</description>
    <link>https://dev.to/anilkulkarni87</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F137606%2Fde2bd565-318b-4c11-a836-0aeb88318a39.jpeg</url>
      <title>DEV Community: anilkulkarni87</title>
      <link>https://dev.to/anilkulkarni87</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/anilkulkarni87"/>
    <language>en</language>
    <item>
      <title>Treasure data digdag visualization</title>
      <dc:creator>anilkulkarni87</dc:creator>
      <pubDate>Fri, 25 Mar 2022 00:02:42 +0000</pubDate>
      <link>https://dev.to/anilkulkarni87/treasure-data-digdag-visualization-3n66</link>
      <guid>https://dev.to/anilkulkarni87/treasure-data-digdag-visualization-3n66</guid>
<description>&lt;p&gt;Treasure Data is a leading CDP (Customer Data Platform) and internally leverages Digdag for its data pipelines. Digdag is similar to Airflow in that we define tasks inside a DAG, and it is available as open source. The CDP is a powerful platform, but unfortunately there is no easy way to visualize the pipelines, which makes onboarding hard when complex pipelines have already been built. So I followed the sequence of steps below to solve this problem for myself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution Approach
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create Visualization&lt;/li&gt;
&lt;li&gt;Make it colorful&lt;/li&gt;
&lt;li&gt;Update the graph every time there is a code change.&lt;/li&gt;
&lt;li&gt;Make it part of CI/CD&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://anilkulkarni87.github.io/treasure-data-digdag-graph/"&gt;Navigate to html&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Usage example
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Update line 131 with the project for which the graphs need to be generated.&lt;/li&gt;
&lt;li&gt;Execute the python program to generate the graphs.&lt;/li&gt;
&lt;/ul&gt;
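&lt;p&gt;The generation script itself lives in the repo, but the core idea of turning a .dig workflow into a Graphviz DOT graph can be sketched roughly like this. This is a stdlib-only toy, not the actual script: the task names are made up, and it only handles a flat, sequential workflow (real .dig files nest and parallelize tasks):&lt;/p&gt;

```python
import re

def dig_to_dot(dig_text: str) -> str:
    # Collect top-level "+task:" names in order; nested or parallel tasks
    # in real .dig files are ignored by this toy parser.
    tasks = re.findall(r"^\+(\w+):", dig_text, flags=re.MULTILINE)
    # Chain consecutive tasks into DOT edges.
    edges = [f'  "{a}" -> "{b}";' for a, b in zip(tasks, tasks[1:])]
    return "digraph workflow {\n" + "\n".join(edges) + "\n}"

workflow = """
+extract:
  td>: queries/extract.sql
+transform:
  td>: queries/transform.sql
+load:
  td>: queries/load.sql
"""
print(dig_to_dot(workflow))
```

&lt;p&gt;The resulting DOT text can then be rendered with Graphviz (e.g. &lt;code&gt;dot -Tsvg&lt;/code&gt;) to get the visualization.&lt;/p&gt;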

&lt;h4&gt;
  
  
  TODO
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Parametrize the dir_name (Line 131) &lt;/li&gt;
&lt;li&gt;Create a GitHub workflow to generate graphs when there are changes.

&lt;ul&gt;
&lt;li&gt;Identify the changes at the folder level &lt;/li&gt;
&lt;li&gt;Execute the python script to build a graph for each folder with a change&lt;/li&gt;
&lt;li&gt;Commit the graphs back to the branch&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/y-abe/digdag-graph"&gt;https://github.com/y-abe/digdag-graph&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>treasuredata</category>
    </item>
    <item>
      <title>Airflow Api tests</title>
      <dc:creator>anilkulkarni87</dc:creator>
      <pubDate>Mon, 09 Aug 2021 16:54:13 +0000</pubDate>
      <link>https://dev.to/anilkulkarni87/airflow-api-tests-5fgb</link>
      <guid>https://dev.to/anilkulkarni87/airflow-api-tests-5fgb</guid>
      <description>&lt;h1&gt;
  
  
  airflow-api-tests
&lt;/h1&gt;

&lt;p&gt;This is a collection of pytest tests for the Apache Airflow 2.0 stable REST APIs. I have another repo where you can set up Airflow locally and play around with these. I am used to REST Assured, but I am trying out pytest here.&lt;/p&gt;

&lt;h3&gt;Apache Airflow 2.0 Stable REST API calls - Python&lt;/h3&gt;

&lt;p&gt; I am used to REST Assured for API testing. Here I am trying out Python API testing with the Airflow stable REST API.
    &lt;br&gt; 
&lt;/p&gt;

&lt;h2&gt;
  
  
  📝 Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;About&lt;/li&gt;
&lt;li&gt;
Uses
&lt;/li&gt;
&lt;li&gt;Getting Started&lt;/li&gt;
&lt;li&gt;Usage&lt;/li&gt;
&lt;li&gt;Running the tests&lt;/li&gt;
&lt;li&gt;Github Workflow&lt;/li&gt;
&lt;li&gt;Airflow Apis&lt;/li&gt;
&lt;li&gt;Authors&lt;/li&gt;
&lt;li&gt;Acknowledgments&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🧐 About
&lt;/h2&gt;

&lt;p&gt;This repository contains API calls to the stable Airflow REST APIs. Having worked with Airflow, I had felt the lack of a stable API; Airflow 2.0 resolved that issue for me.&lt;br&gt;
It is also an attempt to understand API testing with Python, as I usually prefer REST Assured (Java). This is a WIP repo as I continue to build and add calls for all the APIs. &lt;br&gt;
I have another repo which can help people set up Airflow locally and then play around with these APIs.&lt;/p&gt;
&lt;h2&gt;
  
  
  Uses
&lt;/h2&gt;

&lt;p&gt;Listing out some reasons why I created this for myself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Learn about the Airflow APIs.&lt;/li&gt;
&lt;li&gt;Learn about Python API testing.&lt;/li&gt;
&lt;li&gt;Understand use cases of the APIs and document them for future needs.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  🏁 Getting Started
&lt;/h2&gt;

&lt;p&gt;Sequence of steps to follow to use this successfully:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clone the &lt;a href="https://github.com/anilkulkarni87/airflow-docker"&gt;airflow-docker&lt;/a&gt; repo.&lt;/li&gt;
&lt;li&gt;Follow the instructions there to start running airflow locally.&lt;/li&gt;
&lt;li&gt;Execute some DAGs that are part of my repo or create your own DAGs.&lt;/li&gt;
&lt;li&gt;Start playing with the APIs.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  🎈 Usage
&lt;/h2&gt;
&lt;h2&gt;
  
  
  🔧 Running the tests
&lt;/h2&gt;

&lt;p&gt;Here is an example of how you could run the tests. I will continue to evolve this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pytest test_dag.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Github Workflow for running tests
&lt;/h2&gt;

&lt;p&gt;I added this step to understand more about GitHub workflows and how I can leverage them for this specific use case. Essentially, within the workflow I will have to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clone airflow-docker repo&lt;/li&gt;
&lt;li&gt;Start Airflow&lt;/li&gt;
&lt;li&gt;Run these tests&lt;/li&gt;
&lt;li&gt;Do something with the results&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  ⛏️Airflow APIs
&lt;/h2&gt;

&lt;p&gt;I know you could get your hands on the Swagger doc for these, but I still wanted to list them down here.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/anilkulkarni87/airflow-api-tests/blob/main/tests/test_config.py"&gt;Get Config&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;This is usually forbidden by the administrator for security reasons.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/anilkulkarni87/airflow-api-tests/blob/main/tests/test_connection.py"&gt;Connection&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Get connections: list all connections currently available&lt;/li&gt;
&lt;li&gt;Post: create a new connection ID&lt;/li&gt;
&lt;li&gt;Patch: update an existing connection ID&lt;/li&gt;
&lt;li&gt;Delete: delete a connection&lt;/li&gt;
&lt;li&gt;Get connection by ID: get details of a specific connection based on its ID&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/anilkulkarni87/airflow-api-tests/blob/main/tests/test_dag.py"&gt;DAG&lt;/a&gt; 

&lt;ul&gt;
&lt;li&gt;Get DAG source code: we can get the DAG source code by passing a file token; the file_token is obtained by calling Get DAG by DAG ID&lt;/li&gt;
&lt;li&gt;Get DAGs: get all DAGs&lt;/li&gt;
&lt;li&gt;Get DAG info: get basic info about a DAG&lt;/li&gt;
&lt;li&gt;Patch: update the DAG. You can refer to the example I have.&lt;/li&gt;
&lt;li&gt;Post - clear task instances of a DAG: #TODO&lt;/li&gt;
&lt;li&gt;Get DAG details: get basic info about DAGs&lt;/li&gt;
&lt;li&gt;Get DAG tasks: get all tasks for the DAG.&lt;/li&gt;
&lt;li&gt;Get task details: get a representation of the task&lt;/li&gt;
&lt;li&gt;Post UpdateTaskInstanceState: #TODO&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
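&lt;p&gt;To make the Connection calls above concrete, here is a rough sketch of the request bodies they take. The field names follow the stable API's connection schema, but the connection values are made up:&lt;/p&gt;

```python
import json

def connection_payload(conn_id: str, conn_type: str, **fields) -> str:
    # Body for POST /connections (create) or PATCH /connections/{id} (update).
    body = {"connection_id": conn_id, "conn_type": conn_type, **fields}
    return json.dumps(body)

# Post: create a new connection ID (illustrative values).
create = connection_payload("my_postgres", "postgres", host="localhost", port=5432)

# Patch: send only the fields that changed on an existing connection.
update = connection_payload("my_postgres", "postgres", host="db.internal")

print(create)
```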

&lt;h2&gt;
  
  
  ✍️ Authors
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/anilkulkarni87"&gt;@anilkulkarni87&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>airflow</category>
      <category>airflowapi</category>
      <category>apitest</category>
    </item>
    <item>
      <title>Publish blog posts from GIT to dev.to</title>
      <dc:creator>anilkulkarni87</dc:creator>
      <pubDate>Sat, 03 Jul 2021 01:09:58 +0000</pubDate>
      <link>https://dev.to/anilkulkarni87/publish-blog-posts-from-git-to-dev-to-32ff</link>
      <guid>https://dev.to/anilkulkarni87/publish-blog-posts-from-git-to-dev-to-32ff</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/anilkulkarni87/dev.to/actions"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D6hgeqwO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/anilkulkarni87/dev.to/workflows/CI/badge.svg" alt="Actions Status"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Publish blog posts from GitHub to dev.to
&lt;/h1&gt;

&lt;p&gt;This is a template repo where I am testing and building a solution for automated publishing of my articles/blog posts to dev.to. It's just a lot of work to post the same article in multiple places, so I decided to automate publishing my posts to dev.to and eventually to any other personal blogs.&lt;/p&gt;

&lt;p&gt;dev.to exposes an API through which you can do a lot of things. I am currently only creating articles, which will land in your drafts. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_FWVqtYO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/10644132/124338461-f7744b00-db5c-11eb-8047-eb073cc632f3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_FWVqtYO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/10644132/124338461-f7744b00-db5c-11eb-8047-eb073cc632f3.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Generated via code (Python)&lt;/p&gt;

&lt;h2&gt;
  
  
  Approach
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Clone this repo, cleanup the files and arrange your blog posts.&lt;/li&gt;
&lt;li&gt;Create a markdown file for your blog post. You can check the folder structure in the repo.&lt;/li&gt;
&lt;li&gt;Set up your DEV token in your repo under the name &lt;code&gt;GIT_TO_DEV&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Commit the changes made to your .md files.&lt;/li&gt;
&lt;li&gt;Voilà: after the workflow completes, you can see the blog posts created as drafts. &lt;/li&gt;
&lt;li&gt;Also, I am persisting the JSON payload of the article with the ID populated.&lt;/li&gt;
&lt;/ul&gt;
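&lt;p&gt;The publishing step boils down to parsing the front matter of each changed .md file and POSTing it to the dev.to articles API with the token. Here is a stdlib-only sketch of that flow; the front-matter parser is deliberately minimal (real front matter is YAML), and the sample post is made up:&lt;/p&gt;

```python
import json
import urllib.request

def parse_front_matter(md: str):
    # Split "---\nkey: value\n---\nbody" into a metadata dict and the body.
    _, header, body = md.split("---", 2)
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

def article_payload(md: str) -> dict:
    # published=False keeps the article in your drafts.
    meta, body = parse_front_matter(md)
    return {"article": {"title": meta["title"], "body_markdown": body, "published": False}}

def publish(md: str, token: str):
    # POST the draft to dev.to; in CI the token comes from the GIT_TO_DEV secret.
    req = urllib.request.Request(
        "https://dev.to/api/articles",
        data=json.dumps(article_payload(md)).encode(),
        headers={"api-key": token, "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

post = """---
title: Hello dev.to
---
My first automated post."""
print(article_payload(post))
```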

&lt;h3&gt;
  
  
  Next steps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Identify new files and modified markdown files&lt;/li&gt;
&lt;li&gt;For new files, create new articles&lt;/li&gt;
&lt;li&gt;For modified files, update the articles&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Thanks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/Ana06/get-changed-files"&gt;Ana María Martínez Gómez&lt;/a&gt; - Tracking changed files&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/stefanzweifel/git-auto-commit-action"&gt;Stefan Zweifel&lt;/a&gt; - Committing the changes file from a workflow&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/maxime1992/manage-your-dev-to-blog-posts-from-a-git-repo-and-use-continuous-deployment-to-auto-publish-update-them-143j"&gt;Maxime&lt;/a&gt; - For the triggering a thought&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/eyeseast/python-frontmatter"&gt;Chris Amico&lt;/a&gt; - For parsing front matter from Markdown file&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Bugs/Changes
&lt;/h3&gt;

&lt;p&gt;Please fix any typos or errors and create a pull request to make this better.&lt;/p&gt;

</description>
      <category>devto</category>
      <category>publication</category>
      <category>continuousdeployment</category>
    </item>
    <item>
      <title>How to create LogicalType in Apache Avro </title>
      <dc:creator>anilkulkarni87</dc:creator>
      <pubDate>Mon, 21 Dec 2020 20:24:37 +0000</pubDate>
      <link>https://dev.to/anilkulkarni87/how-to-create-logicaltype-in-apache-avro-5c40</link>
      <guid>https://dev.to/anilkulkarni87/how-to-create-logicaltype-in-apache-avro-5c40</guid>
<description>&lt;p&gt;I am writing this post to understand how to create a custom LogicalType in Apache Avro. LogicalTypes help in enforcing strict schema validation for data in transit as well as at rest within a system. I am dividing this into broader sections:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define a schema using Avro IDL&lt;/li&gt;
&lt;li&gt;Generate Java classes using gradle avro plugin&lt;/li&gt;
&lt;li&gt;Define Custom Logical Type&lt;/li&gt;
&lt;li&gt;Test the Custom LogicalType&lt;/li&gt;
&lt;li&gt;Next Steps (There is always a better way)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hZPu56cn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/anilkulkarni.com/wp-content/uploads/2020/12/Avro-IDL.jpg%3Fresize%3D320%252C204%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hZPu56cn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/anilkulkarni.com/wp-content/uploads/2020/12/Avro-IDL.jpg%3Fresize%3D320%252C204%26ssl%3D1" alt="LogicalType in Avro"&gt;&lt;/a&gt;Define schema using Avro IDLLets go over in detail how to create a Custom LogicalType in Apache Avro.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Define a schema using Avro IDL&lt;/strong&gt; Any schema, from simple to complex, can be defined easily using Avro IDL. There are Maven and Gradle plugins which aid in generating .avsc files and Java classes for the defined schema. Below is a sample &lt;a href="https://github.com/anilkulkarni87/AvroLogicalType/blob/master/src/main/java/com/lina/avdl/query.avdl"&gt;schema &lt;/a&gt;defined by me. Read more about &lt;a href="https://avro.apache.org/docs/current/idl.html"&gt;Avro IDL&lt;/a&gt; here. As a standard practice, we usually define default values for our fields to accommodate backward compatibility whenever the schema changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate Java classes and schema file&lt;/strong&gt; The Gradle Avro plugin comes to our rescue. The plugin has been updated many times; you might notice that what I have is an older version. The first task creates Avro protocol files, the next task creates the schema files (.avsc), and the final task creates the required Java classes.
The complete build file can be found &lt;a href="https://github.com/anilkulkarni87/AvroLogicalType/blob/master/build.gradle"&gt;here&lt;/a&gt;. &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---3H_tadl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/anilkulkarni.com/wp-content/uploads/2020/12/gradle-avro-plugin.jpg%3Fw%3D665%26ssl%3D1" alt="Gradle Avro Plugin"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Define Custom LogicalType&lt;/strong&gt; A logical type is an Avro primitive or complex type with extra attributes to represent a derived type. This is basically a custom-defined type and can be leveraged for many use cases. Notice the “reversed” keyword in the .avdl file in Step 1.
We need to define the LogicalType and the custom conversion for it. For example, what I am doing here is appending the word ‘&lt;em&gt;reversed&lt;/em&gt;‘ to the data ingested as part of the ‘&lt;em&gt;queryAuthor&lt;/em&gt;‘ field.
Use cases:

&lt;ul&gt;
&lt;li&gt;Create a type for encryption which encrypts the data ingested and decrypts the field later.&lt;/li&gt;
&lt;li&gt;Create a definition for Address (Line1, Line2, Line3, State, etc.) and use that as a custom logical type.&lt;/li&gt;
&lt;li&gt;Choose to do a one-way hash of the data ingested.&lt;/li&gt;
&lt;/ul&gt;

You can find the LogicalType and Conversion classes here:
Logical Type: &lt;a href="https://github.com/anilkulkarni87/AvroLogicalType/blob/master/src/main/java/com/lina/customAvro/ReversedLogicalType.java"&gt;Click here&lt;/a&gt;
Conversion: &lt;a href="https://github.com/anilkulkarni87/AvroLogicalType/blob/master/src/main/java/com/lina/customAvro/ReversedConversion.java"&gt;Click here&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Another important step is to register this LogicalType, which I am doing in the main method:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;public class AvroExample {
  public static void main(String[] args) {
    LogicalTypes.register(ReversedLogicalType.REVERSED_LOGICAL_TYPE_NAME,
        new LogicalTypes.LogicalTypeFactory() {
          private final LogicalType reversedLogicalType = new ReversedLogicalType();

          @Override
          public LogicalType fromSchema(Schema schema) {
            return reversedLogicalType;
          }
        });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;What I am doing in the custom conversion is self-explanatory if you look at the code below.&lt;br&gt;
  &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pb-2PgiR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/anilkulkarni.com/wp-content/uploads/2020/12/CustomConversion.jpg%3Fw%3D665%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pb-2PgiR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/anilkulkarni.com/wp-content/uploads/2020/12/CustomConversion.jpg%3Fw%3D665%26ssl%3D1" alt="Image of Custom conversion class "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Test the Custom LogicalType&lt;/strong&gt; My approach to testing the LogicalType is:
&lt;em&gt;a) Write data to a .avro file.
b) Read the data back from the file.
c) Read the file using avro-tools.&lt;/em&gt;
This is better explained with some screenshots:
in the main method I am creating an avro file, and we shall see the value before the data is saved as a .avro file. Then we read the data from the file and print it before the conversion happens. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MoAEyC0z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/anilkulkarni.com/wp-content/uploads/2020/12/Testing-LogicalType.jpg%3Fw%3D665%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MoAEyC0z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/anilkulkarni.com/wp-content/uploads/2020/12/Testing-LogicalType.jpg%3Fw%3D665%26ssl%3D1" alt="Testing Logical Type"&gt;&lt;/a&gt;&lt;br&gt;
  We also see the data in the avro file using avro-tools.&lt;br&gt;
  &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gSM4wrTg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/anilkulkarni.com/wp-content/uploads/2020/12/Avrotools-cmdPretty.jpg%3Fresize%3D282%252C130%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gSM4wrTg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i1.wp.com/anilkulkarni.com/wp-content/uploads/2020/12/Avrotools-cmdPretty.jpg%3Fresize%3D282%252C130%26ssl%3D1" alt="Avro-tools"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Next Steps or TODO:&lt;/strong&gt; There is always a better way to do it. I am going to work on these and will probably come up with another post.
&lt;em&gt;– The GitHub link is &lt;a href="https://github.com/anilkulkarni87/AvroLogicalType"&gt;here&lt;/a&gt;.
– Leverage the latest version of the plugin to register the LogicalType as part of the build.
– Notice the TODO in &lt;a href="https://github.com/anilkulkarni87/AvroLogicalType/blob/master/src/main/java/com/lina/query/QueryRecord.java"&gt;QueryRecord.java&lt;/a&gt;.
– Explore modifying Velocity templates to include the LogicalTypes in the auto-generated code.&lt;/em&gt;
&lt;em&gt;– Some of my other &lt;a href="https://anilkulkarni.com/2019/06/how-to-view-spark-history-logs-locally/"&gt;learnings&lt;/a&gt;.&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The post &lt;a href="https://anilkulkarni.com/2020/12/how-to-create-logicaltype-in-apache-avro/"&gt;How to create LogicalType in Apache Avro&lt;/a&gt; appeared first on &lt;a href="https://anilkulkarni.com"&gt;Anil Kulkarni | Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>apacheavro</category>
      <category>gradle</category>
      <category>java</category>
      <category>logicaltype</category>
    </item>
    <item>
      <title>Selenide - UI tests in minutes </title>
      <dc:creator>anilkulkarni87</dc:creator>
      <pubDate>Mon, 09 Mar 2020 22:09:48 +0000</pubDate>
      <link>https://dev.to/anilkulkarni87/selenide-ui-tests-in-minutes-1bbc</link>
      <guid>https://dev.to/anilkulkarni87/selenide-ui-tests-in-minutes-1bbc</guid>
      <description>&lt;p&gt;&lt;a href="https://selenide.org/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZuzHgisk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i2.wp.com/anilkulkarni.com/wp-content/uploads/2020/03/selenide-logo-big.png%3Fw%3D665" alt="Selenide - UI tests in minutes"&gt;&lt;/a&gt;Selenide – UI tests in minutes&lt;/p&gt;

&lt;p&gt;Selenide is what I discovered as part of a new assignment at work. It involved testing a product management application, the majority of it a web-based UI.&lt;/p&gt;

&lt;p&gt;I took a week to get a deeper understanding of the assignment, both in technical and business terms. Although automation was said to be out of scope for now, I wanted to make my work easier. My hunt for suitable tools ended with Selenide.&lt;/p&gt;

&lt;p&gt;It took me around 4 days to automate a considerable number of scenarios, which could be presented as ROI or effort savings. Setting up the project with the right tech stack took the most time; I spent around 1.5 days writing tests. The stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Selenide – For UI tests&lt;/li&gt;
&lt;li&gt;Maven – Build tool&lt;/li&gt;
&lt;li&gt;Junit5 – For writing tests&lt;/li&gt;
&lt;li&gt;Allure – For reporting&lt;/li&gt;
&lt;li&gt;Docker* – For generating an emailable report.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The intent of this post is to share the project setup so that anyone looking for something similar can avoid the effort of setting the project up. You can find it on my &lt;a href="https://github.com/anilkulkarni87/SelenideFramework"&gt;github &lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A few things that I learnt while working on this are listed here:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ability to &lt;a href="https://github.com/anilkulkarni87/SelenideFramework/blob/2df2a73f9435ded2150e51b896bc785bff26cf63/src/main/java/com/lina/utils/LogUtil.java#L11"&gt;log custom messages&lt;/a&gt; in an allure report.&lt;/li&gt;
&lt;li&gt;Ability to use &lt;a href="https://github.com/anilkulkarni87/SelenideFramework/blob/master/src/main/java/com/lina/utils/ExcelUtils.java"&gt;excel sheet as test data input&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Java &lt;a href="https://github.com/anilkulkarni87/SelenideFramework/blob/master/src/main/java/com/lina/framework/EnumData.java"&gt;Enum&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Junit5 &lt;a href="https://github.com/anilkulkarni87/SelenideFramework/blob/2df2a73f9435ded2150e51b896bc785bff26cf63/src/test/java/GoogleSearchTest.java#L47"&gt;parametrized &lt;/a&gt;test.&lt;/li&gt;
&lt;li&gt;Understanding of tags in JUnit&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Selenide has definitely worked very well for my use cases, and I would recommend people try it. There are a lot more things that I would like to &lt;a href="https://anilkulkarni.com/2019/12/learnings-from-my-last-assignment/"&gt;learn&lt;/a&gt;/understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Better way of handling Test data.&lt;/li&gt;
&lt;li&gt;Leveraging picocli or other argument parsers for executing specific tests/modules.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://anilkulkarni.com/2020/03/selenide-ui-tests-in-minutes/"&gt;Selenide – UI tests in minutes&lt;/a&gt; appeared first on &lt;a href="https://anilkulkarni.com"&gt;Anil Kulkarni | Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>junit5</category>
      <category>tutorial</category>
      <category>selenide</category>
    </item>
    <item>
      <title>Json Schema and Json Validation </title>
      <dc:creator>anilkulkarni87</dc:creator>
      <pubDate>Fri, 14 Feb 2020 07:51:20 +0000</pubDate>
      <link>https://dev.to/anilkulkarni87/json-schema-and-json-validation-52f7</link>
      <guid>https://dev.to/anilkulkarni87/json-schema-and-json-validation-52f7</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fanilkulkarni.com%2Fwp-content%2Fuploads%2F2020%2F02%2Fimage.png%3Ffit%3D665%252C572%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fanilkulkarni.com%2Fwp-content%2Fuploads%2F2020%2F02%2Fimage.png%3Ffit%3D665%252C572%26ssl%3D1"&gt;&lt;/a&gt;A sample json schema&lt;br&gt;
A discussion in one of my office meetings led me to think and write about JSON Schema. Every post I write is an attempt to get a better understanding of the topic and to keep it as a reference for my future projects.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;How do you validate the complex nested Json data files?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One approach I can think of is defining a schema and validating the JSON data against it. I had used this approach to validate an Avro schema.&lt;/p&gt;

&lt;h2&gt;
  
  
  Define JSON Schema:
&lt;/h2&gt;

&lt;p&gt;This is the most time-consuming part, but if done right, the JSON validation is a cakewalk. JSON Schema came to my rescue. After reading up on it, I was able to understand most of the concepts, write my own &lt;a href="https://github.com/anilkulkarni87/JsonSchema/blob/master/src/main/resources/customer_schema.json" rel="noopener noreferrer"&gt;schema&lt;/a&gt;, and validate a couple of data files. Below are some of the things you can define/achieve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mandatory values.&lt;/li&gt;
&lt;li&gt;Nested objects and fields.&lt;/li&gt;
&lt;li&gt;Data type and additional criteria: 

&lt;ul&gt;
&lt;li&gt;Strings can have a format: email, uuid, etc.&lt;/li&gt;
&lt;li&gt;Strings can be matched against a pattern.&lt;/li&gt;
&lt;li&gt;Integers can have a maximum, multipleOf, etc.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
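&lt;p&gt;The actual validation in this post happens through a Gradle plugin, but the constraints listed above can be illustrated with a toy schema and a minimal hand-rolled check in Python. A real project would use a proper JSON Schema validator library; the field names here are made up:&lt;/p&gt;

```python
import re

# A toy schema exercising the constraints listed above (illustrative fields).
schema = {
    "required": ["email", "quantity"],
    "properties": {
        "email": {"pattern": r"^[^@\s]+@[^@\s]+\.[^@\s]+$"},
        "quantity": {"maximum": 100, "multipleOf": 5},
    },
}

def validate(doc: dict, schema: dict) -> list:
    # Return a list of human-readable errors; an empty list means valid.
    errors = [f"'{k}' is required" for k in schema["required"] if k not in doc]
    for key, rules in schema["properties"].items():
        if key not in doc:
            continue
        value = doc[key]
        if "pattern" in rules and not re.match(rules["pattern"], str(value)):
            errors.append(f"'{key}' does not match pattern")
        if "maximum" in rules and value > rules["maximum"]:
            errors.append(f"'{key}' exceeds maximum {rules['maximum']}")
        if "multipleOf" in rules and value % rules["multipleOf"]:
            errors.append(f"'{key}' is not a multiple of {rules['multipleOf']}")
    return errors

print(validate({"email": "a@b.com", "quantity": 25}, schema))  # -> []
print(validate({"quantity": 102}, schema))
```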

&lt;h2&gt;
  
  
  JSON Validation
&lt;/h2&gt;

&lt;p&gt;I leveraged a Gradle plugin to implement the validation as part of the build, so that the next steps of the build are executed only if the validation task is successful. The errors printed are very precise and easy to understand. You can find this sample project on my &lt;a href="https://github.com/anilkulkarni87/JsonSchema" rel="noopener noreferrer"&gt;github&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fanilkulkarni.com%2Fwp-content%2Fuploads%2F2020%2F02%2Fimage-1.png%3Ffit%3D665%252C175%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fanilkulkarni.com%2Fwp-content%2Fuploads%2F2020%2F02%2Fimage-1.png%3Ffit%3D665%252C175%26ssl%3D1" alt="Output from the gradle plugin."&gt;&lt;/a&gt;Validation Output&lt;/p&gt;

&lt;h4&gt;
  
  
  Next steps and other thoughts:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Add examples of different schemas, e.g. using definitions and references.&lt;/li&gt;
&lt;li&gt;One can also generate a schema from a JSON document and then improve on it.&lt;/li&gt;
&lt;li&gt;For the first time, I used GitHub Actions to build the project.&lt;/li&gt;
&lt;li&gt;I should write more about some of my other &lt;a href="https://anilkulkarni.com/2019/12/learnings-from-my-last-assignment/" rel="noopener noreferrer"&gt;learnings&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://anilkulkarni.com/2020/02/json-schema-and-json-validation/" rel="noopener noreferrer"&gt;Json Schema and Json Validation&lt;/a&gt; appeared first on &lt;a href="https://anilkulkarni.com" rel="noopener noreferrer"&gt;Anil Kulkarni | Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>json</category>
      <category>jsonschema</category>
      <category>jsonvalidation</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to view Spark History logs locally </title>
      <dc:creator>anilkulkarni87</dc:creator>
      <pubDate>Sat, 04 Jan 2020 00:28:01 +0000</pubDate>
      <link>https://dev.to/anilkulkarni87/how-to-view-spark-history-logs-locally-4656</link>
      <guid>https://dev.to/anilkulkarni87/how-to-view-spark-history-logs-locally-4656</guid>
      <description>&lt;h1&gt;
  
  
  How to: View Spark History logs locally
&lt;/h1&gt;

&lt;p&gt;Spark history logs are very valuable when you are trying to analyze the stats of a specific job. In cases where you are working with a large cluster where multiple users are executing jobs, or when you have an ephemeral cluster and want to retain your logs for future analysis, here’s a way to do it locally.&lt;/p&gt;

&lt;p&gt;How do you analyze the spark history logs locally?&lt;/p&gt;

&lt;h2&gt;
  
  
  Download logs from Spark History Server
&lt;/h2&gt;

&lt;p&gt;The most important thing here and the first step is to download the spark-history logs from the UI before your cluster goes down.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fanilkulkarni.com%2Fwp-content%2Fuploads%2F2019%2F06%2Fnull.png%3Fresize%3D624%252C116%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fanilkulkarni.com%2Fwp-content%2Fuploads%2F2019%2F06%2Fnull.png%3Fresize%3D624%252C116%26ssl%3D1" alt="Spark History Server"&gt;&lt;/a&gt;Spark History Server&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Spark History Server Locally
&lt;/h2&gt;

&lt;p&gt;The steps below will help in setting up the history server locally to analyze the logs.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On macOS: &lt;code&gt;brew install apache-spark&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create a directory for the logs.&lt;/li&gt;
&lt;li&gt;Move the logs downloaded in the previous step to the logs directory and unpack them.&lt;/li&gt;
&lt;li&gt;Create a file named &lt;code&gt;log.properties&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Inside log.properties, add &lt;code&gt;spark.history.fs.logDirectory=&amp;lt;path to the spark-logs directory&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;code&gt;/usr/local/Cellar/apache-spark/&amp;lt;version&amp;gt;/libexec/sbin&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Execute &lt;code&gt;sh start-history-server.sh --properties-file &amp;lt;path to log.properties&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;a href="http://localhost:18080" rel="noopener noreferrer"&gt;http://localhost:18080&lt;/a&gt; in a browser.&lt;/li&gt;
&lt;/ol&gt;
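
&lt;p&gt;As a minimal sketch, the &lt;code&gt;log.properties&lt;/code&gt; file from the steps above might look like this (the log directory path is an example; point it at wherever you unpacked the logs):&lt;/p&gt;

```properties
# Example path; use the directory created in step 2
spark.history.fs.logDirectory=file:///Users/me/spark-logs
# Optional: UI port for the History Server (18080 is the default)
spark.history.ui.port=18080
```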

&lt;p&gt;Now you can view and analyze the logs locally.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://anilkulkarni.com/2019/06/how-to-view-spark-history-logs-locally/" rel="noopener noreferrer"&gt;How to view Spark History logs locally&lt;/a&gt; appeared first on &lt;a href="https://anilkulkarni.com" rel="noopener noreferrer"&gt;Anil Kulkarni | Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>logs</category>
      <category>spark</category>
      <category>sparkhistory</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Learnings from my last assignment </title>
      <dc:creator>anilkulkarni87</dc:creator>
      <pubDate>Fri, 03 Jan 2020 06:28:22 +0000</pubDate>
      <link>https://dev.to/anilkulkarni87/learnings-from-my-last-assignment-4gko</link>
      <guid>https://dev.to/anilkulkarni87/learnings-from-my-last-assignment-4gko</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fanilkulkarni.com%2Fwp-content%2Fuploads%2F2019%2F12%2FCollage-of-Logos.jpg%3Fw%3D665" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fanilkulkarni.com%2Fwp-content%2Fuploads%2F2019%2F12%2FCollage-of-Logos.jpg%3Fw%3D665"&gt;&lt;/a&gt;Learnings from my last assignment&lt;/p&gt;

&lt;p&gt;The year 2019, as well as my previous work assignment, was very interesting, challenging, and educational. I happened to work on different technologies and domains. &lt;br&gt;
Here I am jotting down the learnings from my last assignment. While I learnt many things, the most important lesson to me echoes a great man’s words: “&lt;strong&gt;All I know is, Nothing&lt;/strong&gt;”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Standards
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The importance of coding and logging standards for an organization, or at least a team. Defining those standards upfront clearly saves effort as the application grows bigger.&lt;/li&gt;
&lt;li&gt;Importance of defining metrics and having a clear distinction between application and business metrics.&lt;/li&gt;
&lt;li&gt;Identifying standards for publishing events and consuming them.&lt;/li&gt;
&lt;li&gt;Identifying data storage formats. I worked mainly with Apache Avro and saw the advantages it brings when a schema evolves.&lt;/li&gt;
&lt;li&gt;The importance of encryption and tokenization standards for the data we deal with. Privacy is no longer a wish-list item but a law. This also helped me understand the difference between the two terms.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Apache Avro
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Importance of defining a schema/model for the data we deal with and how Avro enforces certain data quality checks in the pipeline.&lt;/li&gt;
&lt;li&gt;I also understood what schema evolution is and how it needs to be a planned move, enforcing schema compatibility checks before changing schemas.&lt;/li&gt;
&lt;li&gt;Avro IDL makes it super easy to define/design schemas for the data we deal with.&lt;/li&gt;
&lt;li&gt;Challenges involved with Union and complex union types.&lt;/li&gt;
&lt;li&gt;Impacts of breaking schema changes on production systems and probable solutions to handle that.&lt;/li&gt;
&lt;li&gt;Defining our own custom logical types as part of Avro, for example for encryption and tokenization.&lt;/li&gt;
&lt;/ul&gt;
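
&lt;p&gt;As an illustration of how concise Avro IDL can be, here is a hypothetical schema (all names are made up for this example) with a backward-compatible field addition:&lt;/p&gt;

```avdl
// Hypothetical protocol; record and field names are examples only.
protocol CustomerEvents {
  record Customer {
    string id;
    string email;
    // New optional field with a default: a backward-compatible evolution.
    union { null, string } phone = null;
  }
}
```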

&lt;h2&gt;
  
  
  Kafka
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I learnt a lot about how Kafka can fit into some applications, especially when they are event-based.&lt;/li&gt;
&lt;li&gt;Difference between System and Business events.&lt;/li&gt;
&lt;li&gt;Leveraging the Schema Registry to enforce checks while producing to and consuming messages from a topic.&lt;/li&gt;
&lt;li&gt;Kafka headers and how that can be leveraged in the pipeline.&lt;/li&gt;
&lt;li&gt;Producing data with multiple schemas versus single schema to a single topic.&lt;/li&gt;
&lt;li&gt;Importance of metadata (Data about data).&lt;/li&gt;
&lt;li&gt;Kafka Connect and use cases around it.&lt;/li&gt;
&lt;li&gt;A little bit about Kafka Streams.&lt;/li&gt;
&lt;/ul&gt;
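
&lt;p&gt;As a sketch of the Schema Registry point above, a producer configuration that enforces registered schemas might look like this (hosts, ports, and the registration setting are example values):&lt;/p&gt;

```properties
# Example values; adjust to your environment
bootstrap.servers=localhost:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
schema.registry.url=http://localhost:8081
# Fail fast instead of silently registering new or incompatible schemas
auto.register.schemas=false
```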

&lt;h2&gt;
  
  
  Spark
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Learnt Scala (just enough for Spark) and how to build and execute jobs by leveraging Livy.&lt;/li&gt;
&lt;li&gt;Understood partitioning, repartitioning, and data shuffles.&lt;/li&gt;
&lt;li&gt;Consuming messages from Kafka in batch mode.&lt;/li&gt;
&lt;li&gt;Learnt a few things about executors, executor cores, and memory management in Spark.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://anilkulkarni.com/2019/06/how-to-view-spark-history-logs-locally/" rel="noopener noreferrer"&gt;Spark History&lt;/a&gt;, Zeppelin notebooks for Spark.&lt;/li&gt;
&lt;li&gt;Unit testing in Spark and its importance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Presto/Hive
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Presto is essentially SQL, but with all the processing done in memory.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://anilkulkarni.com/2019/03/create-udf-presto-1/" rel="noopener noreferrer"&gt;Writing UDF for Presto&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://gist.github.com/anilkulkarni87/2e234f3fcae39ef89ed4dbe15c4dd72b" rel="noopener noreferrer"&gt;Hive QL&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Airflow
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Writing workflow as code and understanding of how Airflow works.&lt;/li&gt;
&lt;li&gt;Creating custom Operators.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Docker and Kubernetes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;How Docker can keep infrastructure costs at $0 during development and unit testing.&lt;/li&gt;
&lt;li&gt;Kubernetes is still a partially known area to me; I learnt about accessing pods, managing secrets, and YAML files.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Design Patterns – Java
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;From reading about design patterns to actually seeing them being used.&lt;/li&gt;
&lt;li&gt;Importance of Unit Testing and Code Reviews.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Domains
&lt;/h2&gt;

&lt;p&gt;I have listed some of the domains that I have worked on. If we cannot traverse across domains for a specific customer, then we are hardly making use of the richness of the data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Customer&lt;/strong&gt; – I was able to understand the different challenges a company will have in dealing with Customer data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sales&lt;/strong&gt; – All the different attributes related to a transaction, and why it’s critical to have them available in the system in near real time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Preference&lt;/strong&gt; – For people who work in marketing, customer preferences play a great role, and with more laws around them it’s important to keep them updated and available to the marketing teams.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Loyalty&lt;/strong&gt; – The success of this program can only be measured when the company can leverage this data to its benefit.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a brain dump of the previous assignment I worked on, but it will also serve as a reminder of all the learnings as well as the unknowns. I also plan to write posts on some of the topics listed here, as I truly believe that “&lt;strong&gt;To teach is to learn twice&lt;/strong&gt;”.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://anilkulkarni.com/2019/12/learnings-from-my-last-assignment/" rel="noopener noreferrer"&gt;Learnings from my last assignment&lt;/a&gt; appeared first on &lt;a href="https://anilkulkarni.com" rel="noopener noreferrer"&gt;Anil Kulkarni | Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>learnings</category>
      <category>avro</category>
      <category>kafka</category>
      <category>presto</category>
    </item>
    <item>
      <title>How to Create a UDF in Presto-1</title>
      <dc:creator>anilkulkarni87</dc:creator>
      <pubDate>Thu, 21 Mar 2019 08:33:25 +0000</pubDate>
      <link>https://dev.to/anilkulkarni87/how-to-create-a-udf-in-presto-1-4oaf</link>
      <guid>https://dev.to/anilkulkarni87/how-to-create-a-udf-in-presto-1-4oaf</guid>
      <description>&lt;p&gt;This article is going to talk about how to create a UDF in Presto. When I started looking into creating a UDF, all the resources I found spoke about a Maven build, but the one I created uses a Gradle build. This article will cover the below: Project… &lt;a href="https://anilkulkarni.com/2019/03/create-udf-presto-1/"&gt;Read More »&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://anilkulkarni.com/2019/03/create-udf-presto-1/"&gt;How to Create a UDF in Presto-1&lt;/a&gt; appeared first on &lt;a href="https://anilkulkarni.com"&gt;Anil Kulkarni | Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>tech</category>
      <category>gradle</category>
      <category>java</category>
      <category>presto</category>
    </item>
  </channel>
</rss>
