<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jon Baker</title>
    <description>The latest articles on DEV Community by Jon Baker (@itech88).</description>
    <link>https://dev.to/itech88</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1126475%2F2fbf0e46-f178-4db1-994a-1bf37cc44e72.png</url>
      <title>DEV Community: Jon Baker</title>
      <link>https://dev.to/itech88</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/itech88"/>
    <language>en</language>
    <item>
      <title>Recruiter reached out!</title>
      <dc:creator>Jon Baker</dc:creator>
      <pubDate>Thu, 30 Nov 2023 22:41:37 +0000</pubDate>
      <link>https://dev.to/itech88/recruiter-reached-out-6d9</link>
      <guid>https://dev.to/itech88/recruiter-reached-out-6d9</guid>
      <description>&lt;p&gt;Yesterday I had my very first LinkedIn InMail message from a recruiter for a senior data engineer position. Earlier this month I just paid for a resume revamp to focus on being data centric so I could apply to data engineering positions in the future. That being said, I was using Easy Apply to 'spam' out my resume to as many data analyst/engineer positions as I thought I had somewhat of a match. Though I did get rejections from most of them or no response at all in this tough market. So this one recruiter that noticed I took the time and effort to update my LinkedIn profile to look appealing is a good feeling. &lt;/p&gt;

&lt;p&gt;I'm going to pore over the JD I was given and try to be able to speak, even in passing, about everything listed on it. I want to use my ability to pick up cutting-edge technology quickly to show companies I can learn, and that I'm a malleable, forward-thinking data engineer with room to grow technically and as a problem solver. &lt;/p&gt;

&lt;p&gt;It's exciting just to be noticed!&lt;/p&gt;

</description>
      <category>recruiter</category>
      <category>beginners</category>
      <category>career</category>
      <category>linkedin</category>
    </item>
    <item>
      <title>Containerizing my Pipeline Service</title>
      <dc:creator>Jon Baker</dc:creator>
      <pubDate>Thu, 16 Nov 2023 19:38:06 +0000</pubDate>
      <link>https://dev.to/itech88/containerizing-my-pipeline-service-54a2</link>
      <guid>https://dev.to/itech88/containerizing-my-pipeline-service-54a2</guid>
      <description>&lt;p&gt;Been a few months and I have not been good at blogging and documenting. Recently I have been more into taking lessons in DataWars.io (free version) and DataCamp on the Data Engineer track which I paid $120 for a year to complete. &lt;/p&gt;

&lt;p&gt;But I have new files to load for the optometry pipeline, and I recently containerized the Postgres instance and the Python/pandas pipeline. The last piece of marrying this data pipeline together is getting the data to ingest into the Postgres DB. All my logs are showing that it's all good! I've been talking to GPT about it for a long time now and still can't figure it out. Pretty frustrating, since I'm able to query things that already exist in the Postgres instance (which were just copied over from local, where this whole thing was working).&lt;/p&gt;

&lt;p&gt;So it works on my local machine but not in Docker. I'll keep trying. In other news, I'm working on a basic net worth app; I'll probably build it as an offline desktop version in Python, or with Flask since I did a Flask tutorial. That's a fun personal project, but I have to take a break from the Docker work. Cool stuff, but I'm frustrated that I can't figure it out, and neither can GPT!&lt;/p&gt;
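
&lt;p&gt;One thing worth checking for the "works locally but not in Docker" symptom: inside a Compose network, 'localhost' refers to the pipeline container itself, not the Postgres container, so the connection string has to use the Postgres service name. A minimal sketch of the load step, where the service name "db", the credentials, and the "optometry" database name are all hypothetical placeholders:&lt;/p&gt;

```python
# Sketch: loading a pandas DataFrame into Postgres from inside a Compose network.
# Assumptions: a Compose service named "db" runs Postgres; credentials and the
# database name are made-up placeholders, not the author's real config.
import os

import pandas as pd
from sqlalchemy import create_engine


def pg_url(host=None):
    # Inside the pipeline container, "localhost" is the container itself;
    # the host must be the Compose service name (e.g. "db").
    host = host or os.environ.get("PG_HOST", "db")
    return f"postgresql+psycopg2://postgres:postgres@{host}:5432/optometry"


def load_frame(df, table):
    # create_engine is lazy: nothing connects until to_sql runs
    engine = create_engine(pg_url())
    df.to_sql(table, engine, if_exists="append", index=False)
```

&lt;p&gt;Running the same script locally still works, because outside Docker you can set PG_HOST=localhost; inside the container the default service name takes over.&lt;/p&gt;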

</description>
      <category>docker</category>
      <category>data</category>
      <category>pipeline</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Building my first Data Visualization Project</title>
      <dc:creator>Jon Baker</dc:creator>
      <pubDate>Tue, 12 Sep 2023 15:30:33 +0000</pubDate>
      <link>https://dev.to/itech88/building-my-first-data-visualization-project-3719</link>
      <guid>https://dev.to/itech88/building-my-first-data-visualization-project-3719</guid>
      <description>&lt;p&gt;I'm staring my first real personal project. It's going to be exploratory data analysis and data visualization of active NBA players for the 2023-2024 season. I'm getting more into graphs and charting in python, so I really hope to have some beautiful visualizations that do the storytelling. My mentor has told me that visualization is really a nebulous skill that many roles can have, so it's not a bad thing to be able to understand it and create it. &lt;/p&gt;

&lt;p&gt;I have also signed up for DataCamp, since it has an associate certificate in Data Engineering. I took the free week, and now I'm paying $149 for the one-year subscription. If I were a complete beginner it wouldn't be the best option, but since I've been coding for 5 months I feel like I've made progress. There is a lot of fill-in-the-blank where a complete novice wouldn't know what the surrounding code does, but thankfully I think critically about the rest of the code when I fill in the snippets.&lt;/p&gt;

</description>
      <category>visualization</category>
      <category>beginners</category>
      <category>data</category>
      <category>datacamp</category>
    </item>
    <item>
      <title>Learning to code with AI help</title>
      <dc:creator>Jon Baker</dc:creator>
      <pubDate>Thu, 17 Aug 2023 16:09:02 +0000</pubDate>
      <link>https://dev.to/itech88/learning-to-code-with-ai-help-4592</link>
      <guid>https://dev.to/itech88/learning-to-code-with-ai-help-4592</guid>
      <description>&lt;p&gt;Aw man these are going to really raw, unedited thoughts right now. I've been learning to code since late April 2023, only 4 months. I have a busy family life and a full time job but I don't feel my beginner skills are advancing to where I want them to be. I'm using GPT to generate practice problems in python and pandas since the beginning, and it is very helpful to explain what I'm stuck at or why my code sucks. I also have GitHub copilot, which I feel is not having me problem solve directly, I think that's where I'm feeling the most lag in development. So in my data pipeline I am using copilot to help generate the code if I just provide a comment. It's actually super sick AI technology, and does a pretty good job! Now it doesn't understand the macro of what I'm trying to do, but if you give it specific instructions it will read your script and really try to execute what your comment wanted. But when I find myself going back to beginner problems, my brain can't get through them! Or I feel too reliant on looking it up. I know all programmers do Google their problems, but copilot just generating some code that I never dreamed of, I don't think this is the best learning for me. I'm going to turn it off for now and try to get that muscle memory to be better at problem solving when prompts come up.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Ice Cream and Logging</title>
      <dc:creator>Jon Baker</dc:creator>
      <pubDate>Fri, 11 Aug 2023 05:50:54 +0000</pubDate>
      <link>https://dev.to/itech88/ice-cream-and-logging-21fb</link>
      <guid>https://dev.to/itech88/ice-cream-and-logging-21fb</guid>
      <description>&lt;p&gt;I have learned about the logging module and icecream module. They are SUPER helpful especially for a distributed system and microfunctions like mine in this datapipeline. No more random prints to see the workflow, ic() is a great way to see the variable, the function, and the value at a specific time of execution. I am also utilizing a robust logging system to see how the transformations are going, and what was transformed, not found, etc. I also added a decorator function to time each microfunction. These stats I'm getting are going to be very useful when I'm able to scale and push larger CSVs and data into the pipeline. So cool.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>debugging</category>
      <category>python</category>
      <category>datapipeline</category>
    </item>
  </channel>
</rss>
