<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: JohnSheehan8</title>
    <description>The latest articles on DEV Community by JohnSheehan8 (@johnsheehan8).</description>
    <link>https://dev.to/johnsheehan8</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F867077%2Fb2e1c89d-1be0-4676-9462-12dc78573d06.jpg</url>
      <title>DEV Community: JohnSheehan8</title>
      <link>https://dev.to/johnsheehan8</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/johnsheehan8"/>
    <language>en</language>
    <item>
      <title>How we can use data science to make our nations better</title>
      <dc:creator>JohnSheehan8</dc:creator>
      <pubDate>Wed, 27 Jul 2022 02:34:00 +0000</pubDate>
      <link>https://dev.to/johnsheehan8/how-we-can-use-data-science-to-make-our-nations-better-3mld</link>
      <guid>https://dev.to/johnsheehan8/how-we-can-use-data-science-to-make-our-nations-better-3mld</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;As I am heading toward the end of my time in my computer science boot camp, I realized I needed to see some real-life examples of work I could do in the future. The easiest way to do that was to look at big projects that have been undertaken in the past.&lt;/p&gt;

&lt;p&gt;I wanted to find a research paper from within the last few years, since computer science is ever evolving and recent work holds more weight for me now. The paper I chose, published December 2, 2021, is called "How We Determined Crime Prediction Software Disproportionately Targeted Low-Income, Black, and Latino Neighborhoods". The title instantly hooked me because these are things I have long believed were happening but had trouble finding concrete evidence to present to other people. The final dataset the authors used for analysis contained more than 5.9 million predictions.&lt;/p&gt;

&lt;p&gt;Now, as someone with a decent amount of knowledge in this field, I was able to fully understand the code and techniques, but how would I explain this to someone who is not as tech-savvy?&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;To begin, what exactly are the writers of this paper looking at to make their claims, and how do we know it is correct? According to the authors, they obtained PredPol crime prediction data. PredPol was one of the first data-analytics tools used by police and is currently among the most popular, and this data had never before been released by PredPol. Their partner Gizmodo found it exposed on the open web (the portal is now secured) and downloaded more than seven million PredPol crime predictions made between 2018 and 2021. After securing and categorizing the data, they identified a number of metrics recorded around police interactions, such as the number of arrests, uses of force, and the amount of police patrols, and then compared those metrics across different ethnicities and income ranges.&lt;/p&gt;
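&lt;p&gt;To make the comparison concrete, here is a minimal sketch of the kind of analysis described above, with entirely made-up numbers: rank block groups by how many predictions PredPol sent them, then compare the demographic makeup of the most- and least-targeted halves. The real study joined millions of leaked predictions to census data; none of that data appears here.&lt;/p&gt;

```python
# Hypothetical sketch of the study's comparison. Every number below is
# invented for illustration -- the real analysis used leaked PredPol
# prediction counts joined to census block-group demographics.

blocks = [
    # (predictions_received, share_of_residents_black_or_latino)
    (12000, 0.81), (9500, 0.74), (8800, 0.69),   # heavily targeted blocks
    (40, 0.22), (15, 0.12), (0, 0.08),           # rarely targeted blocks
]

# Rank block groups by prediction count, then split into halves.
ranked = sorted(blocks, key=lambda b: b[0], reverse=True)
half = len(ranked) // 2
most, least = ranked[:half], ranked[half:]

def avg_share(group):
    # Mean demographic share across the block groups in this bucket.
    return sum(share for _, share in group) / len(group)

print(f"most-targeted avg share:  {avg_share(most):.2f}")
print(f"least-targeted avg share: {avg_share(least):.2f}")
```

&lt;p&gt;With these toy numbers the most-targeted bucket averages a far higher Black and Latino share than the least-targeted one, which is the shape of the disparity the charts below report from the real data.&lt;/p&gt;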

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhq9gv9teitsjqvhzo9qo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhq9gv9teitsjqvhzo9qo.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is one of their final models, and it clearly shows that the blocks at the low end of the spectrum of PredPol targeting were in the vast majority white, while the most heavily targeted blocks skewed considerably toward Black and Latino groups. This shows in real data what a lot of people in America have known for a long time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5u4hf5fyrw55mgedhjv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5u4hf5fyrw55mgedhjv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The creators of this article were then able to show not only that Black and Latino residents made up the largest share of the most heavily targeted block groups, but also that the proportion of residents who were Black or Latino rose drastically with how heavily a block was targeted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z2omsn8vhw22kp103ik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z2omsn8vhw22kp103ik.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another model they used to further cement their point drew on data on arrest rates sorted by ethnicity. In most counties within the dataset, Black people are heavily over-represented in arrests: in certain areas of the U.S. they are over twice as likely to be arrested when officers respond to the areas the algorithm has told them to patrol.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2For0t4j61yf00q4hbexgx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2For0t4j61yf00q4hbexgx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, it was not only Black and Latino groups that were targeted by this system, but also areas with very low average income. There is a drastic difference between the most-targeted and least-targeted blocks of households. Blocks with a wide mix of household incomes often see similar amounts of patrols regardless of circumstance, but blocks with a disproportionately large number of poor residents see a sharp jump in targeting from PredPol.&lt;/p&gt;

&lt;p&gt;To cement this point further, they looked into data on public housing and how it factored into the predictions targeting those areas. Some findings of this part of the study were that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Jacksonville, 63 percent of public housing was located in the block groups PredPol targeted the most.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In Elgin, 58 percent of public housing was located in the block groups PredPol targeted the most.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In Portage; Livermore, Calif.; Cocoa, Fla.; South Jordan, Utah; Gloucester, N.J.; and Piscataway, every single public housing facility was located in block groups that were targeted the most.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can clearly see, areas where people lack the funds to buy their own property were heavily over-represented in the PredPol predictions, so they were heavily targeted, just as Black and Latino residents were heavily targeted by this system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;Obviously, a lot of what is discussed in this paper is seen as common sense by certain groups, but that is only because those groups may have had first-hand experience of these troubles with police. Being able to take data from PredPol, itself a data-collection company, shows how much data reveals truths: since the data was eventually leaked, we can point to a real company's records to support what a lot of people already suspected about policing.&lt;/p&gt;

&lt;p&gt;This paper was really informative to me because I've always been interested in examining very broad problems in our country today, and learning how to take data and present it plainly to others will be a huge boost for the government work I want to do in the future.&lt;/p&gt;

&lt;p&gt;If I were to explain this paper in plain words to a business stakeholder with interests in human resources, or to a politician who wants to see change in our country like I do, it would be extraordinary for pointing to facts when deciding policies to help others.&lt;br&gt;
In conclusion, I would recommend reading the whole paper, as it is too much to explain in a short blog, but I hope I was able to represent at least some of its points well.&lt;/p&gt;

&lt;p&gt;The paper I referenced: &lt;a href="https://themarkup.org/show-your-work/2021/12/02/how-we-determined-crime-prediction-software-disproportionately-targeted-low-income-black-and-latino-neighborhoods#2021-predpol-methodology_race-percentile" rel="noopener noreferrer"&gt;https://themarkup.org/show-your-work/2021/12/02/how-we-determined-crime-prediction-software-disproportionately-targeted-low-income-black-and-latino-neighborhoods#2021-predpol-methodology_race-percentile&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>beginners</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Learning more Keras</title>
      <dc:creator>JohnSheehan8</dc:creator>
      <pubDate>Tue, 05 Jul 2022 04:18:49 +0000</pubDate>
      <link>https://dev.to/johnsheehan8/learning-more-keras-2g3i</link>
      <guid>https://dev.to/johnsheehan8/learning-more-keras-2g3i</guid>
      <description>&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;Expanding on the blog I wrote last week, I wanted to talk more about how you can explore and learn Keras. I found a very simple dataset to explore Keras with and followed a few tutorials to get my feet wet with this sort of programming before delving further into machine learning. To start off, I imported the very basics needed to run a Keras program. The dataset I was working with was the Pima Indians diabetes dataset; I chose it because it had a clear set of categories that are short and simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Number of times pregnant&lt;/li&gt;
&lt;li&gt;Plasma glucose concentration at 2 hours in an oral glucose tolerance test&lt;/li&gt;
&lt;li&gt;Diastolic blood pressure (mm Hg)&lt;/li&gt;
&lt;li&gt;Triceps skin fold thickness (mm)&lt;/li&gt;
&lt;li&gt;2-Hour serum insulin (mu U/ml)&lt;/li&gt;
&lt;li&gt;Body mass index (weight in kg/(height in m)^2)&lt;/li&gt;
&lt;li&gt;Diabetes pedigree function&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Age (years)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Class variable (0 or 1)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the y variable whether a patient had diabetes or not&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Imports
&lt;/h1&gt;

&lt;p&gt;First, I imported the basic libraries I needed:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;from numpy import loadtxt&lt;br&gt;
from tensorflow.keras.models import Sequential&lt;br&gt;
from tensorflow.keras.layers import Dense&lt;/code&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Load the dataset
&lt;/h1&gt;

&lt;p&gt;I then loaded the dataset I was working with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')&lt;/code&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Split into input (X) and output (y) variables
&lt;/h1&gt;

&lt;p&gt;&lt;code&gt;X = dataset[:,0:8]&lt;br&gt;
y = dataset[:,8]&lt;/code&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Define the keras model
&lt;/h1&gt;

&lt;p&gt;The first hidden layer has 12 nodes and uses the ReLU (Rectified Linear Unit) activation function.&lt;br&gt;
The second hidden layer has 8 nodes and also uses ReLU.&lt;br&gt;
The output layer has one node and uses the sigmoid activation function.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;model = Sequential()&lt;br&gt;
model.add(Dense(12, input_shape=(8,), activation='relu'))&lt;br&gt;
model.add(Dense(8, activation='relu'))&lt;br&gt;
model.add(Dense(1, activation='sigmoid'))&lt;/code&gt;&lt;/p&gt;
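&lt;p&gt;As a quick sanity check on a layer stack like this, you can count the trainable parameters by hand: each Dense layer has one weight per input-unit pair plus one bias per unit. The arithmetic below should match the total that model.summary() would report for the model above.&lt;/p&gt;

```python
# Parameter count for the three Dense layers defined above:
# each layer has (n_inputs * n_units) weights plus n_units biases.
def dense_params(n_inputs, n_units):
    return n_inputs * n_units + n_units

layer1 = dense_params(8, 12)   # 8 features into 12 nodes -> 108
layer2 = dense_params(12, 8)   # 12 nodes into 8 nodes    -> 104
layer3 = dense_params(8, 1)    # 8 nodes into 1 output    -> 9
print(layer1, layer2, layer3, layer1 + layer2 + layer3)  # 108 104 9 221
```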

&lt;h1&gt;
  
  
  Compile the keras model
&lt;/h1&gt;

&lt;p&gt;loss = binary_crossentropy&lt;br&gt;
     * Computes the cross-entropy loss between true labels and predicted labels.&lt;/p&gt;

&lt;p&gt;optimizer = adam&lt;br&gt;
     * Adam is a popular variant of gradient descent because it tunes itself automatically and gives good results across a wide range of problems.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;other metrics to look at: &lt;a href="https://keras.io/api/metrics/"&gt;https://keras.io/api/metrics/&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Fit the keras model on the dataset
&lt;/h1&gt;

&lt;p&gt;epochs = the number of complete passes the training process makes through the entire dataset&lt;br&gt;
batch_size = the number of samples propagated through the network before the weights are updated&lt;/p&gt;

&lt;p&gt;&lt;code&gt;model.fit(X, y, epochs=150, batch_size=10)&lt;/code&gt;&lt;/p&gt;
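&lt;p&gt;Those two settings determine how much training actually happens. With the 768 rows in this dataset, each epoch runs the data in batches of 10 (Keras includes the smaller final batch), and each batch triggers one weight update:&lt;/p&gt;

```python
import math

# How many weight updates does fit(X, y, epochs=150, batch_size=10) perform
# on the 768-row Pima dataset? One update per batch, per epoch.
rows, batch_size, epochs = 768, 10, 150
batches_per_epoch = math.ceil(rows / batch_size)  # last batch has only 8 rows
total_updates = batches_per_epoch * epochs
print(batches_per_epoch, total_updates)  # 77 11550
```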

&lt;h1&gt;
  
  
  Evaluate the keras model
&lt;/h1&gt;

&lt;p&gt;You can evaluate your model on your training dataset using the evaluate() function. Note that evaluate() returns both the loss and the accuracy, so we unpack the pair and keep only the accuracy:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;_, accuracy = model.evaluate(X, y)&lt;br&gt;
print('Accuracy: %.2f' % (accuracy*100))&lt;/code&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Results
&lt;/h1&gt;

&lt;p&gt;768/768 [==============================] - 0s 63us/step - loss: 0.4817 - acc: 0.7708&lt;br&gt;
Epoch 147/150&lt;br&gt;
768/768 [==============================] - 0s 63us/step - loss: 0.4764 - acc: 0.7747&lt;br&gt;
Epoch 148/150&lt;br&gt;
768/768 [==============================] - 0s 63us/step - loss: 0.4737 - acc: 0.7682&lt;br&gt;
Epoch 149/150&lt;br&gt;
768/768 [==============================] - 0s 64us/step - loss: 0.4730 - acc: 0.7747&lt;br&gt;
Epoch 150/150&lt;br&gt;
768/768 [==============================] - 0s 63us/step - loss: 0.4754 - acc: 0.7799&lt;br&gt;
768/768 [==============================] - 0s 38us/step&lt;br&gt;
Accuracy: 76.56&lt;/p&gt;

&lt;h1&gt;
  
  
  Predictions
&lt;/h1&gt;

&lt;p&gt;To generate predictions from the trained model, we can use the predict() function, which returns the sigmoid output (a probability between 0 and 1) for each row:&lt;br&gt;
&lt;code&gt;predictions = model.predict(X)&lt;/code&gt;&lt;br&gt;
Rounding each probability turns it into a 0/1 class label:&lt;br&gt;
&lt;code&gt;rounded = [round(x[0]) for x in predictions]&lt;/code&gt;&lt;/p&gt;
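&lt;p&gt;Here is a toy version of that rounding step with made-up probabilities, so you can see what the list comprehension does. In the real run these values come from model.predict(X), which returns one single-element row per sample:&lt;/p&gt;

```python
# Toy example of turning sigmoid outputs into class labels. These
# probabilities are invented -- in the blog they come from model.predict(X),
# which returns one [probability] row per input sample.
probabilities = [[0.82], [0.31], [0.66], [0.05]]
rounded = [round(p[0]) for p in probabilities]
print(rounded)  # [1, 0, 1, 0]
```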

&lt;h1&gt;
  
  
  Prediction Results
&lt;/h1&gt;

&lt;p&gt;The rounded prediction for each of the first five rows of the dataset, against the actual class label:&lt;br&gt;
Row 1 =&amp;gt; 0 (expected 1)&lt;br&gt;
Row 2 =&amp;gt; 0 (expected 0)&lt;br&gt;
Row 3 =&amp;gt; 1 (expected 1)&lt;br&gt;
Row 4 =&amp;gt; 0 (expected 0)&lt;br&gt;
Row 5 =&amp;gt; 1 (expected 1)&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In conclusion, I am still learning and not yet proficient in Keras, and I don't know for certain that all of my data was perfect, but I wanted to write a rough tutorial on feeding data into Keras to show how easy it is to create models and predictions with this machine learning library. The main takeaway I want people to get from this short tutorial is that Keras simplifies a lot of the convoluted coding, even within TensorFlow, and allows for a much easier time turning data into models.&lt;/p&gt;

</description>
      <category>keras</category>
      <category>tensorflow</category>
      <category>python</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Learning Deep Learning</title>
      <dc:creator>JohnSheehan8</dc:creator>
      <pubDate>Tue, 14 Jun 2022 06:21:40 +0000</pubDate>
      <link>https://dev.to/johnsheehan8/learning-deep-learning-33hi</link>
      <guid>https://dev.to/johnsheehan8/learning-deep-learning-33hi</guid>
      <description>&lt;p&gt;During my data science learning journey i have been really interested in ways to streamline complicated code and make tasks that seems extremely hard more basic and easy for the non-tech savvy people to understand. To find those solutions i looked into different data science libraries and tried learning what they can help with. The library that really caught my eye ate first was TensorFlow, this library was made to handle extremely high performance computations, optimize machine learning programs and  have tools for really fancy graph visualizations, of course the biggest thing i noticed was that it was backed by google.&lt;br&gt;
Since TensorFlow is an open-source machine learning program its purpose is to give people the freedom to create really complex projects with more organized methods.&lt;/p&gt;

&lt;p&gt;TensorFlow is becoming a commonly used library especially in the big tech companies as it is the base for a lot of machine learning that we encounter. &lt;/p&gt;

&lt;p&gt;As I researched further into TensorFlow, I found that it is extended by a deep learning API called Keras, which adds a further level of depth within machine learning. Keras is the solution to my initial search: its purpose is to streamline TensorFlow into very digestible language for people who are not yet as adept at coding and data science. It produces a wide variety of errors that clearly spell out issues with the user's code, it has systems within the library that can learn along with you, and its functions are extremely condensed and easy to use. Keras has become so widely popular that it has over 250k users, including at companies such as Apple, YouTube, and NASA.&lt;/p&gt;

&lt;p&gt;At the end of this blog I added links to some of the really neat inventions that have recently come from users building projects with Keras, as well as good tutorials on how to properly get into Keras that I found fascinating. Two projects caught my eye while researching what you can do with this library: someone created an AI that detects whether people in videos or pictures are wearing masks and can filter through thousands of photos at once, and, in a really good innovation within the medical field, data scientists created a system for detecting which specific heart condition a patient might be suffering from and which conditions occur most often.&lt;/p&gt;

&lt;p&gt;Both Keras and TensorFlow are extremely important innovations in machine learning, and a lot of the tech world has started to adopt these libraries as tools for their data. As I learn more about machine learning over the next couple of weeks, I may dive deeper into Keras and what it can specifically do for me.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZI80yBh---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ln9498os6zfd40scl74u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZI80yBh---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ln9498os6zfd40scl74u.jpg" alt="Image description" width="822" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/6_2hzRopPbQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://intellipaat.com/blog/keras-tutorial/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s--plO8Rz6g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://intellipaat.com/blog/wp-content/uploads/2020/06/Keras-tutorial_Big.jpg" height="280" class="m-0" width="822"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://intellipaat.com/blog/keras-tutorial/" rel="noopener noreferrer" class="c-link"&gt;
          Keras Tutorial - Beginners Guide to Deep Learning in Python
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Keras Tutorial for Beginners: This learning guide provides a list of topics like what is Keras, its installation, layers, deep learning with Keras in python, and applications.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://res.cloudinary.com/practicaldev/image/fetch/s--x83fqQkw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://intellipaat.com/blog/wp-content/themes/intellipaat-blog-new/images/favicon1.png" width="65" height="82"&gt;
        intellipaat.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Neat Keras Projects&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Ax6P93r32KU"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.kaggle.com/code/mychen76/heart-disease-classification-with-keras/notebookwith-keras/notebook"&gt;https://www.kaggle.com/code/mychen76/heart-disease-classification-with-keras/notebookwith-keras/notebook&lt;/a&gt; - The code for the heart disease classification discovery&lt;/p&gt;

</description>
      <category>keras</category>
      <category>tensorflow</category>
      <category>python</category>
      <category>programming</category>
    </item>
    <item>
      <title>My Experience With Data Science so far</title>
      <dc:creator>JohnSheehan8</dc:creator>
      <pubDate>Tue, 24 May 2022 08:54:02 +0000</pubDate>
      <link>https://dev.to/johnsheehan8/my-experience-with-data-science-so-far-1pfd</link>
      <guid>https://dev.to/johnsheehan8/my-experience-with-data-science-so-far-1pfd</guid>
      <description>&lt;p&gt;When beginning my educational career after high school i got really into politics, this was around the 2016 election and that sort of field really interested me. I was able to attend the University of Maryland getting a government and politics bachelors degree and really enjoy my time there. This is where an issue arose, i graduated right as the Coronavirus outbreak began and most of the jobs in my field i was looking into were no longer hiring due to restrictions, and jobs in that field are not in high demand in the first place. Thus began my year and a half spiral of mental health problems and frustrations with the fact that it was so hard to move on with my life.&lt;/p&gt;

&lt;p&gt;Eventually I started looking into master's programs, and one that caught my eye was data science within government at UMD. My initial thought was that I might as well get a full and deeper understanding of data science and not pigeonhole myself into just government work. My brother attended Flatiron two years ago in the software engineering program and loved it, so it was a no-brainer for me to attend Flatiron if I wanted to learn data science.&lt;/p&gt;

&lt;p&gt;What I am most looking forward to learning in this course is AI programming and how I can set up a system to analyze data for any interest I am pursuing, whether that be politics, video games, sports, or anything else that catches my eye.&lt;/p&gt;

&lt;p&gt;My hope is that by taking this course at Flatiron I will become a better person, find a job that truly interests me, and hopefully become a good data scientist.&lt;/p&gt;

</description>
      <category>python</category>
    </item>
  </channel>
</rss>
