<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alin Rauta</title>
    <description>The latest articles on DEV Community by Alin Rauta (@rautaalin).</description>
    <link>https://dev.to/rautaalin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F312693%2Fec82d04d-501d-4a1e-9c99-ba5f6cf93257.jpg</url>
      <title>DEV Community: Alin Rauta</title>
      <link>https://dev.to/rautaalin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rautaalin"/>
    <language>en</language>
    <item>
      <title>How to get rid of FOMO as a self-taught programmer</title>
      <dc:creator>Alin Rauta</dc:creator>
      <pubDate>Wed, 19 Feb 2020 14:41:13 +0000</pubDate>
      <link>https://dev.to/rautaalin/how-to-get-rid-of-fomo-as-a-self-taught-programmer-36np</link>
      <guid>https://dev.to/rautaalin/how-to-get-rid-of-fomo-as-a-self-taught-programmer-36np</guid>
      <description>&lt;p&gt;This article was originally published back in 2016 and it's based on my own experience as a self-taught programmer.&lt;/p&gt;

&lt;p&gt;First of all, let’s see what the heck FOMO is. It stands for “fear of missing out”, and it manifested itself in my learning process. For me, FOMO was that thing I did not know I had until I read about it. It was more like an epiphany.&lt;/p&gt;

&lt;p&gt;Being a self-starter in programming, I was very easy to influence in the beginning regarding the “what to learn” part. I started by learning the basics of HTML, CSS and JavaScript, but at the same time I wanted to be up to date with all the latest programming trends.&lt;/p&gt;

&lt;p&gt;So, reading all kinds of blog posts about cool, hot frameworks I had never heard of felt overwhelming, like I had so much to catch up on. The more I read, the more I would find out about a new shiny framework, the more I realised how little I knew, and the more I felt compelled to read about it. It was a vicious cycle.&lt;/p&gt;

&lt;p&gt;Instead of programming as many hours as I could, I became a professional framework and resource “bookmarker”, which is the wrong way to do it.&lt;/p&gt;

&lt;p&gt;When you are a beginner, the world of programming looks like a very tall mountain to climb, with all those frameworks and resources available, and it can get to you. So here are 5 suggestions that can help you get rid of FOMO:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Learn the underlying programming language/technology first&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In my opinion, the best way to start is by learning the core elements of the underlying programming language of the framework.&lt;/p&gt;

&lt;p&gt;Learn JavaScript first and only then move on to AngularJS, React or another hot front-end JS framework. If you learn a framework without the programming language it is based upon, you will have a harder time learning it, and if that framework suddenly goes out of fashion you will still have to learn the underlying programming language anyway.&lt;/p&gt;

&lt;p&gt;So, frameworks go out of fashion quicker than programming languages, which is not obvious to a beginner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Give yourself a break, you’ll have all the time in the world to learn all those hyped frameworks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It’s OK if in one week you learned just the basics of CSS and you still don’t know how to write CSS the Sass way, and you haven’t used Font Awesome or animate.css. Oh, and let’s not forget about hover.css.&lt;/p&gt;

&lt;p&gt;Take it easy and give yourself a pat on the back for what you managed to learn; it’s still better than nothing, right? Try to celebrate your progress no matter how small it seems to you, because after all, Rome wasn’t built in a day. It takes time and consistency to learn programming, so don’t forget that everyone has their share of unproductive days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. What is cool today won’t necessarily be cool tomorrow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;OK, let’s say you already know the basics of the programming language, so now you want to learn a shiny hot framework. But there are so many of them, so which one should you choose?&lt;/p&gt;

&lt;p&gt;I’m afraid there is no right answer here — you’ll have to figure it out by yourself, but you could use some tips in doing so. I think it’s a good idea to go open source, because if the community finds value in a framework, it will thrive and survive.&lt;/p&gt;

&lt;p&gt;Also, you can try an MVC-oriented framework (google it) like AngularJS and something that gives you more freedom like React. You’re just at the beginning, so go out there and explore. Don’t try to specialise yet; you need to have a bigger picture before you can do that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Make a plan and stick to it (don’t take it literally, though)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A better way to learn is to make a plan, so that you know specifically what you want to learn, and stick to the DAMN plan! Just because you read today about another-bs-framework.js doesn’t mean you’re going to spend 4 hours trying to learn it. Otherwise you will know a bit from a lot of areas, which mostly equates to knowing nothing.&lt;/p&gt;

&lt;p&gt;But it’s dangerous to take this step too literally. I mean, you’re a beginner, so the initial plan might suck and you will find that out along the way, so sometimes you have to adapt while you’re learning. And that’s why you need to take into account the next step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Measure your progress&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Someone said that if you can’t measure it, then it doesn’t exist. So, this is the wrong way to do it: “I am planning to learn CSS this week”.&lt;/p&gt;

&lt;p&gt;The better way: “I am planning to learn the CSS selectors on Monday, CSS classes on Tuesday, …so on and so forth, you get it.”&lt;/p&gt;

&lt;p&gt;Be specific in your learning goals, because it will help you dig deeper into what you’re learning, see the ins and outs, and get a better big picture of it.&lt;/p&gt;

&lt;p&gt;To be sure you use all the stuff you (thought you) learned, make a fun project, so that you can test yourself and get more practice.&lt;/p&gt;

&lt;p&gt;You can even give yourself a reward if you stick to the plan and get that final project done.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;If you liked this article and want to see more of these, then follow me on &lt;a href="https://twitter.com/RautaAlin"&gt;twitter&lt;/a&gt; and on &lt;a href="https://dev.to/rautaalin"&gt;dev.to&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>codenewbie</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>The Myths Of Learning To Code</title>
      <dc:creator>Alin Rauta</dc:creator>
      <pubDate>Fri, 14 Feb 2020 16:19:13 +0000</pubDate>
      <link>https://dev.to/rautaalin/the-myths-of-learning-to-code-59f0</link>
      <guid>https://dev.to/rautaalin/the-myths-of-learning-to-code-59f0</guid>
      <description>&lt;p&gt;This article was originally published back in 2016 and it's based on my own experience as a self-taught programmer.&lt;/p&gt;

&lt;p&gt;One of the reasons I decided to learn how to code was all the hype on social media, which simply gave me the impression of how easy it was going to be.&lt;/p&gt;

&lt;p&gt;I would read all sorts of articles about the software industry, and becoming a programmer seemed kind of easy to me, as I already had a background in math.&lt;/p&gt;

&lt;p&gt;I was wrong. It was not easy, and I made some rookie mistakes that could have been avoided, mistakes I hope you will avoid after reading this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Myth #1: You can monetise your basic programming skills very early&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When I quit my job I had money to live on for a couple of months. “I will monetise my new programming skills on freelancer.com very quickly”, I thought. Boy, was I wrong!&lt;/p&gt;

&lt;p&gt;First of all, freelancer.com is not the place to be for a newbie self-taught programmer, because it is a very competitive marketplace. I mean, you have nothing to show for yourself and you still have to build your portfolio, which is not such a fast process; that’s why you are a beginner in the first place, right?&lt;/p&gt;

&lt;p&gt;So, software is no different from other industries. You have to take some time to learn the basics very well before you can get paid for it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Myth #2: You will quickly find a programming job because there is huge demand in the market&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I was very confident that companies would fight over each other to hire me because they need programmers so badly. Not quite true, as I found out the cruel truth by applying to ~40 companies before someone called me for an interview.&lt;/p&gt;

&lt;p&gt;I realised there is demand, indeed, but not for juniors or entry-entry-entry-level programmers. 90% of the job posts started with “We’re looking for an — insert programming job here — with at least 2–3 years of experience”. The best shot you have as a self-starter is at an internship in a medium or big company, or at some very junior position in a small company — these were the trends I observed, but it does not necessarily have to be this way.&lt;/p&gt;

&lt;p&gt;My strategy was to build some of the front-end projects from the FreeCodeCamp challenges to have a mini portfolio and draw some attention to my resume.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Myth #3: Having a degree in CS is not very important&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I used to think that having a degree in Computer Science (CS) is not so important, and that what you have built is the most important thing for an employer. From my experience, this view is only half true. There are still a lot of companies that seek only CS grads, and with them you have a small chance of being called for an interview.&lt;/p&gt;

&lt;p&gt;One day, an HR company I had been in touch with called to ask me if I would like to send my resume to a German company. I said: “hell yeah!”. I did not hear from them for a week, so I called them to see what had happened. They told me the German company did not take my resume into account because I didn’t have a CS degree, and they look only for CS grads.&lt;/p&gt;

&lt;p&gt;Now, it depends on what type of job you really want. If you aim for a software engineer position, I would suggest getting some online credentials, because it helps a lot; but if you want to be more of a web/mobile developer, your portfolio speaks for itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Myth #4: After a couple of tutorials you’ll be up and running&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Do you know why companies spend so much on marketing? Because it works! The same thing applies to all those articles and blog posts on “how easily you can learn to code by getting through a couple of tutorials”. It is not quite true because, first of all, it depends a lot on how you learn best.&lt;/p&gt;

&lt;p&gt;There are some people who learn by brute force — googling “how do you do x”, maybe watching a short video on YouTube and then starting to code — whereas others really like those long video tutorials. I am somewhere in the middle. I like to watch some videos to get an idea, then start coding, and when something is not working I go directly to Stack Overflow.&lt;/p&gt;

&lt;p&gt;There is no best way of learning to code; there is just everyone’s own way, and you have to find yours. I warn you, you’ll need a couple of weeks to shape your learning style and see what works best for you. Whatever happens, don’t quit, because consistency is key in learning something new.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Myth #5: Learning programming alone is not so bad&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I have always been an autodidact and I learned by myself everything that interested me or triggered my curiosity. I thought that coding would not be too far from that. I was wrong.&lt;/p&gt;

&lt;p&gt;Coding is pretty tough to learn in the beginning, because you can get stuck at something for a day and still not be able to solve it, which weighs down on you at the end of the day and in a couple of weeks can take a toll on you. I was lucky to find out about FreeCodeCamp, but I still wanted to have face-to-face conversations with someone in the same situation as myself, to share our pains or some tricks we learned along the way.&lt;/p&gt;

&lt;p&gt;To sum up, learning how to program on my own was not the most romantic experience, but that does not mean I did not enjoy the hell out of it. My main message is: learning how to program is harder than you think, but it is totally doable. So, keep up the good work and don’t give up, because the day you give up is the day before you succeed.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;If you liked this article and want to see more of these, then follow me on &lt;a href="https://twitter.com/RautaAlin"&gt;twitter&lt;/a&gt; and on &lt;a href="https://dev.to/rautaalin"&gt;dev.to&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>codenewbie</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>A brief summary on the history of AI: Part 2</title>
      <dc:creator>Alin Rauta</dc:creator>
      <pubDate>Tue, 11 Feb 2020 12:20:13 +0000</pubDate>
      <link>https://dev.to/rautaalin/a-brief-summary-on-the-history-of-ai-part-2-3ob</link>
      <guid>https://dev.to/rautaalin/a-brief-summary-on-the-history-of-ai-part-2-3ob</guid>
      <description>&lt;p&gt;This article was originally published on my &lt;a href="https://www.alinrauta.com/"&gt;blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here we go again (1980-1987)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the 1980s a form of AI program called “expert systems” was successfully adopted by corporations around the world. By 1986, an expert system was saving a company an estimated $40 million a year. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project, followed by the United States, which formed the Microelectronics and Computer Technology Corporation (MCC) as a research consortium designed to assure national competitiveness. Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988, including hundreds of companies building expert systems, vision systems, robots, and software and hardware specialized for these purposes.&lt;/p&gt;

&lt;p&gt;Expert systems are programs able to solve problems in a specific domain of knowledge using logical rules derived from the knowledge of experts. The same period saw a “knowledge revolution”, as AI researchers were beginning to believe that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways.&lt;/p&gt;

&lt;p&gt;Another beneficial event for AI was the return of neural networks. John Hopfield’s proof that neural networks could learn information in a new way, together with the popularisation of “backpropagation”, managed to revive the field of connectionism (artificial neural networks).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The second AI winter (1987-1993)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unfortunately, another “AI winter” followed, as many companies fell by the wayside after failing to deliver on extravagant promises. In the late 1980s and early 1990s, AI suffered a series of financial setbacks. Expert systems proved too expensive to maintain and useful in only a few special contexts. Japan’s fifth generation project didn’t achieve its ambitious goals, while the United States cut funding to AI.&lt;/p&gt;

&lt;p&gt;In the same period, a new approach to AI emerged. A group of researchers believed that having a body is essential for an intelligent machine: it has to move, perceive and deal with the real world. The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 1970s from a successful background in theoretical neuroscience to lead the group studying computer vision.&lt;/p&gt;

&lt;p&gt;In terms of methodology, AI adopted the scientific method. To be accepted, hypotheses must be subjected to rigorous empirical experiments, and the results must be analyzed statistically for their significance. A relevant example is the field of speech recognition. In the 1970s, a wide variety of different architectures and approaches were tried. Many of these were rather ad hoc and fragile, and were demonstrated on only a few specially selected examples. In recent years, approaches based on hidden Markov models (HMMs) have come to dominate the area.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It starts to take shape (1993-2011)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The field of AI started to be used successfully throughout the technology industry, though somewhat behind the scenes (after two AI winters caused by unrealistically high expectations, people began to be more prudent). Some of the success was due to increasing computer power, and some was achieved by using the scientific method when doing research. AI was fragmented into competing subfields focused on particular problems or approaches. AI was both more cautious and more successful than it had ever been.&lt;/p&gt;

&lt;p&gt;On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. Moreover, in 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.&lt;/p&gt;

&lt;p&gt;During the 1990s we see the emergence of intelligent agents: systems that perceive their environment and take actions that maximize their chances of success. Despite the successes of intelligent agents, some influential founders of AI, including John McCarthy, Marvin Minsky, Nils Nilsson and Patrick Winston, have expressed discontent with the progress of AI. They think that AI should put less emphasis on creating ever-improved versions of applications that are good at a specific task, such as driving a car, playing chess, or recognizing speech. Instead, they believe AI should return to its roots of striving for, in Simon’s words, “machines that think, that learn and that create.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A new era of AI (2011-present)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first decades of the 21st century brought us access to large amounts of data (known as “big data”), cheaper and faster computers and advanced machine learning techniques. One of these factors has played an influential role in how AI changed its focus: data. Throughout the 60-year history of computer science, the emphasis has been on the algorithm as the main subject of study. But some recent work in AI suggests that for many problems, it makes more sense to worry about the data and be less picky about which algorithm to apply. That’s how important data has become, and why it is probably the most sought-after resource in the world right now.&lt;/p&gt;

&lt;p&gt;AI today is present in many real-world applications, such as self-driving cars, speech recognition, spam filters, social media algorithms that govern your feed, and robotics. These are just a few examples, not an exhaustive list.&lt;/p&gt;

&lt;p&gt;The sources used for this article:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;One of the most influential textbooks in AI: &lt;a href="https://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd/dp/0136042597"&gt;Artificial Intelligence: A Modern Approach (Stuart Russell, Peter Norvig)&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/History_of_artificial_intelligence"&gt;Wikipedia&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you liked this article and want to see more of these, then follow me on &lt;a href="https://twitter.com/RautaAlin"&gt;twitter&lt;/a&gt; and on &lt;a href="https://dev.to/rautaalin"&gt;dev.to&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;P.S. I’m an indie maker and I’m writing a book on the basics of AI. If you want to support me and you’re interested in AI, then you can pre-order my book at a discount here (you won’t get charged until I finish the book): &lt;a href="https://gumroad.com/l/SXpw/sideproject"&gt;https://gumroad.com/l/SXpw/sideproject&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>computervision</category>
    </item>
    <item>
      <title>Learning how to program from scratch</title>
      <dc:creator>Alin Rauta</dc:creator>
      <pubDate>Thu, 06 Feb 2020 15:42:01 +0000</pubDate>
      <link>https://dev.to/rautaalin/learning-how-to-program-from-scratch-3jpi</link>
      <guid>https://dev.to/rautaalin/learning-how-to-program-from-scratch-3jpi</guid>
      <description>&lt;p&gt;This article was originally published back in 2016 and it's based on my own experience as a self-taught programmer.&lt;/p&gt;

&lt;p&gt;I had been thinking for quite some time about learning how to code, after I realised that ideas are cheap and I must learn how to put them into practice. The only way to do it is by learning how to program, or by having a Steve Wozniak as your best friend; not my case, though.&lt;/p&gt;

&lt;p&gt;So, my rationale was the following: by learning how to code I could found my tech startup (yep, I was in startup fever mode), and if that plan failed I could get employed as a programmer for an above-average salary. A pretty good plan, isn’t it? Now let’s march forward, and I’ll tell you my expectations and how reality turned out to be infinitely harsher than I thought.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expectation: Getting into coding without analyzing too much is just fine&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reality: Precious time is wasted and you get even more confused&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I started my coding journey with the idea of learning Swift — the new and shiny programming language of Apple — and making mobile apps for the App Store to earn a few bucks. Why would I do that? Because I had read a couple of articles on TechCrunch and it seemed to me quite a good opportunity (not quite, after all), and how hard can it be to make an app for iPhones and iPads? As it turned out, not as simple as I thought.&lt;/p&gt;

&lt;p&gt;The mistake I made was to start with Swift as my first programming language just because I thought it was easy to make mobile apps and to make some profit out of them. I didn’t do any research to learn the options out there in the programming field. So, after a month I gave up on Swift and started a course on Udemy on web development, which was the starting point for my interest in programming for the web.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Force Awakens&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I began to read more and more articles, blog posts and opinions on what to learn and not to learn as a beginner, and slowly a big picture started to form in my mind.&lt;/p&gt;

&lt;p&gt;Mobile:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apple: Objective-C or Swift&lt;/li&gt;
&lt;li&gt;Android: Java&lt;/li&gt;
&lt;li&gt;Microsoft: .NET&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Web:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTML &amp;amp; CSS + Java/JavaScript/Python/.NET/PHP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Maybe the best option could have been to learn Java, because it’s such an all-around programming language, but I didn’t like the Android platform (still don’t), and going mobile-only seemed too narrow to pursue, so I thought that choosing the web would be the better option for me. I could make a responsive website and have it work on a desktop, a tablet and a smartphone as well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Decision&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, JavaScript seemed the obvious option (it’s one of the essential technologies of internet content production, duh!?) because it’s kind of easier than, let’s say, Java, and you can build something basic really fast, which matters a lot in the beginning because you want to see the concrete results of your work as soon as possible.&lt;/p&gt;

&lt;p&gt;Also, this has a great psychological impact on beginners and self-starters, because it gives you the belief that it’s possible to start from scratch and that it’s not just your crazy optimism at work.&lt;/p&gt;

&lt;p&gt;Of course, at the time I started to learn JS I didn’t know that you can even build a web app with both the client side and the server side written in JS, so in hindsight it’s the best decision I could have made. Since then, I have focused on learning JS and the MEAN stack, especially its client-side part.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lessons Learned&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To wrap it up, if I could improve something about the way I started my coding journey, it would be the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Research more on what you can do and build as a programmer&lt;/li&gt;
&lt;li&gt;Focus on what you really want to build and create&lt;/li&gt;
&lt;li&gt;Focus on the tools which you can use to give life to your ideas&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;If you liked this article and want to see more of these, then follow me on &lt;a href="https://twitter.com/RautaAlin"&gt;twitter&lt;/a&gt; and on &lt;a href="https://dev.to/rautaalin"&gt;dev.to&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>codenewbie</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>A brief summary on the history of AI: Part 1</title>
      <dc:creator>Alin Rauta</dc:creator>
      <pubDate>Mon, 03 Feb 2020 15:58:57 +0000</pubDate>
      <link>https://dev.to/rautaalin/a-brief-summary-on-the-history-of-ai-part-1-1b8h</link>
      <guid>https://dev.to/rautaalin/a-brief-summary-on-the-history-of-ai-part-1-1b8h</guid>
      <description>&lt;p&gt;This article was originally published on my &lt;a href="https://www.alinrauta.com/"&gt;blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The genesis of artificial intelligence (1943–1955):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first work generally recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They came up with a model of artificial neurons in which each neuron has an “on” or “off” state, with a switch to “on” occurring in response to stimulation by a sufficient number of neighboring neurons.&lt;/p&gt;

&lt;p&gt;They showed that any computable function could be computed by some network of connected neurons and that all the logical connectives (and, or, not, etc.) could be implemented by simple net structures. McCulloch and Pitts also suggested that networks could learn.&lt;/p&gt;

&lt;p&gt;In 1949 Donald Hebb demonstrated a simple updating rule for modifying the connection strengths between neurons. His rule, now called Hebbian learning, remains an influential model to this day.&lt;/p&gt;

&lt;p&gt;In 1950, two Harvard undergraduate students (Marvin Minsky and Dean Edmonds) built the first neural network computer, which was able to simulate a network of 40 neurons.&lt;/p&gt;

&lt;p&gt;There were a number of early examples of work that can be characterized as AI, but Alan Turing’s vision was perhaps the most influential. He gave lectures on the topic as early as 1947 at the London Mathematical Society. In his 1950 article “Computing Machinery and Intelligence” he introduced the Turing Test, machine learning, genetic algorithms and reinforcement learning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The birth of artificial intelligence (1956)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the summer of 1956, a two-month workshop was organized at Dartmouth, attended by 10 men. This is, in their own words, the scope of the workshop:&lt;/p&gt;

&lt;p&gt;“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”&lt;/p&gt;

&lt;p&gt;They agreed to name this new field “Artificial Intelligence”, and this is where it all officially began. The Dartmouth workshop did not lead to any new breakthroughs, but it did introduce to each other all the major figures who, along with their students and colleagues at MIT, CMU, Stanford and IBM, would dominate the field for the next decades.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The golden period (1956–1974)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The years after the Dartmouth conference were an era of discovery, of sprinting across new ground and successes. Given the limited computers and programming tools of the time, the programs developed were simply astonishing. Computers solving algebra problems, proving theorems in geometry and learning to speak English were something unreal for most people of that time.&lt;/p&gt;

&lt;p&gt;The most influential and successful directions AI research took in that period were reasoning as search, natural language and micro-worlds.&lt;/p&gt;

&lt;p&gt;All these accomplishments led researchers to express intense optimism, in private and in print, predicting that a fully intelligent machine would be built in less than 20 years (yep, we now know that they overestimated the task at hand).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The first AI winter (1974-1980)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The tremendous optimism expressed by AI researchers created high expectations. Their failure to appreciate how hard the problems they faced were led to a wave of disappointment and a loss of funding, something that today we call an “AI winter”. Thus, the rate of innovation and progress in AI stagnated in this period.&lt;/p&gt;

&lt;p&gt;Now, what were the problems that led to this AI winter? In essence, they are the flip side of the same factors that today make AI fast and useful.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Limited computer power&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At the time there was not enough memory and processing power to make AI useful for anything. It was a mere toy that could do its job only in trivial, simple situations. For instance, natural language understanding was demonstrated with a vocabulary of only 20 words, because that was all that would fit in memory.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Not enough data&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Many important artificial intelligence applications (like computer vision or natural language understanding) simply require enormous amounts of information about the world, which was not available at that time. No one in 1970 could build a database so large, and no one knew how a program might learn so much information.&lt;/p&gt;

&lt;p&gt;To be continued…&lt;/p&gt;

&lt;p&gt;The sources used for this article:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;One of the most influential textbooks in AI: &lt;a href="https://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd/dp/0136042597"&gt;Artificial Intelligence: A Modern Approach (Stuart Russell, Peter Norvig)&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/History_of_artificial_intelligence"&gt;Wikipedia&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you liked this article and want to see more of these, then follow me on &lt;a href="https://twitter.com/RautaAlin"&gt;twitter&lt;/a&gt; and on &lt;a href="https://dev.to/rautaalin"&gt;dev.to&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;P.S. I’m an indie maker and I’m writing a book on the basics of AI. If you want to support me and you’re interested in AI, then you can pre-order my book at a discount here (you won’t get charged until I finish the book): &lt;a href="https://gumroad.com/l/SXpw/sideproject"&gt;https://gumroad.com/l/SXpw/sideproject&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>computervision</category>
    </item>
    <item>
      <title>What is Artificial Intelligence?</title>
      <dc:creator>Alin Rauta</dc:creator>
      <pubDate>Wed, 22 Jan 2020 15:51:23 +0000</pubDate>
      <link>https://dev.to/rautaalin/what-is-artificial-intelligence-4e7c</link>
      <guid>https://dev.to/rautaalin/what-is-artificial-intelligence-4e7c</guid>
      <description>&lt;p&gt;This article was originally published on my &lt;a href="https://www.alinrauta.com/"&gt;blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, let’s take each word of the term individually and try to explain it. Artificial refers to something made by humans (in contrast to things made by nature) and is usually a copy of something natural. So, artificial is about humans trying to replicate Mother Nature. &lt;/p&gt;

&lt;p&gt;What about the meaning of intelligence? Well, that’s when the fun begins. We call ourselves “Homo sapiens”, which in Latin means “wise man”, and we like to take pride in being so intelligent compared to our peers in the animal kingdom. But what does intelligence really mean? Some of us may think of IQ tests, while others may think of survival. I propose the following definition: being able to learn and to apply what was learned (of course, this is a simplified definition, not a thorough one). &lt;/p&gt;

&lt;p&gt;Putting these together we may say that artificial intelligence is “something made by humans that is able to learn and apply what was learned”. &lt;/p&gt;

&lt;p&gt;This is just a starting point because in real life things are far more complex. The textbook definition of AI is the study of "intelligent agents": any device that perceives its environment and takes actions that maximise its chance of successfully achieving its goals. Truth be told, there is no real consensus on an exact definition, even among AI researchers. So, I think the best way to understand the meaning of artificial intelligence is to discuss its pursuits.&lt;/p&gt;

&lt;p&gt;Even though you may have only heard about it recently, AI was founded as an academic discipline in 1956 and over the years it has experienced its share of ups and downs (the history of AI will be discussed in a future article). &lt;/p&gt;

&lt;p&gt;The long-term goal of AI is to eventually carry out any task that a human being can do. It’s akin to reverse engineering our brain and then creating an artificial brain that functions the same way. Since we haven’t yet (fully) discovered how our brain works, this goal remains a long-term one. &lt;/p&gt;

&lt;p&gt;In the meantime, let’s discuss some more approachable pursuits of AI that the reader may have already heard of (there are more than those discussed here).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Machine Learning (ML)&lt;/strong&gt;&lt;br&gt;
Tagging people and objects in photos, content recommendation, Google Search - what do they have in common? They are all applications of machine learning. The next video YouTube suggests to you is based on past data about your behaviour on the platform. The same applies to Facebook’s feed algorithm: the more I click on real estate links, the more Facebook will show me sponsored real estate pages. When you search Google for, say, “java”, it can show you results about coffee or about the programming language depending on your search history. This is all machine learning. It’s about using data to answer questions, finding and extrapolating patterns. &lt;/p&gt;

&lt;p&gt;Let’s take a numerical example. We have a couple of values for two numbers and the task is to find the relationship between them (try to find it yourself before reading the answer): &lt;/p&gt;

&lt;p&gt;x = 0, 1, 2, 3, 4, 5&lt;br&gt;
y = -1, 1, 3, 5, 7, 9&lt;/p&gt;

&lt;p&gt;Using machine learning we can quickly learn that the relationship between the two numbers is the following: y = 2x - 1. Basically, machine learning helped us find a pattern in the numbers (data) we have. &lt;/p&gt;
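&lt;p&gt;To make this concrete, here is a minimal Python sketch that “learns” the line from the six data points above. As an illustration I use the closed-form least-squares solution rather than any particular ML library, and the variable names are my own - but the principle a framework like TensorFlow applies (fitting parameters to observed examples) is the same:&lt;/p&gt;

```python
# Finding the pattern y = 2x - 1 from the data above, using plain
# least squares (no ML library needed). Names here are illustrative.

xs = [0, 1, 2, 3, 4, 5]
ys = [-1, 1, 3, 5, 7, 9]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form fit of a line y = w * x + b through the points.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
w /= sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

print(w, b)  # 2.0 -1.0, i.e. the model has "found" y = 2x - 1
```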

&lt;p&gt;&lt;strong&gt;Natural Language Processing (NLP)&lt;/strong&gt;&lt;br&gt;
Siri, Cortana, Alexa - these are prime examples of NLP at play. Natural Language Processing is about machines understanding what you mean when you say something, getting the context of the conversation. When you ask Siri to play some rock, you are referring to rock songs, not to playing with a (physical) rock. That’s the challenge of NLP: making a machine able to talk with a human being, understanding what is being discussed and even coming up with its own opinions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Robotics&lt;/strong&gt;&lt;br&gt;
Self-driving cars are an example of robotics. A self-driving car takes data from its environment, processes that data and makes a decision. Or take, for example, a smart vacuum cleaner robot. What it really does is take cues (data) from the environment (your room), process them and decide which way to move. If it continuously bumped into your furniture, that wouldn’t be too intelligent, right? Another example of AI in robotics is drone delivery. The road from factory to destination is paved with obstacles, so the ability to make the delivery is a matter of understanding the environment and making decisions accordingly.&lt;/p&gt;
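&lt;p&gt;As a toy illustration of that sense / process / act loop, here is a tiny Python sketch of a hypothetical vacuum robot patrolling a one-dimensional hallway; the whole setup is invented for illustration and is not taken from any real robotics library:&lt;/p&gt;

```python
# A toy sense / process / act loop for a hypothetical vacuum robot
# moving along a 1-D hallway. All names here are illustrative.

def decide(bumped, direction):
    """Reverse direction when the robot senses it would hit something."""
    return -direction if bumped else direction

position, direction = 0, 1
walls = (-2, 3)  # the two ends of the hallway

for _ in range(10):
    bumped = position + direction in walls   # sense the environment
    direction = decide(bumped, direction)    # process the cue and decide
    position += direction                    # act

print(position)  # the robot patrols back and forth, never occupying a wall
```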

&lt;p&gt;&lt;strong&gt;Computer Vision&lt;/strong&gt;&lt;br&gt;
Coffee mug inspection on a production line is an example of computer vision. A system “looks” at a coffee mug and tries to find any evidence of cracks; if none are found, the mug is ready to be packed and shipped to customers. Basically, computer vision deals with making a machine gain understanding from an image or video. That’s what happens when you use Face ID on your iPhone: the phone recognises you and unlocks itself. This is called facial recognition and is widely used in China, where cameras are almost everywhere and can track your behaviour on the streets. They have the ability to catch you jaywalking - information which can then be used to lower your social score (a kind of reputation score).&lt;/p&gt;

&lt;p&gt;AI comprises quite a few subfields, making it a complex and universal field. To be truly understood, it has to be seen through its numerous lenses. That’s why it’s so hard to come up with an encompassing definition. &lt;/p&gt;

&lt;p&gt;However, I will end by giving a shot at defining AI in my own view. For me, Artificial Intelligence is the field that deals with replicating every form of human intelligence with the purpose of creating a machine capable of acting and thinking intelligently.&lt;/p&gt;

&lt;p&gt;I hope you found this article useful and that it got you intrigued about the field of AI. If I managed to do that, then my goal is achieved. &lt;/p&gt;

&lt;p&gt;If you liked this article and want to see more of these, then follow me on &lt;a href="https://twitter.com/RautaAlin"&gt;twitter&lt;/a&gt; and on &lt;a href="https://dev.to/rautaalin"&gt;dev.to&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;P.S. I'm an indie maker and I'm writing a book on the basics of AI. If you want to support me and you're interested in AI, then you can pre-order my book at a discount here (you won't get charged until I finish the book): &lt;a href="https://gumroad.com/l/SXpw/sideproject"&gt;https://gumroad.com/l/SXpw/sideproject&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>computervision</category>
      <category>robotics</category>
    </item>
    <item>
      <title>Reviews on top AI free courses that I've taken</title>
      <dc:creator>Alin Rauta</dc:creator>
      <pubDate>Mon, 13 Jan 2020 14:43:30 +0000</pubDate>
      <link>https://dev.to/rautaalin/reviews-on-top-ai-free-courses-that-i-ve-taken-n0b</link>
      <guid>https://dev.to/rautaalin/reviews-on-top-ai-free-courses-that-i-ve-taken-n0b</guid>
      <description>&lt;p&gt;This article was originally published on my &lt;a href="https://www.alinrauta.com/"&gt;blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Last year I decided to get past the artificial intelligence buzzwords in media articles and really get a clue about the subject. &lt;/p&gt;

&lt;p&gt;The more research I did, the more intrigued and interested in AI I got. It struck me how much AI will impact our lives, and I realised this is the field I want to be in. &lt;/p&gt;

&lt;p&gt;So, I began searching for learning resources and immersed myself in all kinds of AI-related material. This was a natural thing to do, since I had taught myself how to code and I figured I could also teach myself at least the basics of AI.&lt;/p&gt;

&lt;p&gt;After a few months of taking courses, I'll give you my opinion on the most useful free courses I have taken, the ones I'm still working through and, as a bonus, the ones I intend to take in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  Courses I've taken
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://classroom.udacity.com/courses/cs271"&gt;Intro to Artificial Intelligence&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;About the course&lt;/strong&gt;&lt;br&gt;
It's a classic on AI and it happened to be the first course I've ever taken on the subject. It's a comprehensive course that gives you just the right amount of information about all the branches and sub-branches that AI is made of. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the teachers&lt;/strong&gt; &lt;br&gt;
The course is taught by two of the greatest advocates of AI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sebastian Thrun: a former associate professor at Stanford University, co-founder of Udacity, led the team that won the 2005 DARPA Grand Challenge and co-developed Street View at Google.&lt;/li&gt;
&lt;li&gt;Peter Norvig: a director of research at Google and co-author of the leading college text in the field - Artificial Intelligence: A Modern Approach&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
I can't recommend it enough. It's definitely a must.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://course.elementsofai.com/"&gt;Elements of AI&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;About the course&lt;/strong&gt;&lt;br&gt;
This is a text based course and the aspect I loved the most about it was the fact that it makes you ponder about the role artificial intelligence is going to have in your life. I like the structure of the course and how quickly you can check if you really understood something by taking a quiz.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the teachers&lt;/strong&gt; &lt;br&gt;
It's created by Reaktor and the University of Helsinki. It's part of an initiative that wants to encourage as broad a group of people as possible to learn about AI. The goal is to make the course available in all EU languages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
It's a quick and engaging course to take to get the very basics on AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.coursera.org/learn/neural-networks-deep-learning?specialization=deep-learning"&gt;Neural Networks and Deep Learning&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;About the course&lt;/strong&gt;&lt;br&gt;
This one is a bit more advanced in terms of the knowledge you gain upon completion, and it's part of a series of courses on deep learning. I like that it doesn't get too technical, so you can easily come to understand the more advanced tools being used in AI - deep learning, to be exact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the teachers&lt;/strong&gt; &lt;br&gt;
The course is taught by the one and only Andrew Ng: co-founder of Coursera, Adjunct Professor at Stanford University and an outspoken AI advocate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
It's the kind of course you need to take if you're serious about learning AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.coursera.org/learn/deep-neural-network?specialization=deep-learning"&gt;Improving Deep Neural Networks&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;About the course&lt;/strong&gt;&lt;br&gt;
This is more of a sequel to the previous course, and its purpose is to take your knowledge of deep learning one step further. This is where the magic happens in deep learning, because it's largely an empirical process (trial and error) and you need a deeper (yeah, that's a pun) understanding before you know which parameters to tweak.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the teachers&lt;/strong&gt; &lt;br&gt;
The course is taught by the one and only Andrew Ng: co-founder of Coursera, Adjunct Professor at Stanford University and an outspoken AI advocate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
You really need to take this course if you've already taken the previous one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.coursera.org/learn/introduction-tensorflow"&gt;Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;About the course&lt;/strong&gt;&lt;br&gt;
TensorFlow is an open source platform for machine learning, and this course is about teaching you how to use TensorFlow in your AI applications. As a coder I really enjoyed this course because it has less theory and more practice in it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the teachers&lt;/strong&gt; &lt;br&gt;
The course is taught by Laurence Moroney who is an AI advocate at Google and also part of the TensorFlow team. For me, he is one of the best teachers I've ever seen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
It's a friendly course for beginners and with lots of hands-on activities.&lt;/p&gt;

&lt;h2&gt;
  
  
  In Progress
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.coursera.org/learn/convolutional-neural-networks?specialization=deep-learning"&gt;Convolutional Neural Networks&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;About the course&lt;/strong&gt;&lt;br&gt;
This course touches on the concept of computer vision and builds on the knowledge acquired in the previous two courses from the &lt;a href="https://www.coursera.org/specializations/deep-learning"&gt;series&lt;/a&gt;. I can't wait to finish it and gain more understanding of the computer vision field.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the teachers&lt;/strong&gt; &lt;br&gt;
The course is taught by the one and only Andrew Ng: co-founder of Coursera, Adjunct Professor at Stanford University and an outspoken AI advocate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.coursera.org/learn/convolutional-neural-networks-tensorflow"&gt;Convolutional Neural Networks in TensorFlow&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;About the course&lt;/strong&gt;&lt;br&gt;
This is a sequel to the Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning course that I've already taken, and things get even more practical in terms of coding, which makes it highly appealing to coders.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the teachers&lt;/strong&gt; &lt;br&gt;
The course is taught by Laurence Moroney who is an AI advocate at Google and also part of the TensorFlow team. For me, he is one of the best teachers I've ever seen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.coursera.org/learn/machine-learning"&gt;Machine Learning&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;About the course&lt;/strong&gt;&lt;br&gt;
This is probably the reference course on Machine Learning. It's by far the longest and the most technical one from all the courses I've taken. I believe it's worth the effort of finishing the course if you are serious about getting a job in AI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the teachers&lt;/strong&gt; &lt;br&gt;
The course is taught by the one and only Andrew Ng: co-founder of Coursera, Adjunct Professor at Stanford University and an outspoken AI advocate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Courses I intend to take (BONUS)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://korbit.ai/machinelearning"&gt;Learn AI With An AI&lt;/a&gt;&lt;br&gt;
This seems really interesting and it's the next one on my list. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://classroom.udacity.com/courses/ud810"&gt;Introduction to Computer Vision&lt;/a&gt;&lt;br&gt;
This course is a great companion for the Intro to Artificial Intelligence course and I hope it will broaden my knowledge on computer vision. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://developers.google.com/machine-learning/crash-course/"&gt;Machine Learning Crash Course&lt;/a&gt;&lt;br&gt;
This one puts more emphasis on the technical side and it's a good fit after you've dabbled with TensorFlow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.udacity.com/course/intro-to-data-science--ud359"&gt;Intro to Data Science&lt;/a&gt;&lt;br&gt;
One of the hottest jobs in the world right now is Data Scientist, so I think it's really useful to have an idea about the field, which intersects with AI.&lt;/p&gt;

&lt;p&gt;I hope these reviews will be useful for you and I can't wait to hear your feedback or the experiences you had with other AI courses.&lt;/p&gt;

&lt;p&gt;If you liked this article and want to see more of these, then follow me on &lt;a href="https://twitter.com/RautaAlin"&gt;twitter&lt;/a&gt; and on &lt;a href="https://dev.to/rautaalin"&gt;dev.to&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;P.S. I'm an indie maker and I'm writing a book on the basics of AI. If you want to support me and you're interested in AI, then you can pre-order my book at a discount here (you won't get charged until I finish the book): &lt;a href="https://gumroad.com/l/SXpw/sideproject"&gt;https://gumroad.com/l/SXpw/sideproject&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>deeplearning</category>
      <category>computervision</category>
    </item>
    <item>
      <title>Why AI is the new electricity</title>
      <dc:creator>Alin Rauta</dc:creator>
      <pubDate>Thu, 09 Jan 2020 11:35:11 +0000</pubDate>
      <link>https://dev.to/rautaalin/why-ai-is-the-new-electricity-5bkp</link>
      <guid>https://dev.to/rautaalin/why-ai-is-the-new-electricity-5bkp</guid>
      <description>&lt;p&gt;This article was originally published on my &lt;a href="https://www.alinrauta.com/"&gt;blog&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;Artificial intelligence (AI) needs mostly two things to work its magic: data and computing power. Since the beginning of the 21st century we’ve been amassing an astonishing amount of data, and we are definitely not slowing down the pace. According to a report from &lt;a href="https://www.domo.com/solution/data-never-sleeps-6"&gt;DOMO&lt;/a&gt;, “over 2.5 quintillion bytes of data are created every single day, and it’s only going to grow from there. By 2020, it’s estimated that 1.7MB of data will be created every second for every person on earth.” That’s really hard to fathom.&lt;/p&gt;

&lt;p&gt;Regarding computing power, Moore’s law is still standing and we’ve been seeing faster computers every year. If you add algorithmic improvements on top of that, it’s no wonder that AI has started to yield great results. A significant milestone in the development of AI was the triumph of AlphaGo (a computer program that plays the board game Go) in 2017 against Ke Jie (who at the time had held the world No. 1 ranking for two years straight).&lt;/p&gt;

&lt;h2&gt;
  
  
  Real world examples of AI
&lt;/h2&gt;

&lt;p&gt;Now, let’s get down to some real world applications of AI that are already influencing our lives. &lt;strong&gt;Self-driving cars&lt;/strong&gt; are a leading example of AI in practice. They require a combination of search and planning to find the best route from A to B, computer vision to “see” what’s happening around the vehicle, and decision making under uncertainty to cope with a complex and dynamic environment. Tesla is at the forefront of self-driving technology development, and its CEO, Elon Musk, said in April 2019 that we could expect a robo-taxi program (a car with no human driver that drives itself) that same year. Other examples of autonomous systems are delivery robots, flying drones and autonomous ships.&lt;/p&gt;

&lt;p&gt;The main impact of self-driving cars that I see is improved road safety. Eventually, the AI behind self-driving cars will become a better driver than humans and a lot of human lives will be saved. Besides that, drivers will gain time, since they won’t have to pay attention to traffic, and commuting will stop being just a waste of time and nerves. Let’s not forget the potential efficiency gains in logistics chains when transporting goods.&lt;/p&gt;

&lt;p&gt;There is nothing random about the content you’re seeing daily on social media. Behind the scenes, an AI algorithm is feeding you personalised content. This is called &lt;strong&gt;content recommendation&lt;/strong&gt; and is used by the social media giants, streaming services and search engines. According to &lt;a href="https://www.wipo.int/wipo_magazine/en/2019/03/article_0001.html"&gt;Andrew Ng&lt;/a&gt;, a computer science professor at Stanford, “today, the technology’s most lucrative application is probably determining whether consumers will click on an advertisement. Large online platforms are using this technology to create enormous economic value”. While it’s a good thing that we get more of what we like and are interested in, there is also the danger of getting caught in our own echo chambers and becoming easier to manipulate.&lt;/p&gt;

&lt;p&gt;Unlocking your smartphone with face recognition is becoming the norm - nothing special about it. The same goes for automatic tagging on social media or organising your photos by the people in them. This is also an application of AI, and it’s called &lt;strong&gt;image and video processing&lt;/strong&gt;. At the &lt;a href="https://www.themsphub.com/content/ibm-and-the-digital-reinvention-of-the-us-open-2019/"&gt;2019 US Open&lt;/a&gt;, IBM used AI to create videos with the highlights of a tennis match. This kind of technology can be used to create trailers for motion pictures or to extract the gist of long videos, which is really useful in this era of content and knowledge abundance.&lt;/p&gt;

&lt;p&gt;Another use I could see for AI is the prevention of wildfires, like those in Australia, through image processing. Using satellite or drone images to feed an AI, we could assess the smallest possibility of a fire breaking out and alert the authorities, so they can prevent it from spreading. I don’t know if that alone would solve the problem, but I think AI can help in these kinds of situations. On the other hand, just as with content recommendation, people make their own choice whether to use AI for good or for evil. Being able to use AI to generate or alter visual content can lead to the creation of fake videos that we can’t distinguish from reality. This is called a &lt;a href="https://en.wikipedia.org/wiki/Deepfake"&gt;deepfake&lt;/a&gt;: media in which a person in an existing image or video is replaced with someone else’s likeness using artificial neural networks.&lt;/p&gt;

&lt;h2&gt;
  
  
  China has a plan
&lt;/h2&gt;

&lt;p&gt;From these few examples we can see how AI is starting to power more and more activities performed by humans, and also enabling new ones that can’t be done by a person. This insight is not lost on the powerful states of the world: &lt;a href="https://futureoflife.org/ai-policy-china/"&gt;The State Council of China&lt;/a&gt; released in 2017 the “New Generation Artificial Intelligence Development Plan”, which outlines China’s strategy to build a domestic AI industry worth nearly US$150 billion in the next few years and to become the leading AI power by 2030. As we can see in a &lt;a href="https://www.pbs.org/wgbh/frontline/film/in-the-age-of-ai/"&gt;PBS documentary&lt;/a&gt;, in China AI is already used in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;self-driving cars&lt;/li&gt;
&lt;li&gt;drone delivery in rural areas&lt;/li&gt;
&lt;li&gt;algorithms for loan applications&lt;/li&gt;
&lt;li&gt;shopping based on facial recognition&lt;/li&gt;
&lt;li&gt;city surveillance to discourage jaywalking or detect signs of civil unrest&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In its quest for AI supremacy, China is heading towards a surveillance state where cameras are watching you at every street corner. This situation raises questions and debates such as “protection at the expense of privacy”. We are being told that our data is collected for our own good: to get personalised content, tailored just for us, and for our protection. Is that really the way things are? That’s not a simple yes/no question, and this is why we have to educate ourselves about AI and the impact it’s already having on our lives. As with every subject, there are pessimists and optimists. The truth is probably somewhere in the middle, but we have to be informed before forming our own opinion. Otherwise, we’ll just become another brick in the wall.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AI has the potential to change and improve our lives the same way electricity did in the 20th century. It helps us diagnose diseases (sometimes better than humans do), drives us wherever we want or cleans our house. AI can perform repetitive tasks so we don’t have to, letting us devote our time to more creative pursuits. I’m an optimist and I truly believe AI is going to enhance our capabilities and help everybody create wealth.&lt;/p&gt;

&lt;p&gt;P.S. By the way, if you liked this post and got intrigued by the subject, then I have good news. I’m writing a book on the basics of AI that you can pre-order here: &lt;a href="https://gum.co/SXpw/alanturing"&gt;https://gum.co/SXpw/alanturing&lt;/a&gt;. You won’t get charged until the book is ready. Also, by pre-ordering you get a discount, receive emails about the status of the book and get a sneak peek at early material!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
