<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: EdemGold</title>
    <description>The latest articles on DEV Community by EdemGold (@edemgold).</description>
    <link>https://dev.to/edemgold</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F630932%2F9fd880e5-6c38-4808-86c2-3a6fb844c666.jpg</url>
      <title>DEV Community: EdemGold</title>
      <link>https://dev.to/edemgold</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/edemgold"/>
    <language>en</language>
    <item>
      <title>Fear and the Future: AI's Threat to job security - An interview with AI professor Alejandro Piad Morffis</title>
      <dc:creator>EdemGold</dc:creator>
      <pubDate>Mon, 05 Jun 2023 10:01:06 +0000</pubDate>
      <link>https://dev.to/edemgold/fear-and-the-future-ais-threat-to-job-security-an-interview-with-ai-professor-alejandro-piad-morffis-meb</link>
      <guid>https://dev.to/edemgold/fear-and-the-future-ais-threat-to-job-security-an-interview-with-ai-professor-alejandro-piad-morffis-meb</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“It would be pretty damn maddening if it turns out programmers are easier to automate than lawyers.” -Professor Alejandro Piad Morffis&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The adoption of large generative AI models such as ChatGPT, Microsoft Bing, Google Bard, and Stable Diffusion has surged. While the advantages of these models cannot be refuted, their rise has fuelled an exaggerated and harrowing, but not baseless, fear among members of the public that these models could jeopardize job security for millions of workers worldwide.&lt;/p&gt;

&lt;p&gt;As described earlier, the threat of AI to human jobs, while exaggerated and harrowing, isn't baseless. The ability of AI to perform repetitive tasks, process large amounts of information, and mimic human-like decision-making makes it a very good tool for enhancing creativity, productivity, and efficiency.&lt;/p&gt;

&lt;p&gt;To answer the question of whether AI will take our jobs, I have enlisted the help of an expert: Professor Alejandro Piad Morffis, a Professor of AI at the University of Havana, Cuba. The Professor is a mentor, teacher, friend, and, most importantly, an inspiration to me.&lt;/p&gt;

&lt;p&gt;How I hope to approach this&lt;br&gt;
Questions will be prefixed with "Q" and answers with "A". The questions will cover both technical and philosophical ground, as Professor Morffis also has an affinity for the philosophical. It is also important to note that I will provide links to certain concepts that are complex to grasp, for the sake of understanding.&lt;/p&gt;

&lt;p&gt;Let us Begin!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Firstly, could you tell us a bit about yourself and your professional qualifications?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: My name is Alejandro Piad. I majored in Computer Science at the School of Math and Computer Science at the University of Havana, Cuba. I did a Master's in Computer Science at the same school in 2016, and in 2021 I earned a double PhD: one in Computer Science at the University of Alicante and one in Math at the University of Havana. My PhD was in knowledge discovery from natural language, specifically focused on entity and relation extraction from medical text.&lt;/p&gt;

&lt;p&gt;Since grad school I've been teaching at the University of Havana, where I've been the main lecturer in Programming, Compilers, and Algorithm Design, and an occasional lecturer on Machine Learning and other subjects. Since 2022 I've been a full-time professor there. I was also one of the founders of the new Data Science degree, the first of its kind in Cuba, and I wrote its entire Programming and Computing Systems curriculum. I keep doing research in NLP, currently focusing on neuro-symbolic approaches to knowledge discovery, mixing LLMs with symbolic systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: How long have you been working with AI systems?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: I played with AI for games as an undergrad student and did a couple of student projects with computer vision and metaheuristics. After graduating, I started my master's in Computer Graphics, but as a side project I did some research in NLP, specifically on sentiment analysis on Twitter. After finishing the master's I started thinking about doing a PhD and went all in with machine learning. So you could say it's been around 10 years since I started taking AI seriously. My oldest paper related to this stuff is from around 2012.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: That is intensely impressive! You worked with AI way before it became cool. What do you believe is the singular most significant technical advancement in AI which has contributed to its current mainstream adoption and the consequent job displacement threats?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: Well it was always cool, just not outside academia. I'd say, the intersection of two orthogonal developments: the discovery of artificial neural network architectures such as the Transformer, which solved many of the scalability problems of previous architectures, and the invention of hardware where you can run those specific architectures at scale super efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Fascinating! In your professional opinion as an educator and an AI researcher, what industries stand the risk of being replaced by AI?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: I don't know if any industry will be replaced entirely but I'm sure there will be massive changes. In the long term, of course, no one can say anything. But in the short and mid-term (5-10 years), with what we're seeing with language models, my bet is that anyone whose job is predicated on the shallow filtering and processing of natural language will have some reckoning to do. This includes all sorts of managerial roles, including anyone whose job is to read emails, summarize, and build reports. Any kind of secretary who doesn't go beyond note-taking and task scheduling. Copywriters who work with templated content.&lt;/p&gt;

&lt;p&gt;Basically, any content creation task below the level of actual human creativity will be cheaper to automate than paying a human stochastic parrot. So those will go away. One single copywriter using the ChatGPT of the near future will hypothetically be able to craft 3x to 10x more content with the same quality. Not because the model will give them the final quality they aim for, but because the model will give them 90% of the quality, and then the real human creativity comes as a cherry on top and adds the final 10%. Education has to change considerably, too. We can talk more about that if you want.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: This is a really nice angle. You're an educator, and from your piece titled "Rethinking college education," you obviously know how change-averse educational institutions are. Do you think formal education can be depended upon for survival in the post-AI world?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: Yeah, academia will adapt. It is the longest-living institution in Western civilization. It predates all our mainstream religions, and it has survived all major civilization changes. It will change substantially, as it has changed across the ages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: How important is it for society to consider the ethical implications of AI and job displacement?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: All technology has potential issues, and the more advanced the tech, the more pressing it is to consider them. AI is a very powerful technology with the potential to disrupt all our economic relationships. It is something at the level of an industrial revolution, so it will have massive implications, and the concern must be at the same level. One thing that is different from previous disruptive tech is that, historically, new tech has mostly automated the jobs that require the least cognitive skill; it happened with agriculture, manufacturing, mining, etc.&lt;/p&gt;

&lt;p&gt;However, this time we are on the brink of replacing a large number of white-collar jobs while leaving lots of blue-collar jobs undisrupted. So we will have lots of people who are used to working in offices finding that an AI can do their job as well (or maybe slightly better) and much cheaper, so they will either have to upgrade their skills significantly or turn to less skilled jobs. There are other ethical considerations, too: there is a lot of potential for misuse of AI technologies for misinformation, fake news, social disruption, etc. I don't think we are prepared for a massive number of human-like chatbots taking over Twitter, and it is already starting to happen.&lt;/p&gt;

&lt;p&gt;There are also bias issues. As these systems become more and more pervasive, the harms can fall disproportionately on minorities, so not everyone will reap the benefits of AI to the same degree; some minorities will bear the downsides more strongly than others.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: So in other words, we should pay attention because, unlike past forms of automation, AI has the potential to disrupt cognitively tasking jobs as well.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: Yeah, especially those jobs. It will automate more white-collar jobs than blue-collar jobs, at least in the near term. That's something new, and society isn't used to dealing with that kind of job disruption. These are folks who went to college and more or less got convinced their jobs were safe, or at least safer than those of taxi drivers, pizza boys, gardeners, you name it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: This makes sense, let's hit a little bit close to home. Do you believe that an increase in AI capability will ultimately lead to a decrease in overall employment for software engineers/developers?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: In the very long term, all jobs will evolve in unpredictable ways, including software engineering and development. AI and technological advancements will transform these professions to the point where they may seem to have disappeared.&lt;/p&gt;

&lt;p&gt;However, in the short to midterm, a decrease in software engineers is unlikely due to the increasing demand for software across various industries. This growing need for skilled professionals far surpasses the current number of trained individuals capable of building software.&lt;/p&gt;

&lt;p&gt;The AI revolution will follow a similar pattern as previous technological breakthroughs in computer science such as compilers, integrated development environments, cloud computing, containers, code completion and IntelliSense. These innovations made programming more accessible for those without highly formal backgrounds and expanded opportunities for developers.&lt;/p&gt;

&lt;p&gt;Over the next 20 years, we can expect an explosion of people entering the field of software development. Although job roles may change somewhat with evolving technology trends, there will likely be continued growth for those interested in learning how to program and write code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: This is incredible. Although, the release of Generative AI models such as GitHub's Co-Pilot and the GPT family of models has prompted (forgive my pun) rumours about the possibility of software developers losing their jobs to AI. What do you say about this?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: Look at the numbers. All I'm seeing are more job ads for software developers. The trend is still climbing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Jeff Clune, an ex-OpenAI engineer, recently made a prediction at the AI Safety Debate conference, he stated that there was a 30 percent chance that AI will be capable of handling "50% of economically valuable work" by the year 2030, what would this mean for the overall developer labour market?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: First, I have no idea how you would wrap your head around what a 30% chance of automating 50% of jobs even looks like. Is it a 15% expectation of losing your job?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: I guess the numbers do make for a confusing scene. But the essential point is: software developers have lots of reasons to be worried about their job security, many of the tasks they currently spend lots of time on are being automated, and the pace at which that occurs will accelerate.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: Yeah, but the thing is, many of the tasks developers spend most of their time on are pretty low-value, and we would be much better off if they were automated: debugging, writing tests, doing pesky code optimizations. As we automate all of that, we'll have more time for the really important parts of software development, which was never about writing code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Could you speak more about those parts?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: High level and architecture design, user experience, human-computer interaction, and that's just about the software itself. Software engineering is really about the relationship between software and people, both people that make software, and people that use software. So software skills are only half of the story. Understanding your users and colleagues is the other half.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: This is similar to how the core job of accountants is to communicate financial information and not create financial statements, fascinating! It's a fair bet to say AI capabilities will increase in 10-20 years. How prepared are we as a society &amp;amp; species to address the potential job displacement/loss brought on by the potential adoption of AI? How does this affect our sense of purpose as human beings?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: Very hard to say, of course. We're in the middle of an industrial revolution at least as big as the microprocessor revolution or the internet revolution, and no one in 1960 could imagine what 1980 would look like.&lt;/p&gt;

&lt;p&gt;Society is never ready for change, by definition. That's what a system is, something that strives to maintain its status quo. But humans are the most adaptable social species out there, so I think we'll manage. Lots of people will suffer, and that's something we have to work on, definitely, but nothing apocalyptic in my opinion will happen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: There has been a lot of talk about the dystopian potential of AI. Why do you say nothing apocalyptic will occur?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: I still haven't seen any really compelling arguments for the doomsday scenario. Lots of the arguments seem to be predicated on reasoning like "we don't know how this is going to evolve so it will probably kill us all" and that's a classic logical fallacy: you're basically making an inference from lack of knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: This is true. But the AI alignment problem does seem plausible.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: I think we will solve it, at least well enough to avoid apocalyptic scenarios. The most severe alignment issues require you to believe in a powerful version of the orthogonality thesis that I don't believe plausible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Fascinating! Going back to automation, How can we leverage AI to augment human work rather than replace it and what industries are ripe for this kind of collaboration?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: I think that's only natural, as we automate more and more of the menial cognitive work (e.g., summarizing documents or finding relevant references) we humans will get to work on the most creative parts of our jobs. Some jobs have very little of that to begin with, and there I see a challenge because maybe those will be completely or almost completely automated away. But most knowledge work has a creative side, the part where you actually do something novel.&lt;/p&gt;

&lt;p&gt;As to which fields are ripe for this, I can't talk about much else but in education at least I think we're bound for a long-needed revolution. We professors no longer need to be gatekeepers of information. Instead of spending most of our time grading the same essays over and over, we can now focus on giving the best possible personal feedback to each student.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: What are the possible ways AI could revolutionise the educational system? Perhaps more teaching techniques adapted optimally to students.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: There are a few easy ways and then some not-so-easy ones. The first is just a matter of increasing access to knowledge. Nowadays, for almost anything you want to learn, you can find relevant information on the internet, at least to begin with, but it is often split across many sources with disparate levels of detail, contradictory claims, different linguistic styles, etc. The first relatively easy application is just: take this bunch of sources on some topic and give me a high-level overview of the main takeaways, with links to dive deeper. We are pretty close to that (barring the hallucinations, which are a significant problem).&lt;/p&gt;

&lt;p&gt;Another way is by simply freeing educators from menial tasks to give them more time to focus on creating learning experiences. But by far the most important thing I believe is the potential for personalized learning. You could have an AI assistant and tell it "I want to learn how to make a rocket" and it could create a very detailed plan, especially for you, based on what it already knows that you know, it would tell you, here, first watch this video, now take this short course, now read this chapter of this book, ... And guide you for 3 months to learn something very specific.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: This is truly promising! You make a solid case for humans adapting. You spoke about bias; is it fair to say large-scale AI adoption will disproportionately affect minorities? If yes, how can this be combated?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: Yeah definitely, machine learning is by definition trained on the majority, so it will always hurt the most those whose use case doesn't fit the majority for any reason. In particular, whenever you train models to predict human behaviour or interact with humans, it tends to work better for the subpopulations that are best represented in the data.&lt;/p&gt;

&lt;p&gt;What can you do? Start by raising awareness of these issues and make sure to thoroughly test your models for bias. Be very careful about how you collect data: don't take the easy way out and just crawl the web; make an effort to find high-quality, high-diversity sources of data.&lt;/p&gt;

&lt;p&gt;But more than anything include diverse people with diverse points of view in your team. You can't solve a problem you can't see.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: This makes me think. Is there a possibility that access to said AI tools will be relegated to the financially capable?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: I'm hoping the open-source community will make the tools available to all. We already have seen what having access to a free operating system, a free office suite, a free game engine, a free code editor, etc., does for the creative kids of the poorer parts of the world. I trust we will have open-source AI tools as good as commercial ones, the same way we have open-source dev tools as good as commercial ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: This seems really feasible. What advice would you give to your students to prepare them for the workforce in a post-AI world?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: If you are already studying computer science, the basic advice is to focus on fundamentals, not just tools. Tools will change but the fundamentals will remain relevant for a long time. If studying something else, learn how AI can improve your productivity, and learn a lot about its limitations. Use it to make your own work better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: This makes sense; fundamentals will stand the test of time. Thank you very much for your time, Professor Morffis. Any closing words?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A: The AI revolution is here. We can all be a part of it by learning to use this technology for good and improving everyone's lives.&lt;/p&gt;

&lt;p&gt;If you enjoyed this article, &lt;a href="https://edemgold.substack.com/subscribe?"&gt;Invest in the writer&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>career</category>
    </item>
    <item>
      <title>Understanding the Brain Inspired Approach to AI</title>
      <dc:creator>EdemGold</dc:creator>
      <pubDate>Sun, 07 May 2023 00:46:59 +0000</pubDate>
      <link>https://dev.to/edemgold/understanding-the-brain-inspired-approach-to-ai-n5d</link>
      <guid>https://dev.to/edemgold/understanding-the-brain-inspired-approach-to-ai-n5d</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"Our Intelligence is what makes us human and AI is an extension of that Quality" -Yan LeCun&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Since the &lt;a href="https://edemgold.substack.com/p/the-history-of-ai"&gt;advent&lt;/a&gt; of Neural Networks (also known as artificial neural networks), the AI industry has enjoyed &lt;a href="https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report-_Chapter-1.pdf"&gt;unparalleled success&lt;/a&gt;. Neural Networks are the driving force behind modern AI systems, and they are modelled after the human brain. Modern AI research involves the creation and implementation of algorithms that aim to mimic the neural processes of the human brain, with the aim of creating systems that learn and act in ways similar to human beings. In this article we will attempt to understand the brain-inspired approach to building AI systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I hope to approach this
&lt;/h2&gt;

&lt;p&gt;This article will begin by providing background history on how researchers began to model AI on the human brain and end by discussing the challenges currently faced by researchers attempting to imitate it. Below is an in-depth description of what to expect from each section.&lt;/p&gt;

&lt;p&gt;It is worth noting that while this topic is an inherently broad one, I will be as brief and succinct as possible so as to maintain interest while providing a broad overview. I plan to treat sub-topics that have more intricate sub-branches as standalone articles, and I will, of course, leave references at the end of the article.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;History of the brain-inspired approach to AI:&lt;/strong&gt;&lt;br&gt;
Here we'll discuss how the scientists Norbert Wiener and Warren McCulloch brought about the convergence of neuroscience and computer science, how Frank Rosenblatt's Perceptron was the first real attempt to mimic human intelligence, and how its failure spurred the groundbreaking work that would serve as the platform for Neural Networks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How the human brain works and how it relates to AI systems:&lt;/strong&gt;&lt;br&gt;
In this section we'll dive into the biological basis for the brain-inspired approach to AI. We will discuss the basic structure and functions of the human brain, understand its core building block, the neuron, and learn how neurons work together to process information and enable complex actions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Core Principles behind the brain-inspired approach to AI:&lt;/strong&gt;&lt;br&gt;
Here we will discuss the fundamental concepts behind the brain-inspired approach to AI. We will explain concepts such as neural networks, hierarchical processing, and plasticity, and how techniques like parallel processing, distributed representations, and recurrent feedback help AI mimic the brain's functioning.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Challenges in building AI systems modelled after the human brain:&lt;/strong&gt;&lt;br&gt;
Here we will talk about the challenges and limitations inherent in attempting to build systems that mimic the human brain, such as the complexity of the brain and the lack of a unified theory of cognition, and explore the ways these challenges and limitations are being addressed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let us begin!&lt;/p&gt;

&lt;h2&gt;
  
  
  The History of the brain-inspired approach to AI
&lt;/h2&gt;

&lt;p&gt;The drive to build machines capable of intelligent behaviour owes much of its inspiration to the MIT Professor &lt;a href="https://en.wikipedia.org/wiki/Norbert_Wiener"&gt;Norbert Wiener&lt;/a&gt;. Wiener was a child prodigy who could read by the age of three. He had a broad knowledge of various fields such as mathematics, neurophysiology, medicine, and physics. Wiener believed that the main opportunities in science lay in exploring what he termed &lt;em&gt;Boundary Regions&lt;/em&gt;: areas of study that are not clearly within a single discipline but are rather a mixture of disciplines, like the study of medicine and engineering coming together to create the field of Medical Engineering. He was quoted as saying:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"If the difficulty of a physiological problem is mathematical in nature, ten physiologists ignorant of mathematics will get precisely as far as one physiologist ignorant of mathematics"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In 1934, Wiener and a couple of other academics gathered monthly to discuss papers involving boundary region science. Wiener was quoted as saying:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"It was a perfect catharsis for half-baked ideas, insufficient self-criticism, exaggerated self-confidence and pomposity"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wyLdpych--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5f3q020mxgygnru49wd0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wyLdpych--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5f3q020mxgygnru49wd0.jpg" alt="Norman Weiner" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From these sessions and from his own personal research, Wiener learned about new research on biological nervous systems as well as pioneering work on electronic computers. His natural inclination was to blend these two fields, and so a relationship between neuroscience and computer science was formed; this relationship became the cornerstone for the creation of artificial intelligence as we know it.&lt;/p&gt;

&lt;p&gt;After World War II, Wiener began forming theories about intelligence in both humans and machines, and this new field was named &lt;em&gt;&lt;a href="https://en.wikipedia.org/wiki/Cybernetics"&gt;Cybernetics&lt;/a&gt;&lt;/em&gt;. Wiener had successfully gotten scientists talking about the possibility of biology fusing with engineering, and one of those scientists was a neurophysiologist named &lt;a href="https://en.wikipedia.org/wiki/Warren_Sturgis_McCulloch"&gt;Warren McCulloch&lt;/a&gt;. McCulloch had dropped out of Haverford and gone to Yale to study philosophy and psychology. While attending a scientific conference in New York, he came in contact with papers written by colleagues on biological feedback mechanisms. The following year, in collaboration with his brilliant 18-year-old protégé Walter Pitts, McCulloch proposed a theory about how the brain works: a theory that would help foster the widespread perception that computers and brains function in essentially the same way.&lt;/p&gt;

&lt;p&gt;They based their conclusions on research by McCulloch on the ability of neurons to process binary signals (for the uninitiated, computers communicate via binary numbers). This theory became the foundation for the first model of an artificial neural network, which was named the McCulloch-Pitts Neuron (MCP).&lt;/p&gt;

&lt;p&gt;The MCP served as the foundation for the creation of the first ever neural network, which came to be known as &lt;a href="https://edemgold.substack.com/p/the-history-of-ai"&gt;the Perceptron&lt;/a&gt;. The Perceptron was created by the psychologist &lt;a href="https://en.wikipedia.org/wiki/Frank_Rosenblatt"&gt;Frank Rosenblatt&lt;/a&gt; who, inspired by the synapses in the brain, reasoned that since the human brain can process and classify information through synapses (communication between neurons), a digital computer could do the same via a neural network. The Perceptron essentially scaled the MCP neuron from one artificial neuron into a network of neurons. Unfortunately, the Perceptron had technical challenges that limited its practical application; most notable was its inability to perform complex classification tasks (for example, a perceptron could not classify between a cat, a dog, and a bird).&lt;/p&gt;
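&lt;p&gt;To make the idea concrete, here is a minimal sketch of a single perceptron in Python (an illustration of the concept, not Rosenblatt's original implementation): it computes a weighted sum of its inputs, fires if the sum crosses a threshold, and nudges its weights after every mistake.&lt;/p&gt;

```python
# A minimal single-neuron perceptron: weighted sum + threshold,
# trained with Rosenblatt's error-correction rule.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Fire (1) if the weighted sum crosses the threshold, else 0.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when the prediction is already correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Logical AND is linearly separable, so the perceptron learns it.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y_and = [0, 0, 0, 1]
w, b = train_perceptron(X, y_and)
print([predict(w, b, x) for x in X])  # [0, 0, 0, 1]
```

&lt;p&gt;The same training loop fails on data that is not linearly separable, which is exactly the class of limitation that drew the criticism discussed next.&lt;/p&gt;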

&lt;p&gt;In 1969, a book published by &lt;a href="https://en.wikipedia.org/wiki/Marvin_Minsky"&gt;Marvin Minsky&lt;/a&gt; and &lt;a href="https://en.wikipedia.org/wiki/Seymour_Papert"&gt;Seymour Papert&lt;/a&gt; titled &lt;em&gt;Perceptrons&lt;/em&gt; laid out in detail the flaws of the Perceptron, and because of that, research on Artificial Neural Networks stagnated until the proposal of Back Propagation by &lt;a href="https://en.wikipedia.org/wiki/Paul_Werbos"&gt;Paul Werbos&lt;/a&gt;. Back Propagation aimed to solve the problem of classifying complex data that hindered the industrial application of Neural Networks at the time. It was inspired by synaptic plasticity: the way the brain modifies the strength of connections between neurons to improve performance. Back Propagation mimics this strengthening of connections via a process called weight adjustment.&lt;/p&gt;

&lt;p&gt;Despite the early proposal by Paul Werbos, the concept of back propagation only gained widespread adoption when researchers such as &lt;a href="https://en.wikipedia.org/wiki/David_Rumelhart"&gt;David Rumelhart&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/Geoffrey_Hinton"&gt;Geoffrey Hinton&lt;/a&gt;, and &lt;a href="https://en.wikipedia.org/wiki/Ronald_J._Williams"&gt;Ronald Williams&lt;/a&gt; published papers demonstrating its effectiveness for training neural networks. The implementation of back propagation led to the creation of Deep Learning, which powers most of the AI systems in use today.&lt;/p&gt;
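&lt;p&gt;The "weight adjustment" at the heart of back propagation can be illustrated with a toy example (a sketch of the gradient-descent update for a single sigmoid neuron, not the full multi-layer algorithm): the connection weight is repeatedly nudged in the direction that reduces the error between the neuron's output and a target, much like a synapse being weakened or strengthened.&lt;/p&gt;

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(w, b, x, target, lr=0.5, steps=300):
    """Repeatedly adjust the weight and bias to shrink the output error."""
    for _ in range(steps):
        out = sigmoid(w * x + b)  # forward pass
        # Gradient of the squared error with respect to the pre-activation:
        # (out - target) from the loss, out * (1 - out) from the sigmoid.
        grad = (out - target) * out * (1 - out)
        w -= lr * grad * x        # weaken/strengthen the "synapse"
        b -= lr * grad
    return sigmoid(w * x + b)

before = sigmoid(0.6 * 1.0 + 0.9)        # initial output, roughly 0.82
after = fit(0.6, 0.9, x=1.0, target=0.0)
print(before, after)                     # the output moves towards the target
```

&lt;p&gt;Back propagation proper applies this same gradient rule layer by layer, propagating the error backwards through the whole network.&lt;/p&gt;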

&lt;blockquote&gt;
&lt;p&gt;"People are smarter than today's computers because the brain employs a basic computational architecture that is more suited to deal with a central aspect of the natural information processing tasks that people are so good at." - Parallel Distributed Processing&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How the human brain works and how it relates to AI systems
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wOQRcVTS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/owbb2xn5g7naf2vr9okf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wOQRcVTS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/owbb2xn5g7naf2vr9okf.png" alt="A biological neuron beside an Artificial neuron" width="602" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have discussed how researchers began to model AI on the human brain; let us now look at how the brain works and define the relationship between the brain and AI systems. &lt;/p&gt;

&lt;h3&gt;
  
  
  How the brain works: A simplified description
&lt;/h3&gt;

&lt;p&gt;The human brain processes thoughts using neurons. A neuron is made up of three core sections: the Dendrite, the Soma, and the Axon. The Dendrite is responsible for receiving signals from other neurons, the Soma processes the information received from the Dendrite, and the Axon transfers the processed information to the Dendrite of the next neuron in the sequence. &lt;/p&gt;

&lt;p&gt;To grasp how the brain processes a thought, imagine you see a car coming towards you. Your eyes immediately send electrical signals to your brain through the optic nerve, and the brain forms a chain of neurons to make sense of the incoming signal. The first neuron in the chain collects the signal through its &lt;strong&gt;Dendrites&lt;/strong&gt; and passes it to the &lt;strong&gt;Soma&lt;/strong&gt; for processing; once the Soma finishes its task, it hands the signal to the &lt;strong&gt;Axon&lt;/strong&gt;, which sends it on to the Dendrites of the next neuron in the chain. The junction where an Axon passes information to a Dendrite is called a Synapse. The process continues until the brain finds a &lt;strong&gt;Spatiotemporal Synaptic Input&lt;/strong&gt; (that's scientific lingo for: the brain keeps processing until it finds an optimal response to the signal sent to it) and then sends signals to the necessary effectors, e.g. your legs, telling them to run away from the oncoming car.&lt;/p&gt;

&lt;h3&gt;
  
  
  The relationship between the brain and AI systems
&lt;/h3&gt;

&lt;p&gt;The relationship between the brain and AI is largely mutually beneficial, with the brain being the main source of inspiration behind the design of AI systems, and advances in AI leading to a better understanding of the brain and how it works. &lt;/p&gt;

&lt;p&gt;There is a reciprocal exchange of knowledge and ideas when it comes to the brain and AI, and there are several examples that attest to the positively symbiotic nature of this relationship;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Neural Networks:&lt;/strong&gt; Arguably the most significant contribution of the human brain to the field of Artificial Intelligence is the creation of Neural Networks. In essence, Neural Networks are computational models that mimic the function and structure of biological neurons; their architecture and learning algorithms are largely inspired by the way neurons in the brain interact and adapt.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Brain Simulations:&lt;/strong&gt; AI systems have been used to &lt;a href="https://www.frontiersin.org/articles/10.3389/fncom.2020.00016/full"&gt;simulate&lt;/a&gt; the human brain and study its interactions with the physical world. For example, researchers have used Machine Learning techniques to simulate the activity of biological neurons involved in visual processing, and the results have provided insight into how the brain handles visual information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Insights into the brain:&lt;/strong&gt; Researchers have begun using Machine Learning algorithms to analyse and gain insights from brain data such as fMRI scans. These insights reveal patterns and relationships which would otherwise have remained hidden; they help in the understanding of cognitive functions, memory, and decision-making, and they also aid in the treatment of neurological illnesses such as Alzheimer's disease. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Core Principles behind the brain inspired approach to AI
&lt;/h2&gt;

&lt;p&gt;Here we will discuss several concepts which aid AI in imitating the way the human brain functions. These concepts have helped AI researchers create more powerful and intelligent systems which are capable of performing complex tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Neural Networks
&lt;/h3&gt;

&lt;p&gt;As discussed earlier, neural networks are arguably the most significant contribution of the human brain to the field of Artificial Intelligence. In essence, Neural Networks are computational models that mimic the function and structure of biological neurons. The networks are made up of layers of interconnected nodes, called artificial neurons, which process and transmit information, similar to what is done by the dendrites, somas, and axons in biological neural networks. Neural Networks are architected to learn from past experience, the same way the brain does.&lt;/p&gt;
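
&lt;p&gt;To make this concrete, here is a minimal sketch (in plain Python, with hand-picked illustrative weights) of how signals flow through a tiny feed-forward network, mirroring the dendrite-soma-axon relay described above:&lt;/p&gt;

```python
import math

def sigmoid(x):
    # Squashing function, loosely analogous to a neuron deciding how strongly to fire.
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    # An artificial neuron: a weighted sum of incoming signals (the "dendrites"),
    # then an activation function (the "soma" processing them).
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def forward(inputs, hidden_layer, output_neuron):
    # Signals flow layer to layer, like axons passing to the next layer's dendrites.
    hidden = [neuron(inputs, w, b) for w, b in hidden_layer]
    return neuron(hidden, *output_neuron)

# Hypothetical hand-picked weights, purely to show the data flow.
hidden_layer = [([0.5, -0.6], 0.1), ([0.3, 0.8], -0.2)]
output_neuron = ([1.2, -0.7], 0.05)
y = forward([1.0, 0.5], hidden_layer, output_neuron)
print(round(y, 3))
```

&lt;p&gt;Real networks learn their weights from data rather than having them set by hand; the point here is only the layered flow of information.&lt;/p&gt;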

&lt;h3&gt;
  
  
  Distributed Representations
&lt;/h3&gt;

&lt;p&gt;Distributed representations are a way of encoding a concept or idea in a neural network as a pattern of activation spread across several nodes, rather than in a single dedicated node. For example, the concept of smoking could be represented (encoded) by a certain set of nodes in a neural network, so that when the network comes across an image of a man smoking it uses those nodes to make sense of the image (it's a lot more complex than that, but this captures the idea). This technique helps AI systems remember complex concepts, and the relationships between them, the same way the brain recognises and remembers complex stimuli.&lt;/p&gt;
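
&lt;p&gt;A rough sketch of the idea in Python (the activation patterns are invented for illustration): each concept is a pattern across the same shared units, and related concepts end up with overlapping patterns:&lt;/p&gt;

```python
import math

# Hypothetical distributed representations: each concept is a pattern of
# activation spread across the same six shared units, not one dedicated node.
concepts = {
    "smoking":   [0.9, 0.1, 0.8, 0.0, 0.3, 0.0],
    "cigarette": [0.8, 0.2, 0.9, 0.0, 0.2, 0.1],
    "bicycle":   [0.0, 0.9, 0.1, 0.8, 0.0, 0.7],
}

def cosine(a, b):
    # Concepts whose activation patterns overlap score as more similar.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(round(cosine(concepts["smoking"], concepts["cigarette"]), 2))  # high overlap
print(round(cosine(concepts["smoking"], concepts["bicycle"]), 2))    # low overlap
```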

&lt;h3&gt;
  
  
  Recurrent Feedback
&lt;/h3&gt;

&lt;p&gt;This is a technique used in training AI models where the output of a neural network is fed back as input, allowing the network to integrate its own output as extra information during training. This is similar to how the brain uses feedback loops to adjust its internal model based on previous experience. &lt;/p&gt;
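
&lt;p&gt;A minimal sketch of the idea, assuming invented weights: each step's output is fed back in alongside the next input, so the network carries information forward through the sequence:&lt;/p&gt;

```python
import math

def step(x, prev_out, w_in=0.7, w_back=0.5, bias=-0.1):
    # The previous output is fed back in as an extra input: a feedback loop.
    return math.tanh(w_in * x + w_back * prev_out + bias)

out = 0.0
for x in [1.0, 0.5, -0.3, 0.8]:  # a short input sequence
    out = step(x, out)           # each step sees the last step's output
print(round(out, 3))
```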

&lt;h3&gt;
  
  
  Parallel Processing
&lt;/h3&gt;

&lt;p&gt;Parallel processing involves breaking a complex computational task into smaller pieces and processing those pieces on separate processors simultaneously to improve speed. This approach enables AI systems to process more input data faster, similar to how the brain is able to perform different tasks at the same time (multi-tasking).&lt;/p&gt;
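
&lt;p&gt;A small illustration in Python: a task is split into chunks that workers process concurrently, and the partial results are combined. (For CPU-bound work, real speedups typically require separate processes rather than threads; threads are used here only to keep the sketch simple and portable.)&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Each worker handles one small piece of the larger task.
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

# The four chunks are processed concurrently, then the partial results merged.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_chunk, chunks))

total = sum(partials)
assert total == sum(x * x for x in data)  # same answer as the serial version
print(total)
```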

&lt;h3&gt;
  
  
  Attention Mechanisms
&lt;/h3&gt;

&lt;p&gt;This is a technique which enables AI models to focus on specific parts of the input data. It is commonly employed in areas such as Natural Language Processing, which deal with complex and cumbersome data. It is inspired by the brain's ability to attend to only specific parts of a largely distracting environment, like your ability to tune into and follow one conversation out of a cacophony of conversations. &lt;/p&gt;
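
&lt;p&gt;The core computation can be sketched as scaled dot-product attention, here in plain Python with toy vectors: each key is scored against the query, the scores are normalised, and the values are averaged according to those weights:&lt;/p&gt;

```python
import math

def softmax(xs):
    # Turn raw scores into weights that sum to 1.
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Score each key against the query, normalise, then take a weighted
    # average of the values: the model "attends" to the relevant parts.
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

# Toy vectors: the query matches the first key, so most of the attention
# weight goes to the first value.
query  = [1.0, 0.0]
keys   = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention(query, keys, values)
print([round(x, 2) for x in out])
```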

&lt;h3&gt;
  
  
  Reinforcement Learning
&lt;/h3&gt;

&lt;p&gt;Reinforcement Learning is a technique for training AI systems inspired by how human beings learn skills through trial and error. It involves an AI agent receiving rewards or punishments based on its actions, which enables the agent to learn from its mistakes and act more effectively in the future (this technique is commonly used to build game-playing agents).&lt;/p&gt;
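
&lt;p&gt;A minimal sketch of the idea, using an invented two-armed bandit problem: the agent tries actions, receives rewards, and gradually shifts towards the action that pays off more:&lt;/p&gt;

```python
import random

random.seed(0)

# A hypothetical two-armed bandit: arm 1 pays off far more often than arm 0.
def pull(arm):
    roll = random.randrange(5)
    # Arm 0 pays with probability 1/5, arm 1 with probability 4/5.
    return float(roll == 0) if arm == 0 else float(roll != 0)

estimates = [0.0, 0.0]   # the agent's running estimate of each arm's value
counts = [0, 0]

for step in range(2000):
    # Mostly exploit the best-known arm, but explore at random now and then.
    arm = estimates.index(max(estimates))
    if random.randrange(10) == 0:          # roughly 10% exploration
        arm = random.randrange(2)
    reward = pull(arm)                     # the reward (or lack of one) is the feedback
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # learn from it

print(counts)  # the agent ends up pulling the better arm far more often
```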

&lt;h3&gt;
  
  
  Unsupervised Learning
&lt;/h3&gt;

&lt;p&gt;The brain is constantly receiving new streams of data in the form of sounds, visual content, sensations on the skin, etc., and it has to make sense of it all, forming a coherent and logical understanding of how all these seemingly disparate events affect its physical state.&lt;/p&gt;

&lt;p&gt;Take this analogy as an example: you feel water drip on your skin, you hear droplets drumming on rooftops, and you feel your clothes getting heavy; in that instant you know rain is falling. You then search your memory to ascertain whether you carried an umbrella. If you did, you are fine; if not, you check the distance from your current location to your home. If it is close, you are fine; otherwise, you try to gauge how intense the rain will become. If it is a light drizzle you can attempt to continue the journey home, but if it is priming to become a downpour, then you have to find shelter.&lt;/p&gt;

&lt;p&gt;This ability to make sense of seemingly disparate data points (water, sound, feeling, distance) is implemented in Artificial Intelligence through a technique called Unsupervised Learning. It is a training technique in which AI systems learn to make sense of raw, unstructured data without explicit labelling (no one tells you rain is falling when it is falling, do they?).&lt;/p&gt;
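
&lt;p&gt;A minimal sketch of unsupervised learning in Python: k-means clustering discovers the two groups hidden in unlabelled data entirely on its own (the data points are invented):&lt;/p&gt;

```python
# No labels are given; the algorithm discovers the two groups by itself.
data = [1.0, 1.2, 0.8, 1.1, 9.0, 9.2, 8.8, 9.1]

def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        # Assign each point to its nearest centroid...
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # ...then move each centroid to the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

centroids = kmeans(data, centroids=[0.0, 5.0])
print(sorted(round(c, 2) for c in centroids))  # two clusters emerge, near 1 and 9
```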

&lt;h2&gt;
  
  
  Challenges in Building Brain Inspired AI systems
&lt;/h2&gt;

&lt;p&gt;We have spoken about how the approach for using the brain as inspiration for AI systems came about, how the brain relates to AI, and the core principles behind brain inspired AI. In this section, we are going to talk about some of the technical and conceptual challenges inherent in building AI systems modelled after the human brain. &lt;/p&gt;

&lt;h3&gt;
  
  
  Complexity
&lt;/h3&gt;

&lt;p&gt;This is a pretty daunting challenge. The brain inspired approach to AI is based on modelling the brain and building AI systems after that model, but the human brain is an inherently complex system, with roughly 100 billion neurons and on the order of hundreds of trillions of synaptic connections (each neuron has thousands of synaptic connections with other neurons), and these synapses are constantly interacting in dynamic and unpredictable ways. Building AI systems that aim to mimic, and hopefully exceed, that complexity is in itself a challenge and requires equally complex statistical models.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Requirements for training Large Models
&lt;/h3&gt;

&lt;p&gt;OpenAI's GPT-4, presumably at the cutting edge of AI models, reportedly required around 47 gigabytes of training data; in comparison, its predecessor GPT-3 was trained on about 17 gigabytes, nearly three times less. Imagine how much GPT-5 will be trained on. As has been proven, in order to get acceptable results, brain inspired AI systems require vast amounts of data, especially for auditory and visual tasks, and this places a lot of emphasis on the creation of data collection pipelines. For instance, Tesla has 780 million miles of driving data, and its data collection pipeline adds another million miles every 10 hours. &lt;/p&gt;

&lt;h3&gt;
  
  
  Energy Efficiency
&lt;/h3&gt;

&lt;p&gt;Building brain inspired AI systems that emulate the brain's energy efficiency is a huge challenge. The human brain consumes approximately 20 watts of power; in comparison, Tesla's Autopilot, running on specialized chips, consumes about 2,500 watts, and &lt;a href="https://ts2.space/en/exploring-the-environmental-footprint-of-gpt-4-energy-consumption-and-sustainability/#:~:text=The%20paper%20found%20that%20the,hours%20(MWh)%20of%20energy."&gt;it takes around&lt;/a&gt; 7.5 megawatt hours (MWh) of energy to train an AI model the size of ChatGPT.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Explainability Problem
&lt;/h3&gt;

&lt;p&gt;Developing brain inspired AI systems that can be trusted by users is crucial to the growth and adoption of AI, but therein lies the problem: the brain, which these AI systems are meant to be modelled after, is essentially a black box. The inner workings of the brain are not easy to understand; while there is no lack of research on the biological structure of the human brain, there is a real lack of empirical information on its functional qualities (how thought is formed, how deja vu occurs, and so on), and this creates a problem for building brain inspired AI systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Interdisciplinary Requirements
&lt;/h3&gt;

&lt;p&gt;Building brain inspired AI systems requires the knowledge of experts from different fields, such as Neuroscience, Computer Science, Engineering, Philosophy, and Psychology. But therein lies a challenge, both logistical and foundational: assembling experts from different fields is financially tasking, and there is also the problem of knowledge conflict; it is really difficult to get an engineer to care about the psychological effects of what he or she is building, not to mention the problem of egos colliding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In conclusion, while the brain inspired approach is the obvious route to building AI systems (we have discussed why), it is fraught with challenges. Still, we can look to the future with hope, as efforts are being made to solve these problems.&lt;/p&gt;

&lt;p&gt;If you enjoyed the article, you can &lt;a href="//edemgold.substack.com"&gt;subscribe to my newsletter&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.freecodecamp.org/learn/machine-learning-with-python"&gt;FreeCode Camp Machine Learning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.tesla.com/VehicleSafetyReport#:~:text=Because%20every%20Tesla%20is%20connected,the%20different%20ways%20accidents%20happen."&gt;Tesla' Vehicle Safety Report&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/1906.01703"&gt;Basic Neural Units of the Brain: Neurons, Synapses and Action Potential&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/pdf/2303.15935.pdf"&gt;When Brain inspired AI meets AGI &lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://towardsdatascience.com/perceptron-the-artificial-neuron-4d8c70d5cc8d"&gt;Perceptron: The artificial Neuron (An Essential Upgrade To The McCulloch-Pitts Neuron)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/towards-data-science/mcculloch-pitts-model-5fdf65ac5dd1"&gt;McCulloch-Pitts Neuron — Mankind’s First Mathematical Model Of A Biological Neuron&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://axon.cs.byu.edu/Dan/678/papers/Recurrent/Werbos.pdf"&gt;BackPropagation through time: What it does and how to do it&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://edemgold.substack.com/p/the-history-of-ai"&gt;The history of AI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.frontiersin.org/articles/10.3389/fncom.2020.00016/full"&gt;BrainOS: A Novel Artificial Brain-Alike Automatic Machine Learning Framework&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>programming</category>
    </item>
    <item>
      <title>The History of AI</title>
      <dc:creator>EdemGold</dc:creator>
      <pubDate>Sat, 08 Apr 2023 13:28:20 +0000</pubDate>
      <link>https://dev.to/edemgold/the-history-of-ai-58g</link>
      <guid>https://dev.to/edemgold/the-history-of-ai-58g</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. - John McCarthy.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;From the dawn of time, human beings have been fascinated by the idea of building machines that display intelligence. The Ancient Egyptians and Romans, for instance, were awe-struck by religious statues, clearly manipulated by priests, that gestured and delivered prophecies. &lt;/p&gt;

&lt;p&gt;Medieval lore is packed with similar tales of objects that could move and talk like their human masters. There are stories of sages from the Middle Ages who had access to a &lt;a href="https://en.wikipedia.org/wiki/Homunculus"&gt;homunculus&lt;/a&gt; - a small artificial man that was actually a living, sentient being. In fact, the 16th century Swiss philosopher &lt;a href="https://en.wikipedia.org/wiki/Theophrastus"&gt;Theophrastus Bombastus&lt;/a&gt; was quoted as saying, "We shall be like gods. We shall duplicate God's greatest miracle - the creation of man." Our species' latest attempt at creating synthetic intelligence is now known as AI.&lt;/p&gt;

&lt;p&gt;In this article I hope to provide a comprehensive history of Artificial Intelligence, right from its lesser-known days (when it wasn't even called AI) to the current age of Generative AI. &lt;/p&gt;

&lt;h2&gt;
  
  
  How I hope to approach this
&lt;/h2&gt;

&lt;p&gt;This article breaks the history of AI down into nine milestones. Each milestone will be expanded upon; the milestones will not be treated as disparate and unrelated, rather, their links to the overall history of Artificial Intelligence and their progression from the milestones immediately before them will be discussed as well. Below are the milestones to be covered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Dartmouth Conference&lt;/li&gt;
&lt;li&gt;The Perceptron&lt;/li&gt;
&lt;li&gt;The AI boom of the 1960s&lt;/li&gt;
&lt;li&gt;The AI winter of the 1980s &lt;/li&gt;
&lt;li&gt;Expert Systems&lt;/li&gt;
&lt;li&gt;The Emergence of Natural Language Processing and Computer Vision&lt;/li&gt;
&lt;li&gt;The Rise of Big Data&lt;/li&gt;
&lt;li&gt;Deep Learning&lt;/li&gt;
&lt;li&gt;Generative AI&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Dartmouth Conference
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://en.wikipedia.org/wiki/Dartmouth_Conference"&gt;Dartmouth Conference&lt;/a&gt; of 1956 is a seminal event in the history of AI. It was a summer research project that took place at Dartmouth College in New Hampshire, USA. The conference was the first of its kind, in the sense that it brought together researchers from seemingly disparate fields of study - Computer Science, Mathematics, Physics, and others - with the sole aim of exploring the potential of synthetic intelligence (the term AI hadn't been coined yet). The participants included &lt;a href="https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)"&gt;John McCarthy&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/Marvin_Minsky"&gt;Marvin Minsky&lt;/a&gt;, and other prominent scientists and researchers.&lt;/p&gt;

&lt;p&gt;During the conference, the participants discussed a wide range of topics related to AI, such as natural language processing, problem-solving, and machine learning. They also laid out a roadmap for AI research, including the development of programming languages and algorithms for creating intelligent machines. The conference is considered a seminal moment in the history of AI, as it marked the birth of the field and also the moment the name &lt;em&gt;"Artificial Intelligence"&lt;/em&gt; was coined.&lt;/p&gt;

&lt;p&gt;The Dartmouth Conference had a significant impact on the overall history of AI. It helped to establish AI as a field of study and encouraged the development of new technologies and techniques. The participants set out a vision for AI, which included the creation of intelligent machines that could reason, learn, and communicate like human beings. This vision sparked a wave of research and innovation in the field.&lt;/p&gt;

&lt;p&gt;Following the conference, John McCarthy and his colleagues went on to develop the first AI programming language, &lt;a href="https://en.wikipedia.org/wiki/Lisp_(programming_language)"&gt;LISP&lt;/a&gt;. This language became a foundation of AI research and remains in use today.&lt;/p&gt;

&lt;p&gt;The conference also led to the establishment of AI research labs at several universities and research institutions, including &lt;a href="https://mitibmwatsonailab.mit.edu/"&gt;MIT&lt;/a&gt;, &lt;a href="https://ai.cs.cmu.edu/"&gt;Carnegie Mellon&lt;/a&gt;, and &lt;a href="https://ai.stanford.edu/"&gt;Stanford&lt;/a&gt;.&lt;br&gt;
Another idea closely associated with this era is the &lt;a href="https://en.wikipedia.org/wiki/Turing_test"&gt;Turing test&lt;/a&gt;. &lt;a href="https://en.wikipedia.org/wiki/Alan_Turing"&gt;Alan Turing&lt;/a&gt;, a British mathematician, had proposed in 1950 a test to determine whether a machine could exhibit intelligent behaviour indistinguishable from a human. The concept became a central idea in the field of AI research that grew out of the conference, and the Turing test remains an important benchmark for measuring the progress of AI research today.&lt;/p&gt;

&lt;p&gt;The Dartmouth Conference was a pivotal event in the history of AI. It established AI as a field of study, set out a roadmap for research, and sparked a wave of innovation in the field. The conference's legacy can be seen in the development of AI programming languages and research labs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Perceptron
&lt;/h2&gt;

&lt;p&gt;The Perceptron is an artificial neural network architecture designed by the psychologist &lt;a href="https://en.wikipedia.org/wiki/Frank_Rosenblatt"&gt;Frank Rosenblatt&lt;/a&gt; in 1958. It gave traction to what is famously known as the &lt;strong&gt;Brain Inspired Approach to AI&lt;/strong&gt;, in which researchers build AI systems to mimic the human brain. &lt;/p&gt;

&lt;p&gt;In technical terms, &lt;a href="https://en.wikipedia.org/wiki/Perceptron"&gt;the Perceptron&lt;/a&gt; is a binary classifier that can learn to classify input patterns into two categories. It works by taking a set of input values and computing a weighted sum of those values, followed by a threshold function that determines whether the output is 1 or 0. The weights are adjusted during the training process to optimize the performance of the classifier.&lt;/p&gt;
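
&lt;p&gt;That description translates almost directly into code. Here is a minimal sketch of a Perceptron trained with the classic learning rule on the logical AND function (the data and training schedule are chosen purely for illustration):&lt;/p&gt;

```python
def predict(weights, bias, inputs):
    # Weighted sum of the inputs, followed by a hard threshold: output 1 or 0.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=20):
    # Classic perceptron learning rule: nudge the weights by the prediction error.
    weights, bias = [0, 0], 0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            weights = [w + error * x for w, x in zip(weights, inputs)]
            bias += error
    return weights, bias

# Logical AND is linearly separable, so a single perceptron can learn it.
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(samples)
print([predict(weights, bias, x) for x, _ in samples])  # [0, 0, 0, 1]
```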

&lt;p&gt;The Perceptron was seen as a major milestone in AI because it demonstrated the potential of machine learning algorithms to mimic human intelligence. It showed that machines could learn from experience and improve their performance over time, much like humans do. The Perceptron was also significant because it was the next major milestone after the Dartmouth conference. The conference had generated a lot of excitement about the potential of AI, but it was still largely a theoretical concept. The Perceptron, on the other hand, was a practical implementation of AI that showed that the concept could be turned into a working system.&lt;/p&gt;

&lt;p&gt;The Perceptron was initially touted as a breakthrough in AI and received a lot of attention from the media. However, it was later discovered that the algorithm had limitations, particularly when it came to classifying complex data. This led to a decline in interest in the Perceptron and AI research in general in the late 1960s and 1970s.&lt;/p&gt;

&lt;p&gt;However, the Perceptron was later revived and incorporated into more complex neural networks, leading to the development of deep learning and other forms of modern machine learning. Today, the Perceptron is seen as an important milestone in the history of AI and continues to be studied and used in research and development of new AI technologies.&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI Boom of the 1960s
&lt;/h2&gt;

&lt;p&gt;As we spoke about earlier, the 1950s were a momentous decade for the AI community, due to the creation and popularisation of the Perceptron artificial neural network. The Perceptron was seen as a breakthrough in AI research and sparked a great deal of interest in the field, and that interest was a stimulant for what became known as the &lt;em&gt;AI Boom&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The AI boom of the 1960s was a period of significant progress and interest in the development of artificial intelligence (AI). It was a time when computer scientists and researchers explored new methods for creating intelligent machines and programming them to perform tasks traditionally thought to require human intelligence. As the flaws of the Perceptron came to light during the 1960s, researchers began to explore other approaches beyond it, focusing on areas such as symbolic reasoning, natural language processing, and machine learning. &lt;/p&gt;

&lt;p&gt;This research led to the development of new programming languages and tools, such as &lt;a href="https://en.wikipedia.org/wiki/Lisp_(programming_language)"&gt;LISP&lt;/a&gt; and &lt;a href="https://en.wikipedia.org/wiki/Prolog"&gt;Prolog&lt;/a&gt;, that were specifically designed for AI applications. These new tools made it easier for researchers to experiment with new AI techniques and to develop more sophisticated AI systems. During this time, the US government also became interested in AI and began funding research projects through agencies such as the &lt;a href="https://en.wikipedia.org/wiki/DARPA"&gt;Defence Advanced Research Projects Agency (DARPA)&lt;/a&gt;. This funding helped to accelerate the development of AI and provided researchers with the resources they needed to tackle increasingly complex problems.&lt;/p&gt;

&lt;p&gt;The AI boom of the 1960s culminated in the development of several landmark AI systems. One example is the &lt;a href="https://www.oreilly.com/library/view/artificial-intelligence-with/9781786464392/ch01s08.html"&gt;General Problem Solver (GPS)&lt;/a&gt;, which was created by Herbert Simon, J.C. Shaw, and Allen Newell. GPS was an early AI system that could solve problems by searching through a space of possible solutions. Another example is the &lt;a href="https://en.wikipedia.org/wiki/ELIZA"&gt;ELIZA program&lt;/a&gt;, created by Joseph Weizenbaum, which was a natural language processing program that simulated a psychotherapist. &lt;/p&gt;

&lt;p&gt;In summary, the AI boom of the 1960s was a period of significant progress in AI research and development. It was a time when researchers explored new AI approaches and developed new programming languages and tools specifically designed for AI applications. This research led to the development of several landmark AI systems that paved the way for future AI development.&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI Winter of the 1980s
&lt;/h2&gt;

&lt;p&gt;The AI Winter of the 1980s refers to a period when research and development in the field of Artificial Intelligence (AI) experienced a significant slowdown. It followed a period of significant progress; historians commonly place the AI winters roughly between 1974-1980 and 1987-1993.&lt;/p&gt;

&lt;p&gt;As discussed in the previous section, the AI boom of the 1960s was characterised by an explosion in AI research and applications, but it was followed by the AI winter of the 1980s. Many of the AI projects developed during the boom failed to deliver on their promises, and the AI research community became increasingly disillusioned with the lack of progress in the field. This led to funding cuts, and many AI researchers were forced to abandon their projects and leave the field altogether.&lt;/p&gt;

&lt;p&gt;According to the &lt;a href="https://www.actuaries.digital/2018/09/05/history-of-ai-winters/"&gt;Lighthill report&lt;/a&gt; from the UK science research commission,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI has failed to achieve its grandiose objectives and in no part of the field have the discoveries made so far produced the major impact that was then promised.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The AI Winter of the 1980s was characterised by a significant decline in funding for AI research and a general lack of interest in the field among investors and the public. This led to a significant decline in the number of AI projects being developed, and many of the research projects that were still active were unable to make significant progress due to a lack of resources.&lt;/p&gt;

&lt;p&gt;Despite the challenges of the AI Winter, the field of AI did not disappear entirely. Some researchers continued to work on AI projects and make important advancements during this time, including the development of neural networks and the beginnings of machine learning. However, progress in the field was slow, and it was not until the 1990s that interest in AI began to pick up again (we are coming to that).&lt;/p&gt;

&lt;p&gt;Overall, the AI Winter of the 1980s was a significant milestone in the history of AI, as it demonstrated the challenges and limitations of AI research and development. It also served as a cautionary tale for investors and policymakers, who realised that the hype surrounding AI could sometimes be overblown and that progress in the field would require sustained investment and commitment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Expert Systems
&lt;/h2&gt;

&lt;p&gt;Expert systems are a type of artificial intelligence (AI) technology that was developed in the 1980s. Expert systems are designed to mimic the decision-making abilities of a human expert in a specific domain or field, such as medicine, finance, or engineering. During the 1960s and early 1970s, there was a lot of optimism and excitement around AI and its potential to revolutionise various industries. However, as we discussed in the past section, this enthusiasm was dampened by a period known as the AI winter, which was characterised by a lack of progress and funding for AI research.&lt;/p&gt;

&lt;p&gt;The development of expert systems marked a turning point in the history of AI. As pressure mounted on the AI community to provide practical, scalable, robust, and quantifiable applications of Artificial Intelligence, expert systems served as proof that AI could be used in real-life systems and had the potential to provide significant benefits to businesses and industries. Expert systems were used to automate decision-making processes in various domains, from diagnosing medical conditions to predicting stock prices. &lt;/p&gt;

&lt;p&gt;In technical terms, expert systems are typically composed of a knowledge base, which contains information about a particular domain, and an inference engine, which uses this information to reason about new inputs and make decisions. Expert systems also incorporate various forms of reasoning, such as deduction, induction, and abduction, to simulate the decision-making processes of human experts.&lt;/p&gt;
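
&lt;p&gt;A toy sketch of that structure in Python: a knowledge base of if-then rules plus a forward-chaining inference engine (the medical rules here are invented purely for illustration):&lt;/p&gt;

```python
# Knowledge base: each rule maps a set of conditions to a conclusion.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts, rules):
    # Forward chaining: repeatedly fire any rule whose conditions are all
    # satisfied, adding its conclusion as a new fact, until nothing changes.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = infer({"fever", "cough", "short_of_breath"}, rules)
print("refer_to_doctor" in result)  # conclusion reached by chaining two rules
```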

&lt;p&gt;Overall, expert systems were a significant milestone in the history of AI, as they demonstrated the practical applications of AI technologies and paved the way for further advancements in the field. Today, expert systems continue to be used in various industries, and their development has led to the creation of other AI technologies, such as machine learning and natural language processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The emergence of NLPs and Computer Vision in the 1990s
&lt;/h2&gt;

&lt;p&gt;This period is when AI research and its adoption began to pick up momentum; it also marks the entry into the modern era of Artificial Intelligence. As discussed in the previous section, expert systems came into play around the late 1980s and early 1990s. However, expert systems were limited by the fact that they relied on structured data and rules-based logic. They struggled to handle unstructured data, such as natural language text or images, which are inherently ambiguous and context-dependent.&lt;/p&gt;

&lt;p&gt;To address this limitation, researchers began to develop techniques for processing natural language and visual information. In the 1970s and 1980s, significant progress was made in the development of rule-based systems for NLP and Computer Vision. However, these systems were still limited by the fact that they relied on pre-defined rules and were not capable of learning from data.&lt;/p&gt;

&lt;p&gt;In the 1990s, advances in machine learning algorithms and computing power led to the development of more sophisticated NLP and Computer Vision systems. Researchers began to use statistical methods to learn patterns and features directly from data, rather than relying on pre-defined rules. This approach, known as machine learning, allowed for more accurate and flexible models for processing natural language and visual information.&lt;/p&gt;

&lt;p&gt;One of the most significant milestones of this era was the development of the &lt;a href="https://en.wikipedia.org/wiki/Hidden_Markov_model"&gt;Hidden Markov Model&lt;/a&gt; (HMM), which allowed for probabilistic modelling of natural language text. This led to significant advances in speech recognition, language translation, and text classification. Similarly, in the field of Computer Vision, the emergence of Convolutional Neural Networks (CNNs) allowed for more accurate object recognition and image classification. These techniques are now used in a wide range of applications, from self-driving cars to medical imaging.&lt;/p&gt;
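&lt;p&gt;To make the HMM idea concrete, here is a toy Viterbi decoder: given a sequence of observations, it recovers the most likely sequence of hidden states. The two states and all probabilities below are invented for illustration, not taken from any real speech or text model.&lt;/p&gt;

```python
# Viterbi decoding for a tiny two-state Hidden Markov Model.
states = ["Rainy", "Sunny"]
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def viterbi(obs):
    """Return the most likely hidden state path for the observations."""
    # V[t][state] = (best probability of reaching state at step t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({
            s: max(
                ((V[-1][prev][0] * trans_p[prev][s] * emit_p[s][o], prev)
                 for prev in states),
                key=lambda t: t[0],
            )
            for s in states
        })
    # Backtrack from the most probable final state.
    path = [max(states, key=lambda s: V[-1][s][0])]
    for column in reversed(V[1:]):
        path.append(column[path[-1]][1])
    return list(reversed(path))

print(viterbi(["walk", "shop", "clean"]))
```

&lt;p&gt;Dynamic programming is what makes this tractable: instead of scoring every possible state sequence, each step keeps only the best path into each state.&lt;/p&gt;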

&lt;p&gt;Overall, the emergence of NLP and Computer Vision in the 1990s represented a major milestone in the history of AI, as it allowed for more sophisticated and flexible processing of unstructured data. These techniques continue to be a focus of research and development in AI today, as they have significant implications for a wide range of industries and applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Rise of Big Data
&lt;/h2&gt;

&lt;p&gt;The concept of big data has been around for decades, but its rise to prominence in the context of artificial intelligence (AI) can be traced back to the early 2000s. For completeness, let's briefly define the term Big Data.&lt;/p&gt;

&lt;p&gt;For data to be termed &lt;em&gt;big&lt;/em&gt;, it needs to fulfill three core attributes: Volume, Velocity, and Variety. &lt;br&gt;
Volume refers to the sheer size of the data set, which can range from terabytes to petabytes or even larger. Velocity refers to the speed at which the data is generated and needs to be processed. For example, data from social media or IoT devices can be generated in real-time and needs to be processed quickly. Variety refers to the diverse types of data that are generated, including structured, unstructured, and semi-structured data.&lt;/p&gt;

&lt;p&gt;Before the emergence of big data, AI was limited by the amount and quality of data that was available for training and testing machine learning algorithms. Natural language processing (NLP) and computer vision were two areas of AI that saw significant progress in the 1990s, but they were still limited by the amount of data that was available. For example, early NLP systems were based on hand-crafted rules, which were limited in their ability to handle the complexity and variability of natural language. The rise of big data changed this by providing access to massive amounts of data from a wide variety of sources, including social media, sensors, and other connected devices. This allowed machine learning algorithms to be trained on much larger datasets, which in turn enabled them to learn more complex patterns and make more accurate predictions.&lt;/p&gt;

&lt;p&gt;At the same time, advances in data storage and processing technologies, such as Hadoop and Spark, made it possible to process and analyze these large datasets quickly and efficiently. This led to the development of new machine learning algorithms, such as deep learning, which are capable of learning from massive amounts of data and making highly accurate predictions.&lt;/p&gt;

&lt;p&gt;Today, big data continues to be a driving force behind many of the latest advances in AI, from autonomous vehicles and personalised medicine to natural language understanding and recommendation systems. As the amount of data being generated continues to grow exponentially, the role of big data in AI will only become more important in the years to come.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deep Learning
&lt;/h2&gt;

&lt;p&gt;The emergence of &lt;a href="https://en.wikipedia.org/wiki/Deep_learning"&gt;Deep Learning&lt;/a&gt; is a major milestone in the globalisation of modern Artificial Intelligence. Ever since the Dartmouth Conference of the 1950s, AI has been recognised as a legitimate field of study, and the early years of AI research focused on symbolic logic and rule-based systems, which involved manually programming machines to make decisions based on a set of predetermined rules. While these systems were useful in certain applications, they were limited in their ability to learn and adapt to new data.&lt;/p&gt;

&lt;p&gt;It wasn't until after the rise of big data that deep learning became a major milestone in the history of AI. With the exponential growth of data, researchers needed new ways to process and extract insights from vast amounts of information. Deep learning algorithms provided a solution to this problem by enabling machines to automatically learn from large datasets and make predictions or decisions based on that learning.&lt;/p&gt;

&lt;p&gt;Deep learning is a type of machine learning that uses artificial neural networks, which are modelled after the structure and function of the human brain. These networks are made up of layers of interconnected nodes, each of which performs a specific mathematical function on the input data. The output of one layer serves as the input to the next, allowing the network to extract increasingly complex features from the data.&lt;/p&gt;
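&lt;p&gt;The layered structure described above can be sketched directly: each layer multiplies its inputs by weights, adds a bias, applies a nonlinearity, and feeds its output to the next layer. The weights below are hand-picked toy values rather than trained ones, so this sketch shows only the forward pass, not learning.&lt;/p&gt;

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer with a tanh activation."""
    return [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

def forward(x):
    """A two-layer network: 2 inputs, 2 hidden units, 1 output."""
    hidden = dense(x, weights=[[0.5, -0.2], [0.3, 0.8]], biases=[0.1, -0.1])
    return dense(hidden, weights=[[1.0, -1.0]], biases=[0.0])

print(forward([1.0, 2.0]))  # a single value in the range (-1, 1)
```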

&lt;p&gt;One of the key advantages of deep learning is its ability to learn hierarchical representations of data. This means that the network can automatically learn to recognise patterns and features at different levels of abstraction. For example, a deep learning network might learn to recognise the shapes of individual letters, then the structure of words, and finally the meaning of sentences.&lt;br&gt;
The development of deep learning has led to significant breakthroughs in fields such as computer vision, speech recognition, and natural language processing. For example, deep learning algorithms are now able to accurately classify images, recognise speech, and even generate realistic human-like language.&lt;/p&gt;

&lt;p&gt;In conclusion, deep learning represents a major milestone in the history of AI, made possible by the rise of big data. Its ability to automatically learn from vast amounts of information has led to significant advances in a wide range of applications, and it is likely to continue to be a key area of research and development in the years to come.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generative AI
&lt;/h2&gt;

&lt;p&gt;This is the point in the AI timeline where we currently dwell as a species. Generative AI is a subfield of artificial intelligence (AI) that involves creating AI systems capable of generating new data or content that is similar to data it was trained on. This can include generating images, text, music, and even videos.&lt;/p&gt;

&lt;p&gt;In the context of the history of AI, generative AI can be seen as a major milestone that came after the rise of deep learning. Deep learning is a subset of machine learning that involves using neural networks with multiple layers to analyse and learn from large amounts of data. It has been incredibly successful in tasks such as image and speech recognition, natural language processing, and even &lt;a href="https://techcrunch.com/2016/03/15/google-ai-beats-go-world-champion-again-to-complete-historic-4-1-series-victory"&gt;playing complex games such as Go&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Transformers, a type of neural network architecture, have revolutionised generative AI. They were introduced in &lt;a href="https://arxiv.org/abs/1706.03762"&gt;a 2017 paper by Vaswani et al.&lt;/a&gt; and have since been used in various tasks, including natural language processing, image recognition, and speech synthesis. Transformers use self-attention mechanisms to analyse the relationships between different elements in a sequence, allowing them to generate more coherent and nuanced output. This has led to the development of large language models such as GPT-4 (the model behind ChatGPT), which can generate human-like text on a wide range of topics. &lt;/p&gt;
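&lt;p&gt;The self-attention mechanism mentioned above can be sketched in plain Python: each position in a sequence attends to every other position, weighted by dot-product similarity. This is a bare-bones, single-head sketch with toy vectors; a real Transformer also applies learned query, key, and value projection matrices, which are omitted here.&lt;/p&gt;

```python
import math

def softmax(xs):
    """Normalise scores into a probability distribution."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(seq):
    """Scaled dot-product attention where Q = K = V = seq."""
    d = len(seq[0])
    out = []
    for q in seq:
        # Similarity of this position to every position in the sequence.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in seq]
        weights = softmax(scores)
        # Output is the attention-weighted average of all value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, seq))
                    for i in range(d)])
    return out

print(self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]))
```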

&lt;p&gt;AI art is another area where generative AI has had a significant impact. By training deep learning models on large datasets of artwork, generative AI can create new and unique pieces of art. The use of generative AI in art has sparked debate about the nature of creativity and authorship, as well as the ethics of using AI to create art. Some argue that AI-generated art is not truly creative because it lacks the intentionality and emotional resonance of human-made art. Others argue that AI art has its own value and can be used to explore new forms of creativity. &lt;/p&gt;

&lt;p&gt;Large language models such as GPT-4 have also been used in the field of creative writing, with some authors using them to generate new text or as a tool for inspiration. This has raised questions about the future of writing and the role of AI in the creative process. While some argue that AI-generated text lacks the depth and nuance of human writing, others see it as a tool that can enhance human creativity by providing new ideas and perspectives.&lt;/p&gt;

&lt;p&gt;In summary, generative AI, especially with the help of Transformers and large language models, has the potential to revolutionise many areas, from art to writing to simulation. While there are still debates about the nature of creativity and the ethics of using AI in these areas, it is clear that generative AI is a powerful tool that will continue to shape the future of technology and the arts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;As we have covered, the history of Artificial Intelligence has been a fascinating one, fraught with potential, anti-climaxes, and phenomenal breakthroughs. With applications like ChatGPT, DALL·E, and others, we have only just scratched the surface of the possible applications of AI, and of course its challenges. There is definitely more to come, and I implore all of us to keep an open mind: to be optimistic about what AI can do while remaining cautious about how it is used.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>datascience</category>
      <category>machinelearning</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to code in Python(using paradigms)</title>
      <dc:creator>EdemGold</dc:creator>
      <pubDate>Wed, 15 Feb 2023 23:51:39 +0000</pubDate>
      <link>https://dev.to/playfulprogramming/how-to-code-in-pythonusing-paradigms-4eo</link>
      <guid>https://dev.to/playfulprogramming/how-to-code-in-pythonusing-paradigms-4eo</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Programming Paradigms are the different approaches to solving computational problems through programming.  &lt;/p&gt;

&lt;p&gt;In this article, we will talk about programming paradigms, why they're an important part of programming, the different programming paradigms that can be applied using Python, and how to apply them. &lt;/p&gt;

&lt;h2&gt;
  
  
  Programming Paradigms
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zirg5vi90e31x83dr3t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zirg5vi90e31x83dr3t.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we delve into programming paradigms, it is crucial to understand the meaning of paradigms in their basic form, unrelated to computer science. Paradigms are essentially the models, guidelines, or patterns by which certain objectives are achieved; analogically, they can be likened to how &lt;a href="https://en.wikipedia.org/wiki/Scaffolding" rel="noopener noreferrer"&gt;scaffolding&lt;/a&gt; serves as the basic structure for buildings.&lt;/p&gt;

&lt;p&gt;Programming paradigms are the different styles in which a program can be written in a certain programming language; they are the different ways in which code in a given programming language (like Python, Java, JavaScript, etc.) can be organised.&lt;/p&gt;

&lt;p&gt;In simple words, every programming language has particular methodologies by which its code can be structured and run, and these are called programming paradigms. Some programming languages only support the use of one paradigm; these are called &lt;em&gt;single paradigm languages&lt;/em&gt;. Others support multiple paradigms; these are called &lt;em&gt;multi paradigm languages&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;There are two core classifications of Programming Paradigms, and they are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Imperative:&lt;/strong&gt; In this technique, the programmer specifies how to solve a particular problem in the code. Examples of paradigms that follow this technique include &lt;em&gt;Procedural&lt;/em&gt; and &lt;em&gt;Object Oriented Programming&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Declarative:&lt;/strong&gt; In this technique, the programmer only declares the problem to be solved without explicitly describing how it is to be solved. Examples of paradigms that follow this technique include &lt;em&gt;Functional Programming&lt;/em&gt;, &lt;em&gt;Reactive Programming&lt;/em&gt;, etc.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
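&lt;p&gt;To make the contrast between the two classifications concrete, here is the same small task, squaring the even numbers in a list, written once in each style:&lt;/p&gt;

```python
numbers = [1, 2, 3, 4, 5, 6]

# Imperative: spell out *how*, step by step, mutating a result list.
squares_imperative = []
for n in numbers:
    if n % 2 == 0:
        squares_imperative.append(n * n)

# Declarative (functional): describe *what* is wanted; no explicit loop state.
squares_declarative = list(map(lambda n: n * n,
                               filter(lambda n: n % 2 == 0, numbers)))

print(squares_imperative)   # [4, 16, 36]
print(squares_declarative)  # [4, 16, 36]
```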

&lt;h2&gt;
  
  
  Python Paradigms
&lt;/h2&gt;

&lt;p&gt;As stated earlier, every programming language has its paradigm style or styles (depending on its architecture). For example, perhaps the most popular programming language in the imperative division is C, which was designed as a single paradigm language (supporting only imperative programming styles). On the other hand, &lt;a href="https://en.wikipedia.org/wiki/Haskell" rel="noopener noreferrer"&gt;Haskell&lt;/a&gt;, while less mainstream, is a great example of a language which supports declarative programming.&lt;/p&gt;

&lt;p&gt;As stated earlier, the focus of this article will be Python and the paradigms it supports. Python is a multi paradigm language, as it supports the procedural, object-oriented, and functional paradigms. &lt;/p&gt;

&lt;p&gt;In this article we will explain Python's two most popular paradigms (Object Oriented Programming and Functional programming) and present their code implementations. &lt;/p&gt;

&lt;h3&gt;
  
  
  Functional Programming
&lt;/h3&gt;

&lt;p&gt;In simple words, functional programming is a programming paradigm which emphasizes writing code in little reusable blocks called functions, rather than changing state. A function takes in one or more inputs, performs computations on them based on a pre-written set of rules, and then returns an output without changing any of the data outside the function. It is declarative by nature. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here is an example of a simple function and its usage&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#defining the function
def add(x, y):
  ans = x+y
  return ans

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#using the function
a = 2

b = 3

ans = add(a,b)

print(ans)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Higher order functions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Higher order functions are functions which take other functions as arguments. For example, reusing the add function defined above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#defining a higher order function

def answer(func, x, y):
  res = func(x,y)
  print(res)


#using higher order functions
answer(add, a, b)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
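&lt;p&gt;Python also ships with built-in higher order functions: &lt;em&gt;map&lt;/em&gt;, &lt;em&gt;filter&lt;/em&gt;, and &lt;em&gt;functools.reduce&lt;/em&gt; each take a function as an argument, just like the answer function above:&lt;/p&gt;

```python
from functools import reduce

nums = [1, 2, 3, 4]

doubled = list(map(lambda x: x * 2, nums))        # [2, 4, 6, 8]
evens = list(filter(lambda x: x % 2 == 0, nums))  # [2, 4]
total = reduce(lambda acc, x: acc + x, nums, 0)   # 10

print(doubled, evens, total)
```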



&lt;h3&gt;
  
  
  Object Oriented Programming (OOPs)
&lt;/h3&gt;

&lt;p&gt;Object Oriented Programming is a programming paradigm which deals with code as objects, giving them properties and behaviours. For example, a house is an object with properties like color, a door, a roof, stairs, etc. &lt;/p&gt;

&lt;p&gt;In Object Oriented Programming, objects are defined with their properties and behaviours, and the interactions between the properties and behaviours of these objects are what create the logic for a computer program. &lt;/p&gt;

&lt;p&gt;Analogically, you can think of Object Oriented Programming (OOP) as &lt;a href="https://en.wikipedia.org/wiki/Lego" rel="noopener noreferrer"&gt;Lego bricks&lt;/a&gt;: combining the bricks in different ways produces different objects. Combining them one way can get you a house, and combining them another way can get you a car; in the same manner, combining the properties of objects in a certain way creates the logic in the code which generates a certain output. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvexrs920r11eu0bt05xv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvexrs920r11eu0bt05xv.jpg" alt=" " width="220" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Classes&lt;/strong&gt;&lt;br&gt;
Classes are a crucial part of Object Oriented Programming. A class is essentially a blueprint that defines the structure and behaviour of its objects; you can think of one as a blueprint for a building, in the sense that it outlines how a structure should be built.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Magic Methods&lt;/strong&gt;&lt;br&gt;
Magic methods (also called dunder methods) are used to customise the behaviour of objects in classes; their names start and end with double underscores.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Code implementation&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 # Creating the class
class Pet(object):
    """Class object for a pet."""

    def __init__(self, species, name):
        """Initialise a Pet."""
        self.species = species
        self.name = name

    def __str__(self):
        """Output when printing an instance of a Pet."""
        return f"{self.species} named {self.name}"


# Creating an instance of the class
my_dog = Pet(species="dog",
             name="Breakthrough")
print(my_dog)
print(my_dog.name)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above we created a class for &lt;em&gt;pets&lt;/em&gt; and defined the magic methods &lt;em&gt;__init__&lt;/em&gt; and &lt;em&gt;__str__&lt;/em&gt;, which handle initialising instances of the class and producing a readable string representation when printing, respectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Object Functions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Objects can also have functions which provide certain behaviour for the objects.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;code implementation&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Added inside the Pet class, alongside __init__ and __str__:
    def change_name(self, new_name):
        """Change the name of your Pet."""
        self.name = new_name

# Using the change_name method on an instance
my_dog.change_name(new_name="Chivoma")
print(my_dog)
print(my_dog.name)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above we added a method which makes it possible to change the name of the pet object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inheritance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is another really interesting feature of OOP. Inheritance makes it possible to take the behaviours and properties of one class and reuse them in another class, which makes it possible to build classes on top of each other. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;code implementation&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#creating class
class Dog(Pet):
    def __init__(self, name, breed):
        super().__init__(species="dog", name=name)
        self.breed = breed

    def __str__(self):
        return f"A {self.breed} dog named {self.name}"

#creating an instance of the class (Dog sets species itself)
Ariel = Dog(breed="Great Dane", name="Ariel")
print(Ariel)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above we created a class named &lt;em&gt;Dog&lt;/em&gt; which inherited from the &lt;em&gt;Pet&lt;/em&gt; class.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In conclusion, paradigms are crucial in designing the structure, organisation, flow, and logic of computer software, and every programming language has paradigms it supports. A language can either support one paradigm (single paradigm) or more than one (multi paradigm).&lt;/p&gt;

&lt;p&gt;Two of the most popular paradigms supported by Python are Functional Programming and Object Oriented Programming (OOP). &lt;/p&gt;

&lt;p&gt;Functional Programming deals with writing code in little reusable blocks called functions rather than changing state: a function takes in one or more inputs, performs computations on them based on a pre-written set of rules, and then returns an output without changing any of the data outside the function. It is declarative by nature.&lt;/p&gt;

&lt;p&gt;On the other hand, Object Oriented Programming deals with code as objects, giving them properties and behaviours. For example, a house is an object with properties like color, a door, a roof, stairs, etc. &lt;/p&gt;

&lt;p&gt;Due to its vast application in AI and Data Science/Engineering, Python continues to grow in popularity; as such, it is important for developers to have a solid understanding of these programming paradigms in order to write efficient and maintainable code. &lt;/p&gt;

&lt;p&gt;By embracing functional programming and OOP principles, developers can create software that is scalable, modular, and easy to maintain, ensuring that their code is well-structured, optimised, and future-proof.&lt;/p&gt;

</description>
      <category>welcome</category>
      <category>career</category>
      <category>community</category>
      <category>codenewbie</category>
    </item>
    <item>
      <title>How to code in Python (using Paradigms)</title>
      <dc:creator>EdemGold</dc:creator>
      <pubDate>Wed, 15 Feb 2023 23:50:27 +0000</pubDate>
      <link>https://dev.to/edemgold/how-to-code-in-python-using-paradigms-4plk</link>
      <guid>https://dev.to/edemgold/how-to-code-in-python-using-paradigms-4plk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Programming Paradigms are the different approaches to solving computational problems through programming.  &lt;/p&gt;

&lt;p&gt;In this article, we will talk about programming paradigms, why they're an important part of programming, the different programming paradigms that can be applied using Python, and how to apply them. &lt;/p&gt;

&lt;h2&gt;
  
  
  Programming Paradigms
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zirg5vi90e31x83dr3t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zirg5vi90e31x83dr3t.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we delve into programming paradigms, it is crucial to understand the meaning of paradigms in their basic form, unrelated to computer science. Paradigms are essentially the models, guidelines, or patterns by which certain objectives are achieved; analogically, they can be likened to how &lt;a href="https://en.wikipedia.org/wiki/Scaffolding" rel="noopener noreferrer"&gt;scaffolding&lt;/a&gt; serves as the basic structure for buildings.&lt;/p&gt;

&lt;p&gt;Programming paradigms are the different styles in which a program can be written in a certain programming language; they are the different ways in which code in a given programming language (like Python, Java, JavaScript, etc.) can be organised.&lt;/p&gt;

&lt;p&gt;In simple words, every programming language has particular methodologies by which its code can be structured and run, and these are called programming paradigms. Some programming languages only support the use of one paradigm; these are called &lt;em&gt;single paradigm languages&lt;/em&gt;. Others support multiple paradigms; these are called &lt;em&gt;multi paradigm languages&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;There are two core classifications of Programming Paradigms, and they are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Imperative:&lt;/strong&gt; In this technique, the programmer specifies how to solve a particular problem in the code. Examples of paradigms that follow this technique include &lt;em&gt;Procedural&lt;/em&gt; and &lt;em&gt;Object Oriented Programming&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Declarative:&lt;/strong&gt; In this technique, the programmer only declares the problem to be solved without explicitly describing how it is to be solved. Examples of paradigms that follow this technique include &lt;em&gt;Functional Programming&lt;/em&gt;, &lt;em&gt;Reactive Programming&lt;/em&gt;, etc.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
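&lt;p&gt;To make the contrast between the two classifications concrete, here is the same small task, squaring the even numbers in a list, written once in each style:&lt;/p&gt;

```python
numbers = [1, 2, 3, 4, 5, 6]

# Imperative: spell out *how*, step by step, mutating a result list.
squares_imperative = []
for n in numbers:
    if n % 2 == 0:
        squares_imperative.append(n * n)

# Declarative (functional): describe *what* is wanted; no explicit loop state.
squares_declarative = list(map(lambda n: n * n,
                               filter(lambda n: n % 2 == 0, numbers)))

print(squares_imperative)   # [4, 16, 36]
print(squares_declarative)  # [4, 16, 36]
```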

&lt;h2&gt;
  
  
  Python Paradigms
&lt;/h2&gt;

&lt;p&gt;As stated earlier, every programming language has its paradigm style or styles (depending on its architecture). For example, perhaps the most popular programming language in the imperative division is C, which was designed as a single paradigm language (supporting only imperative programming styles). On the other hand, &lt;a href="https://en.wikipedia.org/wiki/Haskell" rel="noopener noreferrer"&gt;Haskell&lt;/a&gt;, while less mainstream, is a great example of a language which supports declarative programming.&lt;/p&gt;

&lt;p&gt;As stated earlier, the focus of this article will be Python and the paradigms it supports. Python is a multi paradigm language, as it supports the procedural, object-oriented, and functional paradigms. &lt;/p&gt;

&lt;p&gt;In this article we will explain Python's two most popular paradigms (Object Oriented Programming and Functional programming) and present their code implementations. &lt;/p&gt;

&lt;h3&gt;
  
  
  Functional Programming
&lt;/h3&gt;

&lt;p&gt;In simple words, functional programming is a programming paradigm which emphasizes writing code in little reusable blocks called functions, rather than changing state. A function takes in one or more inputs, performs computations on them based on a pre-written set of rules, and then returns an output without changing any of the data outside the function. It is declarative by nature. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here is an example of a simple function and its usage&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#defining the function
def add(x, y):
  ans = x+y
  return ans

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#using the function
a = 2

b = 3

ans = add(a,b)

print(ans)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Higher order functions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Higher order functions are functions which take other functions as arguments. For example, reusing the add function defined above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#defining a higher order function

def answer(func, x, y):
  res = func(x,y)
  print(res)


#using higher order functions
answer(add, a, b)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
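&lt;p&gt;Python also ships with built-in higher order functions: &lt;em&gt;map&lt;/em&gt;, &lt;em&gt;filter&lt;/em&gt;, and &lt;em&gt;functools.reduce&lt;/em&gt; each take a function as an argument, just like the answer function above:&lt;/p&gt;

```python
from functools import reduce

nums = [1, 2, 3, 4]

doubled = list(map(lambda x: x * 2, nums))        # [2, 4, 6, 8]
evens = list(filter(lambda x: x % 2 == 0, nums))  # [2, 4]
total = reduce(lambda acc, x: acc + x, nums, 0)   # 10

print(doubled, evens, total)
```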



&lt;h3&gt;
  
  
  Object Oriented Programming (OOPs)
&lt;/h3&gt;

&lt;p&gt;Object Oriented Programming is a programming paradigm which deals with code as objects, giving them properties and behaviours. For example, a house is an object with properties like color, a door, a roof, stairs, etc. &lt;/p&gt;

&lt;p&gt;In Object Oriented Programming, objects are defined with their properties and behaviours, and the interactions between the properties and behaviours of these objects are what create the logic for a computer program. &lt;/p&gt;

&lt;p&gt;Analogically, you can think of Object Oriented Programming (OOP) as &lt;a href="https://en.wikipedia.org/wiki/Lego" rel="noopener noreferrer"&gt;Lego bricks&lt;/a&gt;: combining the bricks in different ways produces different objects. Combining them one way can get you a house, and combining them another way can get you a car; in the same manner, combining the properties of objects in a certain way creates the logic in the code which generates a certain output. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvexrs920r11eu0bt05xv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvexrs920r11eu0bt05xv.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Classes&lt;/strong&gt;&lt;br&gt;
Classes are a crucial part of Object Oriented Programming. A class is essentially a blueprint that defines the structure and behaviour of its objects; you can think of one as a blueprint for a building, in the sense that it outlines how a structure should be built.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Magic Methods&lt;/strong&gt;&lt;br&gt;
Magic methods (also called dunder methods) are used to customise the behaviour of objects in classes; their names start and end with double underscores.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Code implementation&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 # Creating the class
class Pet(object):
    """Class object for a pet."""

    def __init__(self, species, name):
        """Initialise a Pet."""
        self.species = species
        self.name = name

    def __str__(self):
        """Output when printing an instance of a Pet."""
        return f"{self.species} named {self.name}"


# Creating an instance of the class
my_dog = Pet(species="dog",
             name="Breakthrough")
print(my_dog)
print(my_dog.name)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above we created a class for &lt;em&gt;pets&lt;/em&gt; and defined the magic methods &lt;em&gt;__init__&lt;/em&gt; and &lt;em&gt;__str__&lt;/em&gt;, which handle initialising instances of the class and producing a readable string representation when printing, respectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Object Functions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Objects can also have functions which provide certain behaviour for the objects.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;code implementation&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Added inside the Pet class, alongside __init__ and __str__:
    def change_name(self, new_name):
        """Change the name of your Pet."""
        self.name = new_name

# Using the change_name method on an instance
my_dog.change_name(new_name="Chivoma")
print(my_dog)
print(my_dog.name)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above we added a method which makes it possible to change the name of the pet object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inheritance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is another really interesting feature of OOP. Inheritance makes it possible to take the behaviours and properties of one class and reuse them in another class, which makes it possible to build classes on top of each other. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;code implementation&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#creating class
class Dog(Pet):
    def __init__(self, name, breed):
        super().__init__(species="dog", name=name)
        self.breed = breed

    def __str__(self):
        return f"A {self.breed} dog named {self.name}"

#creating an instance of the class
Ariel = Dog(species="dog", breed="Great Dane", name="Ariel")
print (Ariel)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above we created a class named &lt;em&gt;Dog&lt;/em&gt; which inherited from the &lt;em&gt;Pet&lt;/em&gt; class.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In conclusion, paradigms are crucial in designing the structure, organisation, flow, and logic of computer software, and every programming language supports one or more of them. A language can support a single paradigm (single-paradigm) or more than one (multi-paradigm).&lt;/p&gt;

&lt;p&gt;Two of the most popular paradigms supported by Python are Functional Programming and Object Oriented Programming (OOP). &lt;/p&gt;

&lt;p&gt;Functional Programming deals with writing code in little reusable blocks called functions rather than through changing state: a function takes in one or more inputs, performs computation on them based on a pre-written set of rules, and then returns an output without changing any of the data outside the function. It is declarative by nature.&lt;/p&gt;
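&lt;p&gt;As a minimal sketch (the function and variable names here are invented for illustration), this is what a pure function looks like in Python:&lt;/p&gt;

```python
# A pure function: its output depends only on its inputs,
# and it does not change any data outside itself.
def add_bonus(scores, bonus):
    """Return a new list with the bonus added to each score."""
    return [score + bonus for score in scores]

scores = [70, 85, 90]
new_scores = add_bonus(scores, 5)
print(new_scores)  # [75, 90, 95]
print(scores)      # [70, 85, 90] -- the original list is untouched
```

&lt;p&gt;The function returns a new list instead of mutating the one passed in, which is exactly the "no changing of states" idea described above.&lt;/p&gt;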

&lt;p&gt;On the other hand, Object Oriented Programming treats code as objects, giving it properties and behaviours. For example, a house is an object with properties like a colour, a door, a roof, stairs, etc. &lt;/p&gt;

&lt;p&gt;Due to its vast application in AI and Data Science/Engineering, Python continues to grow in popularity, as such, it is important for developers to have a solid understanding of these programming paradigms in order to write efficient and maintainable code. &lt;/p&gt;

&lt;p&gt;By embracing functional programming and OOP principles, developers can create software that is scalable, modular, and easy to maintain, ensuring that their code is well-structured, optimised, and future-proof.&lt;/p&gt;

</description>
      <category>python</category>
      <category>programming</category>
      <category>machinelearning</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Data Science vs Data Engineering</title>
      <dc:creator>EdemGold</dc:creator>
      <pubDate>Thu, 19 Jan 2023 07:48:05 +0000</pubDate>
      <link>https://dev.to/playfulprogramming/what-sets-data-science-and-data-engineering-apart-a-guide-2dc3</link>
      <guid>https://dev.to/playfulprogramming/what-sets-data-science-and-data-engineering-apart-a-guide-2dc3</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"Data is the new oil. It’s valuable, but if unrefined it cannot really be used." -Clive Humby&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I recently became very interested in Data Science and Data Engineering: how they compare and how they complement each other. I initially assumed Data Engineering was a subset of Data Science, but after extensive research I found out just how much the two fields differ. &lt;/p&gt;

&lt;p&gt;In this article, I hope to discuss the differences between Data Science and Data Engineering, and the ways they complement each other. &lt;/p&gt;

&lt;h2&gt;
  
  
  Data
&lt;/h2&gt;

&lt;p&gt;To fully understand the relationship between Data Science and Data Engineering, you have to understand the one thing that links them both; Data. &lt;/p&gt;

&lt;p&gt;Data is a word that has become commonplace in today's society, with so many reports of &lt;a href="https://www.statista.com/statistics/1307426/number-of-data-breaches-worldwide" rel="noopener noreferrer"&gt;data leaks&lt;/a&gt;, &lt;a href="https://www.security.org/resources/data-tech-companies-have/" rel="noopener noreferrer"&gt;the inappropriate collection of data by big tech companies&lt;/a&gt;, and so on.&lt;/p&gt;

&lt;p&gt;Data is information that is collected and stored in a format that can be processed by a computer. It can be in various forms such as numbers, text, images, and videos, and it can be collected, stored, and analyzed to extract insights and inform decisions. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now why do so many companies want data and what's so special about it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Data is important to companies because it allows them to make informed decisions about their operations and strategies. By analyzing data, companies can gain insights into the behaviour of their users, and those insights can then be used to make their products far more efficient and useful. &lt;/p&gt;

&lt;p&gt;Data scientists and engineers are the people responsible for collecting the data, making it useful, analysing it, gaining insights &amp;amp; trends from it, and passing on the information mined to the management in order to permit informed decision making. Now let's see how they differ.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Science
&lt;/h2&gt;

&lt;p&gt;Data Science was named &lt;em&gt;The Sexiest Job of the 21st Century&lt;/em&gt; by the &lt;a href="https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century" rel="noopener noreferrer"&gt;Harvard Business Review&lt;/a&gt;, and its claim to the title is arguably legitimate. &lt;/p&gt;

&lt;p&gt;Data Science is the process of using scientific methods, algorithms, and systems to analyse and extract value from data. &lt;/p&gt;

&lt;p&gt;In other words, the data scientist is the individual responsible for gaining insights from data and making abstract mathematical models from the data in order to enable prediction.&lt;/p&gt;

&lt;p&gt;Now let us look at the data engineer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Engineering
&lt;/h2&gt;

&lt;p&gt;Data Engineering is the process of designing, constructing and maintaining the pipelines and infrastructure that collect, store, process and analyze data. &lt;/p&gt;

&lt;p&gt;The Data Engineer is the individual responsible for ensuring that the data Data Scientists need to analyse and gain insights from is available in the right, accurate format. &lt;/p&gt;

&lt;p&gt;Data is infuriatingly complex and disordered when it is first collected, so for Data Scientists to efficiently gain insights from it, the data needs to be pre-processed. Once insights have been made, Data Scientists formulate an abstract mathematical model from the data, commonly known as a &lt;a href="https://learn.microsoft.com/en-us/windows/ai/windows-ml/what-is-a-machine-learning-model" rel="noopener noreferrer"&gt;Machine Learning Model&lt;/a&gt;, and this abstraction needs to be post-processed in order to be deployed and integrated into the product. All of the surrounding tasks described here are performed by data engineers.&lt;/p&gt;

&lt;h2&gt;
  
  
  An analogy to describe the relationship between the Data Scientist and the Data Engineer
&lt;/h2&gt;

&lt;p&gt;Imagine you placed a bet with a friend on the outcome of a football game but you wanted to cut out the luck factor, that is ever so present in uninformed guesses, and be extremely sure that the team of your choice wins the game and you win the bet.&lt;/p&gt;

&lt;p&gt;A data engineer would collect the data on the two teams involved in the bet, data points such as &lt;em&gt;number of games won, possession rate per game, and results of previous clashes between the two teams&lt;/em&gt;, and create an ETL pipeline in which that data is collected, cleaned, and stored for the data scientist. &lt;/p&gt;
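&lt;p&gt;As a toy sketch of such an ETL pipeline (the records, teams, and function names are all invented for illustration):&lt;/p&gt;

```python
# Toy ETL pipeline: extract raw match records, transform them into
# clean numeric features, and load them somewhere the data scientist
# can reach them. All names and numbers here are invented.

raw_records = [  # "extract": pretend this came from a sports API
    {"team": "Team A", "games_won": "12", "possession_rate": "58%"},
    {"team": "Team B", "games_won": "9", "possession_rate": "47%"},
]

def transform(record):
    """Clean one raw record into numeric features."""
    return {
        "team": record["team"],
        "games_won": int(record["games_won"]),
        "possession_rate": float(record["possession_rate"].rstrip("%")) / 100,
    }

def load(records, store):
    """Load cleaned records into a simple in-memory store."""
    for record in records:
        store[record["team"]] = record

store = {}
load((transform(r) for r in raw_records), store)
print(store["Team A"])  # {'team': 'Team A', 'games_won': 12, 'possession_rate': 0.58}
```

&lt;p&gt;In a real project, the extract step would pull from an API or database and the load step would write to a data warehouse, but the extract-transform-load shape stays the same.&lt;/p&gt;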

&lt;p&gt;The Data Scientist would then perform something called &lt;em&gt;Predictive Analysis&lt;/em&gt; using Machine Learning. This means the data scientist would feed the data prepared by the data engineer into an algorithm, which generates a mathematical abstraction called a &lt;em&gt;Machine Learning model&lt;/em&gt;. The Machine Learning model then predicts the team expected to win, and just like that your guess becomes less of a guess and more of a data-informed decision.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;As you can, hopefully, extrapolate from the descriptions above, a Data Scientist is like a star football player, and the Data Engineer is like his very talented coach, who keeps him fit and provides him with the tactics to win a game.&lt;/p&gt;

</description>
      <category>watercooler</category>
    </item>
    <item>
      <title>Aviyel: Building Tools for Open Source Communities</title>
      <dc:creator>EdemGold</dc:creator>
      <pubDate>Thu, 13 Oct 2022 15:50:24 +0000</pubDate>
      <link>https://dev.to/playfulprogramming/aviyel-building-tools-for-open-source-communities-bpe</link>
      <guid>https://dev.to/playfulprogramming/aviyel-building-tools-for-open-source-communities-bpe</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"Open Source projects build communities, startups build teams." -Jacob Pattara&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;As anyone who has ever maintained or contributed to an Open Source project can tell you, open source communities are the lifeblood of Open Source projects, but sustaining them isn't easy: organizing meetups/events with community members, rewarding outstanding contributions, automating certain tasks/workflows, and so on.&lt;/p&gt;

&lt;p&gt;The work that goes into maintaining communities is why most Open Source projects never achieve their full potential, and the fear of this work is why many people are reluctant to create Open Source projects in the first place.&lt;/p&gt;

&lt;p&gt;This article talks about a group of people who have discovered this unique problem and hope to solve it by building and maintaining tools that help make the task of sustaining and scaling Open Source communities easier. &lt;/p&gt;

&lt;h1&gt;
  
  
  Open Source
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Meaning of Open Source software
&lt;/h2&gt;

&lt;p&gt;Open-source software is a term that refers to publicly available source code that can be modified, deleted, added, or changed by any entity with the necessary knowledge.&lt;/p&gt;

&lt;p&gt;In other words, Open source software is software that individuals can alter and share, since its architecture (code) is publicly accessible to everyone under the terms of a licensing agreement.&lt;br&gt;
 [&lt;a href="https://www.michaelasiedu.com/open-source-software-the-art-the-beauty-and-the-science" rel="noopener noreferrer"&gt;definition credits&lt;/a&gt;]&lt;/p&gt;

&lt;h2&gt;
  
  
  Meaning of Open Source communities
&lt;/h2&gt;

&lt;p&gt;An open-source community is a collection of like-minded software developers who have decided, in essence, that two heads are better than one and believe that anyone with the required skillsets &amp;amp; knowledge should be given the opportunity to build, maintain, and contribute to any software tool that they deem suitable. &lt;/p&gt;

&lt;h1&gt;
  
  
  Aviyel
&lt;/h1&gt;

&lt;h2&gt;
  
  
  What is Aviyel?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviyel.com/" rel="noopener noreferrer"&gt;Aviyel&lt;/a&gt; is a startup focused on trying to solve the problem of building sustainable OpenSource communities.  Building, maintaining, and scaling Open Source communities is not an easy task, Aviyel wants to solve this by building tools that make it easier to efficiently build &amp;amp; sustain Open Source communities. &lt;/p&gt;

&lt;h2&gt;
  
  
  Tools built for Open Source communities
&lt;/h2&gt;

&lt;p&gt;Below we are going to talk in detail about the tools built by Aviyel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Event Platform
&lt;/h3&gt;

&lt;p&gt;This is a basic video conferencing platform built specifically for hosting events for Open Source communities. &lt;/p&gt;

&lt;p&gt;The platform contains a section for;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;Questions &amp;amp; answers:&lt;/strong&gt; Here people viewing an event can ask the speaker questions related to the event being viewed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Polls:&lt;/strong&gt; Here the host can ask and put up specific questions as polls for the viewers to respond to, it's a great way for the event host to know the views of viewers on a particular topic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Public chat forum:&lt;/strong&gt; This is a basic chat forum where viewers can interact publicly and freely with one another while the event is going on.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Workflow
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foj4fdv965p8nydrg9uhh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foj4fdv965p8nydrg9uhh.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;br&gt;
For those unaware of the meaning of workflow, a workflow is a configurable automated process that will run one or more jobs. In simple words, workflows help you automate the little things like commenting on a pull request, thanking community members for issues raised, etc.&lt;/p&gt;

&lt;p&gt;The workflows currently available on Aviyel are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Send Comment: Pull Request&lt;/strong&gt;: This enables the Aviyel Bot to send thank you messages to contributors for opening and working on a Pull Request. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Send Comment: Issue&lt;/strong&gt;: This enables you to send a word of encouragement to the folks who have identified and raised issues in your repositories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Thread Opener&lt;/strong&gt;: This allows you to open a thread each time a conversation needs to be started for a comment on any of your Slack Channels. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Suggesting new workflows:&lt;/strong&gt; If you feel you have a workflow that's not available then you can suggest a workflow and it will be worked on by the Aviyel DevTeam.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Community Dashboard
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdblh89txefpt0c66flsz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdblh89txefpt0c66flsz.png" alt=" " width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The community dashboard provides community members with visual details about the project. Below are a few data points displayed on the dashboard; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Top Contributors:&lt;/strong&gt; As implied, this shows a ranked list of contributors with the highest number of merged Pull Requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Active members:&lt;/strong&gt; This shows the number of members that have been actively contributing to the project within a chosen period (either a week, a month, or a year).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Members:&lt;/strong&gt; This provides an easy-to-view glance at the members of the organization and their statistics.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Reward platform
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlawajewkr8j40ah51wd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlawajewkr8j40ah51wd.png" alt=" " width="800" height="391"&gt;&lt;/a&gt;&lt;br&gt;
Acknowledging and incentivizing contributors is a brilliant way to say thank you and nudge contributors to keep contributing to your projects, but sending a personal email to every contributor after merging their Pull Request, while giving that rare personal touch, is inefficient. &lt;/p&gt;

&lt;p&gt;Aviyel tries to solve this by building pipelines that collect publicly available data from projects, such as &lt;em&gt;top contributors, contributions merged, and the number of contributions per contributor&lt;/em&gt;, and assigns contributors seeds for each meaningful contribution (what counts as "a meaningful contribution" is determined by the project maintainers). It also allows you to choose from a theme of badges available on the platform, which can be tailored by the maintainers to match the project's brand identity (say you have a purple logo, so your rewards are purple as well).&lt;/p&gt;

&lt;p&gt;Another cool thing is that Aviyel is working to make this platform independent, i.e. it can be used anywhere outside the Aviyel platform itself. The badges are built using &lt;a href="https://thedefiant.io/vitalik-soulbound-tokens" rel="noopener noreferrer"&gt;soulbound tokens&lt;/a&gt; (a new kind of token standard) and published on the blockchain, so even if Aviyel goes out of business, the badges will remain. You will also soon be able to carry these rewards to other platforms like GitHub, Slack, Discord, etc.&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;As a species, we only started topping the food chain when early humans began forming hunting groups and then communities; in other words, we have proved that we are far more successful and efficient as a species together than as singular individuals. &lt;/p&gt;

&lt;p&gt;Open Source projects provide an avenue for us (builders of the digital world) to showcase what we can do together and sate that innate &amp;amp; primal need to collaborate, but a huge hurdle to that is all the work required to build and support these communities. &lt;a href="https://aviyel.com/discussions" rel="noopener noreferrer"&gt;Aviyel&lt;/a&gt; recognized this and is building tools to help ease the gruelling process of building open source communities. &lt;/p&gt;

&lt;p&gt;In the same way early humans required stones, knives, bows &amp;amp; arrows, ships, fire, etc to make their communities prosper, we as open-source developers require tools to make our communities prosper as well. Aviyel is building those tools.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>startup</category>
      <category>tools</category>
      <category>programming</category>
    </item>
    <item>
      <title>Machine Learning in a Nutshell</title>
      <dc:creator>EdemGold</dc:creator>
      <pubDate>Sat, 08 Oct 2022 01:29:43 +0000</pubDate>
      <link>https://dev.to/edemgold/machine-learning-in-a-nutshell-5dl5</link>
      <guid>https://dev.to/edemgold/machine-learning-in-a-nutshell-5dl5</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"Machine Intelligence will be the last invention humanity will ever have to make." -Nick Bostrom&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  SUMMARY
&lt;/h2&gt;

&lt;p&gt;Machine Learning is simply the act of using data to teach computers how to do something without explicitly programming them.&lt;/p&gt;

&lt;p&gt;There are times when using Machine Learning algorithms will make all the difference, and times when using Machine Learning will have less than desirable consequences.&lt;/p&gt;

&lt;p&gt;Machine Learning systems are broadly divided into categories depending on what they do and how they perform actions. If you've ever wondered what Machine Learning is, when it's best to use it in your project, and what makes up a Machine Learning system, join me and let's find out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Meaning
&lt;/h2&gt;

&lt;p&gt;I'll give you two meanings for the term Machine Learning: the easy one and the complicated one (the one you use when you want to seem like you actually know what you're talking about ;) ).&lt;/p&gt;

&lt;h3&gt;
  
  
  Easy One
&lt;/h3&gt;

&lt;p&gt;Machine learning is simply the use of data to teach computers so they can perform operations without you programming them to do so.&lt;/p&gt;

&lt;h3&gt;
  
  
  Complicated One
&lt;/h3&gt;

&lt;p&gt;A computer program is said to learn from experience E with respect to some task T and some performance measure P if its performance on T, as measured by P, improves with experience E. If that happens, Machine Learning is said to have taken place.&lt;/p&gt;

&lt;h2&gt;
  
  
  Moments when Machine Learning shines
&lt;/h2&gt;

&lt;p&gt;Machine Learning is great and all, but there are times when using it will bring disadvantages rather than advantages. I won't go into that here, but if you're interested you should check out this article here. Instead, I am going to focus on the moments when using Machine Learning will make a huge difference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problems for which existing solutions require a lot of hand-tuning or a long list of rules&lt;/strong&gt;: Have you ever had a project where you had to give the computer a long list of handwritten rules for what it is meant to do? A good example of this is a spam filter: imagine building a system where you have to hand-code the exact types of spam the system has to look out for. Well, with Machine Learning you don't have to write long lists of rules; just give the machine data and it does the rest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Complex problems for which there is no good solution at all using traditional approaches&lt;/strong&gt;: Machine Learning particularly shines in problems where the old way of doing things just doesn't work optimally. Let's take game playing, for example: could you imagine the number of if/else statements you'd have to write before a computer could win a game of chess without Machine Learning?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fluctuating environments&lt;/strong&gt;: Imagine a setting with many different scenarios and states. Take a game of chess, for example, where there are 400 possible board positions after each player's first move and 197,742 possible games after the second. Machine Learning is perfect for a case like that because of its ability to adapt to new data, which in this case is the opponent's move.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting insights from complex problems and large amounts of data&lt;/strong&gt;: We all know about the ability of Machine Learning to make predictions based on past events, but did you know a Machine Learning algorithm can actually tell you things you didn't know about your data? For instance, say you own a huge retail store and keep data about your clients (these records are called instances), such as the times they purchase a lot (like during summer) and the types of goods they purchase (these are called features or attributes). With a simple clustering algorithm like k-means you could group your customers based on their purchase behaviour and discover new things about them from their data. You could discover that a particular customer is pregnant because, for the past few weeks, your data shows she's been buying a lot of baby products, and you could send her a catalog of your new baby products and actually make her interested. This is a blueprint of what Google does with its targeted ads.&lt;/p&gt;
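&lt;p&gt;As a hedged sketch of that clustering idea (the customers and numbers are made up, and a real project would reach for a library like scikit-learn), here is a bare-bones k-means loop grouping customers by a single feature:&lt;/p&gt;

```python
# Bare-bones k-means on one feature: group customers into 2 clusters
# by how many purchases they make per month (numbers invented).
purchases = {"Ada": 2, "Ben": 3, "Cara": 21, "Dan": 25, "Eve": 1}

centroids = [1.0, 25.0]  # initial guesses for the two cluster centres
for _ in range(10):      # a few refinement rounds is plenty here
    clusters = [[], []]
    for name, count in purchases.items():
        # assign each customer to the nearest centroid
        nearest = min(range(2), key=lambda i: abs(count - centroids[i]))
        clusters[nearest].append((name, count))
    # move each centroid to the mean of its assigned customers
    # (with this data, neither cluster ever ends up empty)
    centroids = [sum(c for _, c in group) / len(group) for group in clusters]

low_spenders = sorted(name for name, _ in clusters[0])
high_spenders = sorted(name for name, _ in clusters[1])
print(low_spenders)   # ['Ada', 'Ben', 'Eve']
print(high_spenders)  # ['Cara', 'Dan']
```

&lt;p&gt;Nobody told the algorithm who the big spenders were; it discovered the two groups from the data alone.&lt;/p&gt;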

&lt;h2&gt;
  
  
  Machine Learning Systems
&lt;/h2&gt;

&lt;p&gt;Machine Learning systems are divided into 3 broad categories, based on things like how they take in data and how they learn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Whether or not they are trained with human supervision ( &lt;em&gt;Supervised Learning, Unsupervised Learning, and Reinforcement Learning&lt;/em&gt; ).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Whether or not they can learn incrementally or on the fly( &lt;em&gt;Online vs Batch Learning&lt;/em&gt; )&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Whether they work by simply comparing new data points to known data points, or whether they detect patterns in the training data and build a predictive model, much like scientists do ( &lt;em&gt;Instance-Based Learning versus Model-Based Learning&lt;/em&gt; )&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
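&lt;p&gt;To give a feel for the instance-based approach from the last point, here is a minimal 1-nearest-neighbour classifier (the data points are invented for illustration), which predicts by simply comparing a new point to the known points:&lt;/p&gt;

```python
# Instance-based learning in miniature: classify a new point by
# finding the single closest known (training) point.
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((5.5, 4.5), "dog"),
]

def predict(point):
    """Return the label of the nearest training point (1-NN)."""
    def squared_distance(example):
        (x, y), _label = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    _coords, label = min(training_data, key=squared_distance)
    return label

print(predict((1.1, 0.9)))  # cat
print(predict((4.8, 5.2)))  # dog
```

&lt;p&gt;A model-based learner would instead fit a predictive model to the training data and throw the individual points away.&lt;/p&gt;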

</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>python</category>
    </item>
    <item>
      <title>Neuralink: Why should I let them put a chip in my head?</title>
      <dc:creator>EdemGold</dc:creator>
      <pubDate>Sat, 08 Oct 2022 01:28:15 +0000</pubDate>
      <link>https://dev.to/edemgold/neuralink-why-should-i-let-them-put-a-chip-in-my-head-4n35</link>
      <guid>https://dev.to/edemgold/neuralink-why-should-i-let-them-put-a-chip-in-my-head-4n35</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"We’re designing the first neural chip implant that will let you control a computer or mobile device anywhere you go." -Neuralink&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;BMI stands for Brain-Machine Interface.&lt;/li&gt;
&lt;li&gt;BMI is simply the ability of the brain to try and communicate with a computer using a connection or a chip.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://neuralink.com/"&gt;Neuralink&lt;/a&gt; is an American company that is involved in building advanced Brain-Machine Interface Technology.&lt;/li&gt;
&lt;li&gt;The Neuralink Chip is put into the brain using an automated precision robot so it is extremely safe.&lt;/li&gt;
&lt;li&gt;Neuralink puts a chip in your brain so it can efficiently monitor waves generated by your brain.&lt;/li&gt;
&lt;li&gt;Neuralink has an app that allows you to manipulate your phone using your brain by thinking about it.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;We've all probably watched movies like The Matrix, Terminator, and I, Robot, where there's always a crazy smart guy who creates a smart computer and eventually can't control it anymore. Then the robot sees humans and decides the best way to solve the problems humans pose is to kill every human on earth (like that will ever work). &lt;/p&gt;

&lt;p&gt;But luckily along comes some ordinary guy whose life is suckish at the beginning of the movie. Then suddenly the ordinary guy 'amazingly' gets to know that the computers want to kill humans. He has the responsibility to save the world (sometimes he gets powers), and so, with some trepidation, he goes after the robots with a suckish plan. He almost fails because of the suckish plan and the robots nearly win, but then the hero discovers his inner strength, saves the world, and gets to kiss the hot girl (lucky dude) he would never have been able to kiss if he was still the ordinary guy. &lt;/p&gt;

&lt;p&gt;The movie ends with the guy becoming more sure of himself, and, well, his life changes (and all this happens within 2hrs, cool huh). These days, the stuff we only thought happened in movies is happening in real life, from self-driving cars and robot dogs to Siri, Google Assistant, and AI generally. Let's be honest, we're all a bit scared that our devices are going to come after us someday, the same way they do in the movies (I mean, after Terminator, who wouldn't be). But I'm hoping that after reading all this you'll know your technology will not kill you, and I'll be doing that with something called BMIs.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;But First&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a BMI?
&lt;/h2&gt;

&lt;p&gt;BMI is an acronym that stands for Brain-Machine Interface, also known as Brain-Computer Interface (BCI). &lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface"&gt;Wikipedia&lt;/a&gt;, a Brain-Machine Interface (BMI) "is a direct communication pathway between an enhanced or wired brain and an external device". In simple words, a BMI is a machine's ability to talk to and understand commands from the human brain thanks to a connection between the two; in other words, it is the ability to talk to your computer using your brain.&lt;/p&gt;

&lt;p&gt;Now we are going to talk about a very popular BMI company, Neuralink: what they do, how they do it, and why.&lt;/p&gt;

&lt;h1&gt;
  
  
  Neuralink
&lt;/h1&gt;

&lt;h3&gt;
  
  
  What is Neuralink?
&lt;/h3&gt;

&lt;p&gt;Neuralink is an American BMI company, meaning they create Brain-Machine Interface systems. The company was made popular by its association with Elon Musk (yup, the billionaire, Elon Musk). According to the &lt;a href="https://neuralink.com/"&gt;company's website&lt;/a&gt;, Neuralink is a team of exceptionally talented people creating the future of brain interfaces.&lt;/p&gt;

&lt;p&gt;To fully understand Neuralink's mission, let's try to get a relatively simple grasp of how the Human Brain works.&lt;/p&gt;

&lt;h3&gt;
  
  
  How the Brain Works
&lt;/h3&gt;

&lt;p&gt;The brain is made up of nerve cells called neurons, which are responsible for processing information. There are many types of neurons, but all neurons generally have 3 parts: dendrites, a soma, and an axon. The dendrites are responsible for receiving electrical signals. The soma is responsible for computing/processing those electrical signals. The axon is responsible for passing the computed electrical signals on to the dendrites of the next neuron in the chain.&lt;/p&gt;

&lt;p&gt;The brain communicates information through electrical signals. It is made up of roughly 86 billion neurons, and when processing a particular event it uses a chain of neurons firing (each firing is called an action potential). For instance, when you see a car coming towards you, your eyes immediately send electrical signals to your brain through the optic nerve, and the brain forms a chain of neurons to make sense of the incoming signal. The first neuron in the chain collects the signal through its dendrites and sends it to the soma to be processed; after the soma finishes its task, it sends the signal down the axon, which passes it on to the dendrites of the next neuron in the chain. The connection across which an axon passes information to a dendrite is called a synapse. &lt;/p&gt;

&lt;p&gt;So the entire process continues until the brain settles on the right spatiotemporal pattern of synaptic inputs (that's scientific lingo for: the brain keeps processing until it finds an optimal response to the signal sent to it) and then it sends signals to the necessary effectors, e.g. your legs, telling them to run away from the oncoming car.&lt;/p&gt;

&lt;p&gt;Now that we've understood how the human brain works, let us try to understand why Neuralink's BMI requires brain implants.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Does Neuralink Want to Put A Chip into our Heads
&lt;/h3&gt;

&lt;p&gt;Now, you must understand that for Neuralink's BMI to work, they don't actually need to put the chip inside your head, but doing so makes the BMI technology far more effective. Placing a chip outside the head is like watching a football game from outside the stadium: you'd hear the noise from the crowd and know when something good happens, but you wouldn't be able to tell exactly how a player scored or missed. So the chip goes into the human head for no other reason than the clarity it gives. For the chip to efficiently decode the electrical signals the brain uses to communicate and send them to a computer, it has to receive the exact waves being generated by the brain, and this cannot be achieved efficiently when the chip is placed outside the brain.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Neuralink Puts their Chip Into our Brains
&lt;/h3&gt;

&lt;p&gt;The N1 chip itself is small (roughly 23 millimeters in diameter), and its electrode threads are far thinner than a human hair, so it is practically impossible for them to be placed into the brain by human hands. So the folks at Neuralink (remember, they are brilliant) built an automated precision robot (pictured above) that surgically implants the chip and its threads into the brain. Now you might be wondering how many chips you would need: a single Neuralink chip can interface with on the order of a thousand electrode channels, and we have billions of brain cells, so you do the math.&lt;/p&gt;

&lt;p&gt;Well, let's check our list: we've understood why Neuralink has to put a chip in your head, and we've seen how they put it there. Now, the fun part.&lt;/p&gt;

&lt;h3&gt;
  
  
  How the Neuralink Chip Facilitates Brain-to-Computer Communication
&lt;/h3&gt;

&lt;p&gt;The Neuralink chip is placed into the parts of the brain that control movement. The chip contains tiny fiber threads called neural threads, and each thread carries many electrodes. The electrodes are responsible for picking up electrical signals from the brain and passing them on to the chip, which then turns those electrical signals into binary data that a computer can understand.&lt;/p&gt;

&lt;p&gt;For instance, if you have a Neuralink chip implanted and you want to move your computer's mouse cursor, all you have to do is think about moving your arm. Your brain sends electrical signals towards your hand, and in the process the electrodes on the N1 chip's threads pick those signals up. The chip converts them into binary data, sends that data to your computer over a wireless connection, and your mouse cursor moves. &lt;/p&gt;
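&lt;p&gt;The electrical-signal-to-binary step described above is, at its core, analog-to-digital conversion. Here is a purely illustrative sketch; the voltage range, 10-bit resolution, and sampling rate are assumptions made for the example, not Neuralink's actual specifications:&lt;/p&gt;

```python
import math

# Toy analog-to-digital conversion: sample a made-up "neural" voltage
# waveform and quantize each sample into a 10-bit binary word.
# The voltage range, resolution, and sampling rate are all assumptions.
V_MIN, V_MAX = -0.2, 0.2  # assumed electrode voltage range (millivolts)
BITS = 10                 # assumed converter resolution
LEVELS = 2 ** BITS - 1

def sample_signal(t):
    """A stand-in for the voltage an electrode picks up at time t."""
    return 0.1 * math.sin(2 * math.pi * 300 * t)  # a 300 Hz toy oscillation

def quantize(voltage):
    """Map a voltage in [V_MIN, V_MAX] onto a 10-bit binary string."""
    level = round((voltage - V_MIN) / (V_MAX - V_MIN) * LEVELS)
    return format(level, f"0{BITS}b")

# Sample at an assumed 20 kHz, the way a chip might stream readings out
words = [quantize(sample_signal(n / 20_000)) for n in range(4)]
print(words)  # four 10-bit words; the first is '1000000000' (mid-scale)
```

&lt;p&gt;Each 10-bit word is one snapshot of the voltage on one electrode; a real implant would stream thousands of these per second, per channel, to the computer.&lt;/p&gt;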

&lt;p&gt;The same thing happens when you want to charge your chip: the charger is compact and inductive (that just means it can charge your chip wirelessly while it's in your head!), so it connects to the chip wirelessly and charges it from the outside (cool, huh?).&lt;/p&gt;

&lt;p&gt;This is all really cool and revolutionary technology which, undoubtedly, has innumerable advantages, but, let's get a little bit philosophical.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does having a Chip in our brains Help Humanity
&lt;/h3&gt;

&lt;p&gt;The N1 chip could help humanity in a lot of ways. Picture for a moment a person who is paralyzed, deaf, or blind, and who can't join WhatsApp, Instagram, or Twitter, or play video games, simply because they can't operate a computer. To put it simply, Neuralink gives them that opportunity. It also helps able-bodied people by making it possible for us to access our technological devices remotely.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Safe is the Technology
&lt;/h3&gt;

&lt;p&gt;From the beginning of time, man has misused technology, from guns and ships to planes and cars, but that hasn't stopped us from using it. Some people use guns to defend themselves while others use them to rob; some people use airplanes to travel while others use them to commit genocide; some people use smartphones to connect with friends and family while others use them to plan criminal activities. &lt;/p&gt;

&lt;p&gt;Now you might be thinking, &lt;em&gt;'but those are small-time crimes; we are talking about the brain now'&lt;/em&gt;. Then I ask you this: what is the difference between Neuralink today and the smartphone in your great-grandfather's day? Technology has always improved and people have always abused it, but it is people, not technology, that are evil, and it is our responsibility as humans to stand against those who would limit technology through their evil deeds. Don't we owe it to future generations to do this? After all, who are we to stand in the path of the future?&lt;/p&gt;

</description>
      <category>bmi</category>
      <category>ai</category>
    </item>
    <item>
      <title>How to Deploy a Jupyter Notebook to Docker</title>
      <dc:creator>EdemGold</dc:creator>
      <pubDate>Sat, 19 Feb 2022 09:55:26 +0000</pubDate>
      <link>https://dev.to/edemgold/how-to-deploy-a-jupyter-notebook-to-docker-4glb</link>
      <guid>https://dev.to/edemgold/how-to-deploy-a-jupyter-notebook-to-docker-4glb</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;There are only two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle.  -Albert Einstein&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this article, we will talk about what Docker is, how it works and how to deploy a Jupyter notebook to a Docker Container.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Docker?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1645225872659%2FE2IQ1FlYk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1645225872659%2FE2IQ1FlYk.png" alt="docker-about.png"&gt;&lt;/a&gt;&lt;br&gt;
According to the &lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker Website&lt;/a&gt;, Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. &lt;/p&gt;

&lt;p&gt;In other words, Docker is a platform that provides containers in which you can host and run your applications without worrying about things like platform dependence: a container is the piece of infrastructure in which your applications are held and run.&lt;/p&gt;
&lt;h2&gt;
  
  
  What makes up Docker (In a Nutshell)
&lt;/h2&gt;

&lt;p&gt;Here we will provide an overview of what makes up Docker; if you want a comprehensive look at how Docker works, check &lt;a href="https://devopscube.com/what-is-docker/" rel="noopener noreferrer"&gt;this article&lt;/a&gt; out.&lt;/p&gt;

&lt;p&gt;The Docker Architecture is divided into three(3) sections: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker Engine(dockerd)&lt;/li&gt;
&lt;li&gt;docker-containerd (containerd)&lt;/li&gt;
&lt;li&gt;docker-runc (runc)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1645224672163%2FfbpHgIS7N.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1645224672163%2FfbpHgIS7N.png" alt="docker-2.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Docker Engine(dockerd)
&lt;/h3&gt;

&lt;p&gt;Docker engine comprises the docker daemon, an API interface, and Docker CLI. Docker daemon (dockerd) runs continuously as dockerd system service. It is responsible for building the docker images.&lt;/p&gt;
&lt;h3&gt;
  
  
  Docker-containerd
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;containerd&lt;/em&gt; is another system daemon service that is responsible for downloading docker images and running them as containers. It exposes its API to receive instructions from the dockerd service.&lt;/p&gt;
&lt;h3&gt;
  
  
  Docker-runc
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;runc&lt;/em&gt; is the container runtime responsible for creating the namespaces and &lt;em&gt;cgroups&lt;/em&gt; required for a container. It then runs the container commands inside those namespaces. &lt;em&gt;runc&lt;/em&gt; runtime is implemented as per the OCI specification.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to Deploy a Colab Jupyter Notebook to a Docker Container
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1645231212919%2FQMD6LlFq9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1645231212919%2FQMD6LlFq9.png" alt="docker-0.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this part, we are going to build a simple classifier model using the &lt;a href="https://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html" rel="noopener noreferrer"&gt;Iris Dataset&lt;/a&gt;; after that we will export the code from Colab, and finally we will install Docker and deploy the script containing the model into a Docker container.&lt;/p&gt;
&lt;h3&gt;
  
  
  Building Model
&lt;/h3&gt;

&lt;p&gt;In this section, we will build the classifier model using sklearn's inbuilt &lt;a href="https://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html" rel="noopener noreferrer"&gt;Iris Dataset&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP 1:&lt;/strong&gt;&lt;br&gt;
Create a new notebook in &lt;a href="https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5" rel="noopener noreferrer"&gt;google colab&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1645231775011%2F4g_gKU3EK.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1645231775011%2F4g_gKU3EK.png" alt="colab-opening.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP 2&lt;/strong&gt;&lt;br&gt;
Import the dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;STEP 3&lt;/strong&gt;&lt;br&gt;
Here we are going to load the iris dataset, split the data into the training set and test set, and build our classification model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iris = load_iris()
X = iris.data
y = iris.target

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above we loaded the Iris dataset.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2 , random_state=4)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above we used the &lt;em&gt;train_test_split&lt;/em&gt; function in sklearn to split the iris dataset into a training set and test set.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;knn = KNeighborsClassifier(n_neighbors=10)
knn.fit(X,y)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above we instantiated the KNeighborsClassifier model and set the n_neighbors hyperparameter to ten (10) neighbors.&lt;/p&gt;
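&lt;p&gt;As a quick sanity check, here is a minimal, self-contained sketch (mirroring the notebook above) of using the trained classifier to label a new flower; the sample measurements are made up for illustration:&lt;/p&gt;

```python
# A minimal sketch (self-contained, mirroring the notebook above) of using
# the trained classifier to label a new flower; the measurements are made up.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
knn = KNeighborsClassifier(n_neighbors=10)
knn.fit(iris.data, iris.target)

# sepal length, sepal width, petal length, petal width (in cm)
sample = [[5.1, 3.5, 1.4, 0.2]]
prediction = iris.target_names[knn.predict(sample)][0]
print(prediction)  # 'setosa' -- small petals put it with the setosa cluster
```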

&lt;h3&gt;
  
  
  Installing and deploying to Docker
&lt;/h3&gt;

&lt;p&gt;This is the final chapter: here we are going to install the Docker desktop application and write the scripts that will deploy our model to a Docker container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP 1&lt;/strong&gt;&lt;br&gt;
Firstly, download the Python script containing your trained model from Colab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1645232679737%2FlONGTqP48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1645232679737%2FlONGTqP48.png" alt="colab-import-code.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP 2&lt;/strong&gt;&lt;br&gt;
Now we are going to install and set up docker.&lt;br&gt;
You can install Docker using this &lt;a href="https://docs.docker.com/get-started/#download-and-install-docker" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;br&gt;
&lt;strong&gt;STEP 3&lt;/strong&gt;&lt;br&gt;
Now create a directory called &lt;strong&gt;iris-classifier&lt;/strong&gt; where we are going to host our model and docker scripts. &lt;/p&gt;

&lt;p&gt;Move the Python file containing the iris classification model into the &lt;strong&gt;iris-classifier&lt;/strong&gt; folder you just created.&lt;/p&gt;

&lt;p&gt;In the same folder, create a text file called &lt;strong&gt;requirements.txt&lt;/strong&gt;; below are the contents it will contain.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sklearn==0.0
matplotlib==3.2.2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;STEP 4&lt;/strong&gt;&lt;br&gt;
Here we will create the &lt;strong&gt;Dockerfile&lt;/strong&gt;: go to your main directory and create a file called &lt;strong&gt;Dockerfile&lt;/strong&gt; without any extension. A Dockerfile is a script that is used to build a container image. Below are the items that will be contained in your Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.8

ADD requirements.txt /

RUN pip install -r /requirements.txt

ADD iris-classifier.py /

ENV PYTHONUNBUFFERED=1

CMD [ "python", "./iris-classifier.py" ]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above we told Docker how to build the image and what command to run each time the container starts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP 5&lt;/strong&gt;&lt;br&gt;
Here we are going to create our &lt;strong&gt;Docker Compose file&lt;/strong&gt;, docker-compose files are simply configuration files that make it easy to maintain different Docker containers.&lt;/p&gt;

&lt;p&gt;In your project directory, create a file called &lt;strong&gt;docker-compose.yml&lt;/strong&gt;, below are the contents to be contained in the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3"
services:
  iris-classifier-uplink:
    # if failure  or server restarts, container will restart
    restart: always 
    container_name: iris-classifier-uplink
    image: iris-classifier-uplink
    build: 
      # build classifier image from the Dockerfile in the current directory
      context: . 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now in your &lt;strong&gt;iris-classifier&lt;/strong&gt; directory you should have four(4) files. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;docker-compose.yml&lt;/li&gt;
&lt;li&gt;Dockerfile&lt;/li&gt;
&lt;li&gt;iris-classifier.py&lt;/li&gt;
&lt;li&gt;requirements.txt&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Running Docker Container
&lt;/h3&gt;

&lt;p&gt;This is the final step, here we will run our docker container using the commands stated below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose build

docker compose up -d

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the end, our Python model is now running in a docker container!&lt;/p&gt;

&lt;h2&gt;
  
  
  Useful Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker Website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/blog/" rel="noopener noreferrer"&gt;Docker Blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/" rel="noopener noreferrer"&gt;Docker Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devopscube.com/what-is-docker/" rel="noopener noreferrer"&gt;Deep Dive into how Docker works&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  EndNote
&lt;/h2&gt;

&lt;p&gt;Jupyter notebooks are really good places for building models, and you can even use them as back ends for applications; unfortunately, they don't run forever. &lt;/p&gt;

&lt;p&gt;Docker helps fix that: with &lt;em&gt;restart: always&lt;/em&gt; in the compose file, the container re-runs your script whenever it fails, and that alone makes Docker a tool worth knowing.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
      <category>docker</category>
      <category>jupyter</category>
    </item>
    <item>
      <title>How Blockchain Works</title>
      <dc:creator>EdemGold</dc:creator>
      <pubDate>Mon, 07 Feb 2022 03:16:46 +0000</pubDate>
      <link>https://dev.to/edemgold/how-blockchain-works-5g80</link>
      <guid>https://dev.to/edemgold/how-blockchain-works-5g80</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“If the only tool you have is a hammer, you tend to see every problem as a nail.” -Abraham Maslow&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I have recently become interested in blockchain technology, so I decided to write an article that attempts to provide an accurate overview of blockchain as a technology. Whether I achieved that, I'll leave to you to decide.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is A Blockchain?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3qOyZSJa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1644198891851/8UbTD9A38.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3qOyZSJa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1644198891851/8UbTD9A38.jpeg" alt="pic-1.jpg" width="880" height="528"&gt;&lt;/a&gt;&lt;br&gt;
 A blockchain is a growing list of records called blocks that are linked together using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data, and these help in the validation of transfers on a blockchain.&lt;/p&gt;

&lt;p&gt;Blockchains are essentially databases, but databases that cannot be modified once data has been recorded on them: the data in any given block can't be altered ex post facto without altering all subsequent blocks.&lt;/p&gt;

&lt;p&gt;In other words, to alter data contained in a block, you have to alter the data contained in past blocks. A blockchain is a shared immutable ledger that provides an immediate, safe and transparent exchange of encrypted data simultaneously to multiple parties as they initiate and complete transactions.&lt;/p&gt;
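&lt;p&gt;The hash-chaining described above is easy to demonstrate in code. Below is a deliberately tiny, illustrative blockchain sketch (real chains add consensus, signatures, Merkle trees, and much more on top of this):&lt;/p&gt;

```python
import hashlib
import json

def make_block(data, previous_hash):
    """A block holds transaction data, a timestamp, and the previous block's hash."""
    block = {
        "data": data,
        "timestamp": 0,  # fixed here so the example stays deterministic
        "previous_hash": previous_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def is_valid(chain):
    """Every block must point at the hash its predecessor actually has."""
    return all(
        chain[i]["previous_hash"] == chain[i - 1]["hash"]
        for i in range(1, len(chain))
    )

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))
print(is_valid(chain))  # True

# Tamper with an early block: its hash changes, so the next block's
# previous_hash pointer no longer matches and the chain fails validation.
chain[1]["data"] = "Alice pays Bob 5000"
chain[1]["hash"] = hashlib.sha256(
    json.dumps(
        {k: chain[1][k] for k in ("data", "timestamp", "previous_hash")},
        sort_keys=True,
    ).encode()
).hexdigest()
print(is_valid(chain))  # False
```

&lt;p&gt;Changing the data in an early block changes its hash, so every later block's previous-hash pointer stops matching; this is exactly what makes tampering detectable.&lt;/p&gt;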

&lt;p&gt;Now let us understand the core concepts that make up a Blockchain.&lt;/p&gt;

&lt;h3&gt;
  
  
  Peer-To-Peer  Computing (P2P)
&lt;/h3&gt;

&lt;p&gt;Peer-to-Peer (P2P) computing/networking is a distributed application architecture that partitions tasks or workloads between peers. Peers make a portion of their resources, such as processing power, storage, or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts. &lt;/p&gt;

&lt;p&gt;In simple words, Peer-to-Peer Computing is simply a network where the network isn't hosted on a central server but rather hosted jointly by members of the network. &lt;/p&gt;

&lt;h3&gt;
  
  
  Distributed Ledgers
&lt;/h3&gt;

&lt;p&gt;A distributed ledger is a &lt;a href="https://en.wikipedia.org/wiki/Wikipedia:Consensus"&gt;consensus&lt;/a&gt; of replicated, shared, and synchronized data.&lt;/p&gt;

&lt;p&gt;In simple words, a distributed ledger is a database but unlike centralized databases, there is no central administrator /control.&lt;/p&gt;

&lt;h3&gt;
  
  
  Communication Protocol
&lt;/h3&gt;

&lt;p&gt;A Communication Protocol is a system of rules that allows two or more entities of a communication system to transmit information.&lt;/p&gt;

&lt;h3&gt;
  
  
  Secure Design
&lt;/h3&gt;

&lt;p&gt;This is a software engineering concept where products/systems have been designed to be foundationally secure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Byzantine Fault Tolerance
&lt;/h3&gt;

&lt;p&gt;To understand the term &lt;strong&gt;Byzantine Fault Tolerance&lt;/strong&gt; you must first understand the &lt;a href="https://en.wikipedia.org/wiki/Byzantine_fault"&gt;Byzantine Fault&lt;/a&gt; (commonly known as the Byzantine Generals Problem).&lt;/p&gt;

&lt;p&gt;The Byzantine Fault is a condition specific to distributed systems where components may fail and there is imperfect information on whether a component has failed or why it has failed.&lt;/p&gt;

&lt;p&gt;Now, the Byzantine Fault Tolerance is the ability of a system to efficiently withstand the conditions stated above.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does all this work in a Blockchain?
&lt;/h2&gt;

&lt;p&gt;Blockchains are typically managed by a &lt;strong&gt;peer-to-peer&lt;/strong&gt; network for use as a publicly &lt;strong&gt;distributed ledger&lt;/strong&gt;. Blockchains may be considered &lt;strong&gt;secure by design&lt;/strong&gt; and exemplify a distributed computing system with high &lt;em&gt;Byzantine fault tolerance&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why are blockchains Popular?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wvwLDFKS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1644200465574/7oJHRO5VW.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wvwLDFKS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1644200465574/7oJHRO5VW.jpeg" alt="cover.jpg" width="612" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  It provides Trust
&lt;/h3&gt;

&lt;p&gt;With blockchain, as a member of a members-only network, you can rest assured that you are receiving accurate and timely data and that your confidential blockchain records will be shared only with network members to whom you have specifically granted access.&lt;/p&gt;

&lt;h3&gt;
  
  
  It provides Security
&lt;/h3&gt;

&lt;p&gt;Consensus on data accuracy is required from all network members, and all validated transactions are immutable because they are recorded permanently. No one, not even a system administrator, can delete a transaction.&lt;/p&gt;

&lt;h3&gt;
  
  
  It promotes efficiency in a Network
&lt;/h3&gt;

&lt;p&gt;With a distributed ledger that is shared among members of a network, time-wasting record reconciliations are eliminated. And to speed transactions, a set of rules — called a smart contract — can be stored on the blockchain and executed automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Blockchains
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Public Blockchain
&lt;/h3&gt;

&lt;p&gt;A public blockchain is one that anyone can join and participate in, such as Bitcoin. Drawbacks might include substantial computational power required, little or no privacy for transactions, and weak security. These are important considerations for enterprise use cases of blockchain.&lt;/p&gt;

&lt;h3&gt;
  
  
  Private Blockchain
&lt;/h3&gt;

&lt;p&gt;A private blockchain network, similar to a public blockchain network, is a decentralized peer-to-peer network. However, one organization governs the network, controlling who is allowed to participate, executing a consensus protocol, and maintaining the shared ledger. Depending on the use case, this can significantly boost trust and confidence between participants. A private blockchain can be run behind a corporate firewall and even be hosted on-premises.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hybrid Blockchains
&lt;/h3&gt;

&lt;p&gt;A hybrid blockchain has a combination of centralized and decentralized features. The exact workings of the chain can vary based on which portions of centralization and decentralization are used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; Listed above are just the primary types of blockchains, usually organizations adopt variations of the above to suit their specific needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Uses of Blockchains
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cryptocurrencies
&lt;/h3&gt;

&lt;p&gt;This is the most popularly known use case of blockchains. Cryptocurrencies such as &lt;a href="https://ethereum.org/"&gt;Ethereum&lt;/a&gt;, &lt;a href="https://bitcoin.org/en/"&gt;Bitcoin&lt;/a&gt;, &lt;a href="https://www.investopedia.com/terms/b/binance-coin-bnb.asp"&gt;BNB&lt;/a&gt;, &lt;a href="https://www.investopedia.com/terms/d/dogecoin.asp"&gt;DOGE&lt;/a&gt;, and a host of others.&lt;/p&gt;

&lt;p&gt;Cryptocurrencies are built on top of blockchains, and it is the blockchain that gives cryptocurrencies their most attractive properties, such as &lt;strong&gt;decentralization&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Smart contracts
&lt;/h3&gt;

&lt;p&gt;Smart contracts are proposed contracts that can be partially or fully executed or enforced without human interaction. One of the main objectives of a smart contract is automated escrow. A key feature of smart contracts is that they do not need a trusted third party (such as a trustee) to act as an intermediary between contracting entities; the blockchain network executes the contract on its own. This may reduce friction between entities when transferring value and could subsequently open the door to a higher level of transaction automation. &lt;/p&gt;
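&lt;p&gt;To make the escrow idea concrete, here is a toy "contract" in plain Python. This is only an illustration of the release-without-a-trustee logic, not an actual on-chain smart contract (those run on networks like Ethereum, typically written in Solidity):&lt;/p&gt;

```python
# A toy escrow "contract" in plain Python, only to illustrate the
# release-without-a-trustee idea; real smart contracts run on-chain.
class Escrow:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.balances = {buyer: amount, seller: 0}
        self.locked = 0

    def deposit(self):
        """The buyer locks the funds; neither party can touch them now."""
        self.balances[self.buyer] -= self.amount
        self.locked = self.amount

    def confirm_delivery(self):
        """The release condition: funds move to the seller automatically."""
        self.balances[self.seller] += self.locked
        self.locked = 0

deal = Escrow("alice", "bob", 10)
deal.deposit()
deal.confirm_delivery()
print(deal.balances)  # {'alice': 0, 'bob': 10}
```

&lt;p&gt;The point of the sketch is that once the funds are locked, the release rule executes mechanically; no human trustee sits in the middle of the transfer.&lt;/p&gt;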

&lt;h3&gt;
  
  
  Games
&lt;/h3&gt;

&lt;p&gt;Blockchain technology, such as cryptocurrencies and non-fungible tokens (NFTs), has been used in video games for monetization. Many live-service games offer in-game customization options, such as character skins or other in-game items, which the players can earn and trade with other players using in-game currency. Some games also allow for trading of virtual items using real-world currency, but this may be illegal in some countries where video games are seen as akin to gambling and have led to gray market issues such as skin gambling, and thus publishers typically have shied away from allowing players to earn real-world funds from games. Blockchain games typically allow players to trade these in-game items for cryptocurrency, which can then be exchanged for money.&lt;/p&gt;

&lt;h3&gt;
  
  
  Financial Services
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://ethereum.org/"&gt;Ethereum blockchain&lt;/a&gt; enables more open, inclusive, and secure business networks, shared operating models, more efficient processes, reduced costs, and new products and services in banking and finance. It enables digital securities to be issued within shorter periods, at lower unit costs, with greater levels of customization. Digital financial instruments may thus be tailored to investor demands, expanding the market for investors, decreasing costs for issuers, and reducing counterparty risk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Blockchain as a technology is still very young and it has a lot of flaws, but with the rise of Web3 it is bound to become more efficient. I strongly believe in the potential of blockchains to impact the world as we know it. &lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>web3</category>
      <category>crypto</category>
    </item>
    <item>
      <title>How Images are turned Into Arrays</title>
      <dc:creator>EdemGold</dc:creator>
      <pubDate>Tue, 01 Feb 2022 00:59:12 +0000</pubDate>
      <link>https://dev.to/edemgold/how-images-are-turned-into-arrays-3l22</link>
      <guid>https://dev.to/edemgold/how-images-are-turned-into-arrays-3l22</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“The world of the future will be an even more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.”&lt;br&gt;
― Norbert Wiener&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We've all probably used our phone's Face Recognition feature to unlock it. In this article, we are going to understand how our phones turn our images into arrays which can then be processed by computers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zzrLCVOv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643673528482/PD91z5cpT.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zzrLCVOv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643673528482/PD91z5cpT.jpeg" alt="Comp-Vision.jpg" width="348" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Computer Vision?
&lt;/h1&gt;

&lt;p&gt;To get a clearer picture of how Images are turned into arrays in Machine Learning let us understand what Computer Vision is.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://en.wikipedia.org/wiki/Computer_vision"&gt;Wikipedia&lt;/a&gt;, Computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. &lt;/p&gt;

&lt;p&gt;In simple words, It is a field of AI which deals with how computers see.&lt;/p&gt;

&lt;h1&gt;
  
  
  How Images are turned Into Arrays
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MHSDQLFx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643676051031/ho6u5i_DC.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MHSDQLFx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643676051031/ho6u5i_DC.png" alt="Cover.png" width="880" height="388"&gt;&lt;/a&gt;&lt;br&gt;
To begin with, a camera records the amount of light reflected into it from the surfaces of objects in a 3D scene/environment.&lt;/p&gt;

&lt;p&gt;This data is then transmitted as electrical signals which vary proportionately with the intensity of the reflected light. A converter then changes the analog electrical signal into digital information for the computer by sampling the signal at regular intervals and translating each sample into a number representing a position on a range of brightness/intensity on a &lt;a href="https://en.wikipedia.org/wiki/Grayscale"&gt;GrayScale&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The numbers then form a two-dimensional grid called a gray-level array; each value in the array constitutes a pixel (picture element) of the digitized image. Computer vision systems commonly use a grayscale with values ranging from &lt;strong&gt;zero to 255&lt;/strong&gt;: zero represents the darkest areas of the image while 255 represents the lightest. &lt;/p&gt;

&lt;p&gt;Color images make use of three separate measurements, one each for the amount of &lt;em&gt;red, green, and blue&lt;/em&gt; light reflected from the image/scene. The measurements are then translated into three separate arrays of brightness values, each varying from zero to 255. This is why color images take more time to process than black-and-white images.&lt;/p&gt;
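&lt;p&gt;A tiny sketch makes the array picture concrete. The pixel values below are made up for illustration, and the channel weights used to collapse RGB back down to a single gray value are the standard ITU-R BT.601 luma weights:&lt;/p&gt;

```python
# A tiny 2x2 grayscale "image" (0 = darkest, 255 = lightest) and an
# RGB counterpart; all pixel values here are made up for illustration.
gray = [
    [0, 128],
    [200, 255],
]

# A color image carries three brightness values per pixel: (R, G, B).
rgb = [
    [(0, 0, 0), (128, 128, 128)],
    [(255, 0, 0), (255, 255, 255)],
]

def to_gray(pixel):
    """Collapse an (R, G, B) triple to one gray value using the
    standard ITU-R BT.601 luma weights."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

flattened = [[to_gray(p) for p in row] for row in rgb]
print(flattened)  # [[0, 128], [76, 255]]
```

&lt;p&gt;Notice that pure red (255, 0, 0) collapses to a fairly dark gray (76): the green channel dominates perceived brightness, which is exactly what the weighting encodes.&lt;/p&gt;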

&lt;h1&gt;
  
  
  How Images are turned Into Arrays(Broken Down Version)
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MHSDQLFx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643676051031/ho6u5i_DC.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MHSDQLFx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1643676051031/ho6u5i_DC.png" alt="Cover.png" width="880" height="388"&gt;&lt;/a&gt;&lt;br&gt;
When a camera records a live scene for a computer vision neural network, it records the amount of light reflected from the surfaces of objects in a 3D scene/environment.&lt;/p&gt;

&lt;p&gt;This reflection data is then carried as electrical signals that increase or decrease with the intensity of the light reflected from the object.&lt;/p&gt;

&lt;p&gt;A converter then changes the electrical signals into digital information for the computer by sampling the signal at regular intervals across the scene. Each sample is translated into a number representing an intensity level (from dark to bright), graded according to something called the grayscale.&lt;/p&gt;

&lt;p&gt;The grayscale is usually made up of numbers from &lt;em&gt;zero to 255&lt;/em&gt;; it grades the intensity of the light reflection obtained from the object, with zero as the darkest point and 255 as the lightest.&lt;/p&gt;

&lt;p&gt;These numbers then form a two-dimensional grid (an array), with each number in the array representing a pixel (picture element) of the image.&lt;/p&gt;

&lt;p&gt;In simple words, the two-dimensional array represents the image, and each number in it represents a pixel.&lt;/p&gt;

&lt;p&gt;Colored pictures require three separate arrays of brightness values, containing the digital information for the &lt;strong&gt;Red, Green, and Blue&lt;/strong&gt; wavelengths reflected from the object. This is why color pictures take considerably more time to process than black-and-white images.&lt;/p&gt;
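&lt;p&gt;The three-array representation can be sketched as follows. The channel values are invented; the point is only that a color image of the same dimensions holds three times as many numbers as a grayscale one.&lt;/p&gt;

```python
# Sketch of a 2x2 color image stored as three separate brightness arrays,
# one per channel, as described above. The pixel values are invented.
red   = [[255,   0], [  0, 128]]
green = [[  0, 255], [  0, 128]]
blue  = [[  0,   0], [255, 128]]

# Three arrays means three times the values of a grayscale image of the
# same size, which is why color processing costs more.
gray_count = 2 * 2
color_count = sum(len(ch) * len(ch[0]) for ch in (red, green, blue))
print(color_count, gray_count)  # 12 4
```

&lt;p&gt;Reading one pixel of the color image means reading the same position in all three arrays, e.g. &lt;code&gt;(red[0][0], green[0][0], blue[0][0])&lt;/code&gt; for the top-left pixel.&lt;/p&gt;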

</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>computervision</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
