<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ramaswamy-Arjun</title>
    <description>The latest articles on DEV Community by Ramaswamy-Arjun (@ramaswamyarjun).</description>
    <link>https://dev.to/ramaswamyarjun</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1149969%2Fc3157f70-d395-49b1-bd4d-003b7bb3e719.jpeg</url>
      <title>DEV Community: Ramaswamy-Arjun</title>
      <link>https://dev.to/ramaswamyarjun</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ramaswamyarjun"/>
    <language>en</language>
    <item>
      <title>Beyond the Scoreboard: Decoding the Data Evolution in Sport</title>
      <dc:creator>Ramaswamy-Arjun</dc:creator>
      <pubDate>Tue, 28 Nov 2023 16:16:57 +0000</pubDate>
      <link>https://dev.to/ramaswamyarjun/beyond-the-scoreboard-decoding-the-data-evolution-in-sport-4cpg</link>
      <guid>https://dev.to/ramaswamyarjun/beyond-the-scoreboard-decoding-the-data-evolution-in-sport-4cpg</guid>
      <description>&lt;p&gt;I am Arjun Ramaswamy and I am a huge sports enthusiast, so much so that most parts of my vacation life revolve around sports commentary, post-match analysis, player milestones and to a major extent bowing down to the excellence of few individuals on the pitch. I try to utilize as many modes of consuming numbers from the world of sports. I love hearing expert opinions and tactical analysis which believe me is growing out to be an unhealthy obsession. But I love it!&lt;br&gt;
I am the person who social media handles target when they post what has popularly been termed an “ESPN Stat”. Those statistics really don’t change the game. I well and truly believe that even the players don’t think of the game with keeping the ESPN stat in mind. One of my very favourite ESPN stats goes something like this….&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcl8vmdd8g0frnly0b946.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcl8vmdd8g0frnly0b946.png" alt="Cherry Picked Statline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Believe me, in situations like these the comments section becomes the funniest place to be. But let’s not stray too far off topic: it has been a very long journey to reach the point where data like that presented above finds its day in the limelight. &lt;br&gt;
Stats provide an objective overview of any sport, and they have increasingly been adopted by almost everyone in this fraternity for the undeniable advantages that come with them. It is now commonplace to find a data analyst in a modern-day team. Everyone wants a piece of the sweet data pie just to go one step beyond the competition.&lt;br&gt;
Since I am an enthusiast who is very curious about the state of sport and how it will grow, I want to take you on a journey through what brought us here, how we can expect this space to change, and what investing in data analysis really achieves. &lt;/p&gt;

&lt;h2&gt;
  
  
  Moneyball
&lt;/h2&gt;

&lt;p&gt;If you have not been living under a rock this whole time, you ought to have come across a book called “Moneyball: The Art of Winning an Unfair Game”. Written by Michael Lewis in 2003, it presents an overview of the Oakland Athletics baseball team and their general manager Billy Beane. &lt;br&gt;
At its heart, the book looks at how Beane and his assistant Paul DePodesta used data to identify undervalued players on a small budget. This, coupled with the objective nature of data, allowed the A’s to severely reduce the risks associated with acquiring undervalued talent, which pushed them down the road to success.&lt;br&gt;
The budget was small, and this was the reason Beane took such a leftfield approach for the time, assembling his team in the most objective way possible. But little did he know that his method would become the norm in the years to come.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjq7qde10wzdj4kyojdyw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjq7qde10wzdj4kyojdyw.jpeg" alt="2003 Oakland A's"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Moneyball in its truest sense is not about baseball; rather, it is an excellent case study in the advantages of data-driven decision making, challenging conventional wisdom and achieving success in the face of constraints. &lt;br&gt;
In my eyes, keeping aside the social impact that both the book and the movie have had with their wide-ranging success, the most important change Moneyball has brought to the sports industry is the widespread use of sabermetrics. Though it is a specifically baseball term, it has popularly been adopted in discussions across sports.&lt;br&gt;
At the heart of all of this is the main reason for Moneyball’s success: player performance. Talking about it conveniently leads us to our next point of focus.&lt;/p&gt;

&lt;h2&gt;
  
  
  Objectivity of Performance Analysis
&lt;/h2&gt;

&lt;p&gt;Objectivity: freedom from bias.&lt;br&gt;
Data doesn’t lie; it might not always paint the entire picture, but it never lies. I enjoy sport for the beauty of a Virat Kohli cover drive, Kyrie Irving being an absolute menace breaking any and every ankle in front of him, Messi toying with defenders while the ball seemingly stays stuck to his feet. Some things can never be enjoyed in the form of data. They are not objective traits but rather the individual brilliance of some very gifted athletes. But they are not to be confused with performance and efficiency. Not everyone is built the same, but sport is a great leveller. For every Ronaldinho there is a Vincent Kompany, and if you are on the lookout to build a team which guarantees performance, oftentimes you might want to leave the spectacle behind and look for certain objective traits. &lt;br&gt;
Performance analysis is everything that the name suggests, and then some. Let’s take cricket as the sport in question and look at the parameters a coach might use to build a championship-winning roster. Averages play a big role in how we perceive a player and their abilities. They provide a good sense of an experienced player’s quality over a long period of time. In batting you want the average to be as high as possible, and in bowling it is better to have a sub-25 average. Famously, Sir Don Bradman retired with a Test batting average of 99.94, needing just four runs in his last ever outing as a professional cricketer to finish with a perfect hundred.&lt;br&gt;
Reputation has always been, and will always be, a factor in selecting players to build your perfect roster, and teams today are actively working on striking the right balance between objectivity and experience. Let us take an example:&lt;br&gt;
In the 70s, Dutch and Barcelona legend Johan Cruyff pioneered the idea of “Total Football”. In his vision for the beautiful game, players would not stick to playing only one position, which in time gave birth to players who were multi-faceted. It was common to see players good at multiple positions on the football pitch. In contrast, today we hardly see top teams employing players who specialise in more than one position. Roles are defined very differently, and many players are unidimensional. On face value that would seem like the sport is moving backwards, but this is a big change data-driven decision making has caused since it was adopted by the majority of sports teams. &lt;br&gt;
Managers in today’s game like to have players who are great at their positions; gone are the days of every foot behind the ball. Teams play a far more dynamic game, with specialist players sprinkled across the field. Team tactics take into consideration a lot of data about the opposition and player performance. &lt;br&gt;
Look at how we track performance itself: next time you watch your favourite football team practising, notice the players wearing athletic vests. While they are the butt of a lot of childish and light-hearted internet gags, they are the cornerstones of performance tracking. These skin-tight vests allow coaches to slip a GPS tracker onto a player’s body, and every metric is measured: distance covered, heart rate, endurance metrics like periods of high intensity, everything. &lt;br&gt;
We have also seen major improvements in player longevity. Training and recovery programmes are at an all-time best with the inclusion of data analysis. Players are rarely overused, and good teams rely less on an individual’s brilliance and more on holistic contribution. We are able to predict how implementing newer regimes changes the course of a player’s career. &lt;br&gt;
Searching for and nurturing talent is also something we are now able to do incredibly well thanks to data analysis.&lt;/p&gt;
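The averages discussed above are simple ratios; a minimal Python sketch, using Sir Don Bradman's actual Test career figures (6996 runs from 80 innings with 10 not outs):

```python
def batting_average(runs, innings, not_outs):
    # Runs scored per completed (i.e. dismissed) innings: higher is better
    return runs / (innings - not_outs)

def bowling_average(runs_conceded, wickets):
    # Runs conceded per wicket taken: lower (ideally sub-25) is better
    return runs_conceded / wickets

# Bradman's Test career: 6996 runs, 80 innings, 10 not outs
print(round(batting_average(6996, 80, 10), 2))  # 99.94
```

Only completed innings count as dismissals, which is why the famous 99.94 comes from dividing by 70 rather than 80.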

&lt;h2&gt;
  
  
  Scouting and Grassroots Development
&lt;/h2&gt;

&lt;p&gt;In my eyes, the aspect of the game that has changed the most in the past few years is scouting and grassroots development. In the past, raw talent could pull you towards some opportunities. A lot of players can pass the eye test but struggle when it comes to contribution. Newer players are coached differently; they are better prepared for the bigger stage because we can today objectively point out what it takes to be successful in sport.&lt;br&gt;
Grassroots-level programmes in many sports today assign positions and roles to their young trainees using body analysis: whether the trainee is tall, broad, lean, and so on. Coaches have found that certain body types perform better in certain positions than others. A very visible example is the centre in basketball. Centres have traditionally been the biggest people on the court: they have better reach to the rim, can easily outpower anyone, and don’t move as much as the others, which works to their advantage, since moving at that height and size is not an easy task.&lt;br&gt;&lt;br&gt;
This also actively affects established players. Think of yourself as a journeyman pro for a second. You want to reinvent yourself, because not everyone is gifted the same. Today the first step is getting to the whiteboard and studying data. Forget about what you are already good at and start searching for skills you can offer. Scouts look for specific traits to build rosters; they seldom look for complete players, because it is really tough to find someone who is perfect. Only once in a lifetime do we see a Michael Jordan, who could defend and score like a dream; most of the time we find players who are great at a few skills, like Kyrie Irving (you might have clocked by now that I really enjoy Kyrie): immense scoring and dribbling ability, very ordinary defending and fragile physicality (please stay healthy, dude!!!). In short, develop useful skills rather than becoming a complete package, because it gives you a better chance of being on a championship-level team. Talking of championship-level teams, let’s look at one that uses VR, of all things, as one aspect of its training.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tampa Bay Buccaneers and Their Unconventional VR Training
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;“We are excited to be coming in on the front end of this new wave of technology that is designed to supplement the on-field and classroom work that our quarterbacks are already doing,”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The above statement is from General Manager Jason Licht on adopting VR training, which made the Buccaneers the first NFL team to have their quarterbacks train on VR simulations of in-game situations. This method allows coaches to craft scenarios for every single opposition, and even if the technology isn’t perfect, it at least gives a better idea than pen-and-paper preparation alone.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Obviously, there is no real substitute for being on the field when it comes to getting our players ready for game action. However, this virtual reality technology allows us to enhance the learning experience for our quarterbacks without requiring them to put in additional time on the practice fields.” &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;added GM Jason Licht.&lt;br&gt;
The Buccaneers are not alone; baseball is another sport where VR has been incorporated into game prep to some extent. The Yokohama DeNA Baystars have introduced the “W.I.N. Series”, a virtual reality interactive player development and simulation solution from EON Sports. The Baystars are the first baseball team in Japan to use this technology, joining a growing list of professional teams that have done so. The team has been training with the system since the 2017 campaign.&lt;br&gt;
These are just a few examples of how data has transformed training in modern professional sport.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why does it matter?
&lt;/h2&gt;

&lt;p&gt;I started this blabbering fest by pointing out an ESPN stat, and to reach that level of specificity one needs to take into account an immense amount of data. Honestly, players rarely care about such cherry-picked stats. They exist to serve someone else: the fan. The spectator, the stakeholder of sport, who invests their time, often their money and a lot of heart into the game. In the era of social media, the space for debating the latest and the greatest produces some incredible analysis of players, facilitated by statistics and data. I am one such fan. I love to see my favourite players’ stats; they help me win online debates, produce compelling evidence to prove my point and end my day with a sense of false superiority from proving another netizen wrong. &lt;br&gt;
While that last statement seems futile, data also shapes how we view sport now. With working hours increasing, a lot of sports enthusiasts rarely get their fix of watching the entire game. Ever so often, I as a college student just look at the post-match stats and analyse them to death. Even while watching, my subconscious forces me to pick up my phone and open, say, CricBuzz to check how many extras India have given away in a certain World Cup final, or how many turnovers Eric Dier has committed today. All this just to fulfil my insatiable need for objective analysis. I am not alone in this at all; more and more people have started giving stats a very important place in their sport-consuming experience.&lt;br&gt;
Major debates happen about new scoreboards and how they can display information in the most efficient and least distracting manner while remaining pleasant to consume, given all the data they throw at us. Take F1, for example: it is probably the first sport to give data analysis its fair due. It is only fitting that the sport that needs it the most has the most complex yet amazingly pleasant leaderboards and player stats.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0ivv68dhydxhwnbeqwa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0ivv68dhydxhwnbeqwa.png" alt="Modern F1 ScoreBoard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Data is changing everything we associate with sport. Well, that statement is incomplete, because data has changed the very fabric of our society as we know it. Everything we do in today’s world involves some form of data being recorded on a server somewhere. But in sport it is visible to the naked eye. Sport is probably one of the only places where we as humans willingly consent to having our data recorded.&lt;br&gt;
Sport is changing; becoming a professional at it has changed; the roles in a team have multiplied, while the stakeholders get their fair share of the pie with incredible viewing experiences. But one should never forget that the growing amount of data analytics and data-driven decision making doesn’t change the fact that in sport there forever remains mystery and intrigue about the events to follow. At the end of the day the players are human, and experiencing superhuman feats from your favourite player is what makes you fall in love with the game.&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>datadrivendecisonmaking</category>
      <category>learning</category>
      <category>sport</category>
    </item>
    <item>
      <title>Intricacies and understanding of the economy using data science</title>
      <dc:creator>Ramaswamy-Arjun</dc:creator>
      <pubDate>Sat, 02 Sep 2023 16:27:11 +0000</pubDate>
      <link>https://dev.to/trcvitc/intricacies-and-understanding-of-the-economy-using-data-science-198m</link>
      <guid>https://dev.to/trcvitc/intricacies-and-understanding-of-the-economy-using-data-science-198m</guid>
      <description>&lt;p&gt;Big data is a new age concept that has emerged as a result of the massive expansion in information collected over the past two decades due to the rapid advancements in information and communications technologies. According to estimates, sensors, mobile devices, online transactions, and social networks generate almost three billion bytes of data per day, with 90% of the world's data having been produced in the last three years alone.&lt;/p&gt;

&lt;p&gt;As a result of the difficulties associated with storing, organizing, and comprehending such a vast amount of data, new technologies in the fields of statistics, machine learning, and data mining have been developed. These technologies also interact with fields of engineering and artificial intelligence (AI), among others.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hQXbmuY1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zrq4wb3yp6063a8nzrgs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hQXbmuY1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zrq4wb3yp6063a8nzrgs.jpg" alt="Image description" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This massive endeavor resulted in the development of the new multidisciplinary subject known as "Data Science," whose concepts and methods aim to automatically extract potentially usable knowledge and information from the data.&lt;/p&gt;

&lt;p&gt;Now let us look specifically at its applications in economics. It is crucial for governments, international organizations, and central banks to keep an eye on the economy's present and future conditions. To create effective policies that promote economic growth and protect societal well-being, policymakers need readily accessible macroeconomic information.&lt;/p&gt;

&lt;p&gt;Key economic statistics, on which they base their decisions, are produced infrequently, released with long delays (the European Union's Gross Domestic Product (GDP), for instance, arrives only after about 45 days), and frequently undergo significant revisions. Economic nowcasting and forecasting are in fact exceedingly difficult undertakings, because with so little information economists can only roughly estimate current, future, and even very recent past economic conditions. &lt;/p&gt;

&lt;p&gt;In a globally interconnected world, shocks and changes originating in one economy move quickly to other economies, affecting productivity levels, job creation, and welfare in different geographic areas. In sum, policymakers face a twofold problem: timely evaluation of the economy, and prompt impact assessment of external shocks.&lt;/p&gt;

&lt;p&gt;In this blog post, let us explore a few of the ways we can apply these new-age concepts to something so essential to our society's functioning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Challenges
&lt;/h2&gt;

&lt;p&gt;The number of devices that provide information about human and economic activities has expanded significantly in recent years as a result of technological advancements (e.g., sensors, monitoring, IoT devices, social networks). These new data sources offer a vast, regular, and varied stream of data, allowing for precise and timely estimates of the economy's status. Such data are large and diverse, making them difficult to collect and analyze. However, if correctly utilized, these new data sources may offer more predictive potential than the conventional regressors employed in economic and financial analysis in the past.&lt;/p&gt;

&lt;p&gt;Since in this instance the sheer amount of data is large and varied, analysing it requires machines with great computing power. In recent years we have seen an unimaginable increase in computing power.&lt;/p&gt;

&lt;p&gt;For instance, cloud computing systems and Graphical Processing Units (GPUs) have recently grown more accessible and popular. It is possible to program GPUs' highly data-parallel design utilizing frameworks like CUDA and OpenCL.&lt;/p&gt;

&lt;p&gt;They are made up of several cores, each of which has several functional components. Each thread of execution is processed by one or more of these functional units, also referred to as thread processors. As they share a common control unit, all thread processors in a GPU core execute the same instructions.&lt;/p&gt;
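As a rough illustration of that execution model, NumPy's vectorised operations apply one instruction across a whole array at once, much as a GPU's thread processors execute in lock-step. This is a CPU-side sketch, not actual CUDA or OpenCL code:

```python
import time
import numpy as np

# One instruction applied to many data elements at once ("data parallel"),
# with NumPy's vectorised kernels standing in for GPU thread processors.
x = np.random.rand(1_000_000)

t0 = time.perf_counter()
scalar = [v * 2.0 + 1.0 for v in x]      # one element at a time
t_scalar = time.perf_counter() - t0

t0 = time.perf_counter()
parallel = x * 2.0 + 1.0                 # whole array in lock-step
t_parallel = time.perf_counter() - t0

print(np.allclose(scalar, parallel))     # identical results
print(t_parallel < t_scalar)             # the data-parallel form is faster
```

The same arithmetic runs orders of magnitude faster when expressed as one operation over the whole array, which is exactly the workload shape that GPUs are built for.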

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0ogmwyOK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ihxstb8g7exrddgr8n8v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0ogmwyOK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ihxstb8g7exrddgr8n8v.png" alt="Archietecture of a Graphical Processing Unit" width="800" height="839"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another key requirement for the successful utilization of new data sources for economic and financial analysis is accessibility. To protect sensitive information, it is frequently restricted in practice. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data stewardship&lt;/strong&gt;, a concept that includes properly gathering, annotating, and archiving information as well as providing "long-term care" for data that may be used in future applications and combined with new data, is frequently used to describe striking a balance between accessibility and protection.&lt;/p&gt;

&lt;p&gt;Individual-level credit performance data is an obvious example of sensitive information that might be highly helpful in economic and financial analysis but whose access is frequently limited for data protection reasons. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Financial institutions could gain from improved credit risk models that more accurately identify risky borrowers and reduce the potential losses associated with a default.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consumers could have easier access to credit thanks to the efficient allocation of resources to dependable borrowers, and governments and central banks could monitor the state of their economies in real time. Online data sets containing anonymised individual-level data abound.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Data Analytics Methods
&lt;/h2&gt;

&lt;p&gt;In order to manage and maintain massive data structures, such as raw logs of user actions, natural language from conversations, photos, videos, and sensor data, traditional nowcasting and forecasting economic models are not dynamically scalable. New tool sets are needed in order to handle this large volume of data in its naturally complex high-dimensional formats for economic analysis. In actuality, when data dimensions are large or expanding quickly, traditional methodologies do not scale effectively. &lt;/p&gt;

&lt;p&gt;Simple activities like data visualization, model fitting, and performance evaluation become challenging. In a big data context, traditional hypothesis testing that sought to determine the significance of a variable in a model (T-test) or to choose one model over several alternatives (F-test) must be utilized with care. &lt;/p&gt;
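To see why classical significance tests need care at big-data scale, here is a sketch of a two-sample Welch t statistic using only the standard library; the populations and the 0.01 mean difference are invented for illustration:

```python
import math
import random
from statistics import mean, variance

def welch_t(a, b):
    # Welch's two-sample t statistic: mean difference over its standard error
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

random.seed(0)
# Two synthetic populations whose true means differ by a trivial 0.01
small_a = [random.gauss(0.00, 1) for _ in range(100)]
small_b = [random.gauss(0.01, 1) for _ in range(100)]
big_a = [random.gauss(0.00, 1) for _ in range(1_000_000)]
big_b = [random.gauss(0.01, 1) for _ in range(1_000_000)]

# At n=100 the difference is invisible; at n=1,000,000 the same trivial
# difference produces a "highly significant" |t| well above 1.96
print(abs(welch_t(small_a, small_b)), abs(welch_t(big_a, big_b)))
```

With a million observations, even a practically meaningless difference clears the usual significance thresholds, which is why hypothesis tests on big data must be interpreted with care.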

&lt;p&gt;Social scientists can use data science approaches in these situations, and in recent years, efforts to have those applications acknowledged in the economic modeling community have expanded tremendously. The development of interpretable models and the opening up of black-box machine learning solutions constitute a focal point. &lt;/p&gt;

&lt;p&gt;In fact, when data science algorithms are hard to interpret, they are of little use for policy-making, however scalable and performant they may be. To achieve the level of model performance, interpretability, and automation required by the stakeholders, good data science applied to economics and finance requires a balance across these dimensions, and often entails a combination of domain expertise and analysis tools. So now let us take a look at methods that help us achieve these feats.&lt;/p&gt;

&lt;h2&gt;
  
  
  Machine Learning and Deep Learning
&lt;/h2&gt;

&lt;p&gt;While Support Vector Machines, Decision Trees, Random Forests, and Gradient Boosting have been around for a while, they have shown great potential to tackle a variety of data mining problems (e.g., classification, regression) involving businesses, governments, and people. &lt;/p&gt;

&lt;p&gt;Deep learning is currently the technology that has had the most success with both researchers and practitioners. It is a general-purpose machine learning technology: a set of techniques based on learning data representations, capturing highly nonlinear correlations in low-level unstructured input data in order to construct high-level concepts.&lt;/p&gt;

&lt;p&gt;Deep learning approaches made a real breakthrough in the performance of several tasks in the various domains in which traditional machine learning methods were struggling, such as speech recognition, machine translation, and computer vision (object recognition). &lt;/p&gt;

&lt;p&gt;The advantage of deep learning algorithms is their capability to analyze very complex data, such as images, videos, text, and other unstructured data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eje2yoXk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/45pp6efhktnma98mun2l.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eje2yoXk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/45pp6efhktnma98mun2l.jpg" alt="Difference b/w AI,ML and DL" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Artificial neural networks (ANNs)&lt;/strong&gt; with deep structures, such as Deep Restricted Boltzmann Machines, Deep Belief Networks, and Deep Convolutional Neural Networks, are examples of deep hierarchical models. ANNs are computational tools that loosely apply the framework of how the brain works to build mathematical models. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using input data, neural networks estimate functions of arbitrary complexity. Supervised neural networks represent a mapping from an input vector to an output vector; unsupervised neural networks, by contrast, are used to categorize inputs without knowing in advance which classes are involved. &lt;/p&gt;

&lt;p&gt;Deep learning has already been used in finance, for example to predict and analyze the stock market. The Dilated Convolutional Neural Network, whose core architecture derives from DeepMind's WaveNet project, is another successful ANN method for financial time-series forecasting. Work on time series-to-image encoding for financial forecasting utilizes a group of convolutional neural networks trained on images of Gramian Angular Fields made from time series of the Standard &amp;amp; Poor's 500 Future index, with the goal of predicting the future direction of the US market.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reinforcement learning&lt;/strong&gt;, which is based on a paradigm of learning via trial and error, purely from rewards or penalties, has gained prominence in recent years alongside deep learning. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It was effectively used in ground-breaking systems like DeepMind's AlphaGo, which defeated the best human player at the game of Go. It can also be applied in economics, for example to trade financial futures or to dynamically optimize portfolios. &lt;/p&gt;
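The trial-and-error paradigm can be sketched with tabular Q-learning on a toy five-state problem; this is entirely hypothetical, and real trading or portfolio applications use far richer state and reward definitions:

```python
import random

random.seed(1)
# Toy problem: states 0..4; action 1 steps towards state 4 (reward 1
# on arrival), action 0 stays put. The agent learns purely from rewards.
N, ACTIONS = 5, (0, 1)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):                 # 500 episodes of trial and error
    s = 0
    while s < N - 1:
        # Epsilon-greedy: mostly exploit current knowledge, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = min(s + a, N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # Bellman update: move Q towards reward + discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, stepping forward beats standing still in every state
print(all(Q[(s, 1)] > Q[(s, 0)] for s in range(N - 1)))  # True
```

No model of the environment is given; the preference for the rewarding action emerges purely from repeated interaction, which is the defining feature of reinforcement learning.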

&lt;p&gt;These cutting-edge machine learning methods can be used to understand and relate data from several economic sources and find undiscovered correlations that would go undetected if only one source of data were taken into account. For instance, merging information from text and visual sources, such as satellite imagery and social media, can enhance economic forecasts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Semantic Web Technologies
&lt;/h2&gt;

&lt;p&gt;Textual data is considered to be part of the so-called unstructured data from the perspectives of data content processing and mining. &lt;/p&gt;

&lt;p&gt;Learning from this kind of complicated data can provide descriptive patterns in the data that are more succinct, semantically rich (Semantic: refers to the study of the meaning of data and how it can be processed, analyzed, or understood by machines or algorithms), and better reflect their underlying characteristics. Natural Language Processing (NLP) and information retrieval technologies from the Semantic Web have been developed to make it simple to retrieve a plethora of textual data. &lt;/p&gt;

&lt;p&gt;The Semantic Web, sometimes known as "Web 3.0," is a system that enables machines to "understand" and respond to complicated human requests based on their meaning. Such an "understanding" necessitates semantically structured knowledge sources. &lt;/p&gt;

&lt;p&gt;The Semantic Web provides a formal description of concepts, terms, and relationships within a given knowledge domain, using Uniform Resource Identifiers (URIs), the Resource Description Framework (RDF), and the Web Ontology Language (OWL), whose standards are maintained by the W3C. Building on these, &lt;strong&gt;Linked Open Data (LOD)&lt;/strong&gt; has gained significant momentum over the past years as a best practice for promoting the sharing and publication of structured data on the Semantic Web.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TCcDkrqj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yghgekpmphl3ocs7b414.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TCcDkrqj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yghgekpmphl3ocs7b414.png" alt="Evolution of Linked Open data(LOD)" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LOD makes it possible to use data from several areas for publications, statistics, analysis, and mapping. By connecting this knowledge, associations and interrelations can be deduced, and fresh conclusions can be reached. &lt;/p&gt;

&lt;p&gt;Since more and more data sources are being published as semantic data, RDF/OWL enables the production of triples about anything on the Semantic Web. &lt;/p&gt;
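To make the idea of triples concrete, here is a toy sketch using plain Python tuples; the URIs and prefixes are invented for illustration, and real deployments would use a triple store queried with SPARQL:

```python
# RDF-style (subject, predicate, object) triples as plain tuples.
# The "eu:", "eco:", "org:" prefixes are made up for this example.
triples = {
    ("eu:GDP", "rdf:type", "eco:Indicator"),
    ("eu:GDP", "eco:publishedBy", "org:Eurostat"),
    ("org:Eurostat", "rdf:type", "org:StatisticsOffice"),
}

def query(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

print(query(s="eu:GDP"))  # everything asserted about eu:GDP
```

Because every statement has the same shape, triples from different sources can be merged into one graph and queried uniformly, which is what lets LOD deduce associations across data sets.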

&lt;p&gt;The decentralized data space of all the triples is expanding at an astounding rate. However, the Semantic Web's growing complexity is not solely a function of its size. The Semantic Web has become a complicated, large system due to its distributed and dynamic nature, coherence problems across data sources, and reasoning-based interaction across the data sources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This blog post has covered the application of data science to economic and financial modeling. We discussed the main big data management infrastructures and the data analytics methods for prediction, interpretation, mining, and knowledge discovery, along with challenges involving economic data handling, volume, and protection, and outlined a few typical big data issues in economic modeling together with the pertinent data science techniques.&lt;/p&gt;

&lt;p&gt;The development of data science methodologies that enable closer collaboration between humans and machines in order to produce better economic and financial models has an obvious need and great potential. In order to improve models and forecasting quality, these technologies can handle, analyze, and exploit the collection of extremely varied, interconnected, and complex data that already exists in the economic universe with a guarantee on the veracity of information, a focus on producing actionable advice, and an improvement in the interactivity of data processing and analytics.&lt;/p&gt;

&lt;p&gt;If you came this far, thanks for reading the whole blog through; hopefully you have spent this time learning newer concepts. I would love to read further and post about these topics in detail if you would like to read my interpretations of them. Thank you again, and this is me, Arjun Ramaswamy, signing out.&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>machinelearning</category>
      <category>learning</category>
      <category>bigdata</category>
    </item>
  </channel>
</rss>
