<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: rusalka013</title>
    <description>The latest articles on DEV Community by rusalka013 (@rusalka013).</description>
    <link>https://dev.to/rusalka013</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F670348%2Ffe96a645-d7cd-4043-8574-8b5d5a494af6.jpg</url>
      <title>DEV Community: rusalka013</title>
      <link>https://dev.to/rusalka013</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rusalka013"/>
    <language>en</language>
    <item>
      <title>Flow Control in Python</title>
      <dc:creator>rusalka013</dc:creator>
      <pubDate>Mon, 11 Jul 2022 23:12:47 +0000</pubDate>
      <link>https://dev.to/rusalka013/flow-control-in-python-545f</link>
      <guid>https://dev.to/rusalka013/flow-control-in-python-545f</guid>
      <description>&lt;p&gt;Python, like most other languages, supports the imperative programming style, in which statements are executed in the order defined by the coder. Examples of imperative programming constructs include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Atomic statements &lt;/li&gt;
&lt;li&gt;Control structures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Atomic statements&lt;/strong&gt; are simple single-line statements with no deeper structure. They typically perform input and output. A good way to conceptually grasp atomic statements is to think of them as communication between a user and the computer. The built-in function &lt;code&gt;print()&lt;/code&gt; serves as a great example of outputting variable values. &lt;code&gt;print()&lt;/code&gt; is also used in debugging, when a code block is broken down into smaller pieces and each intermediate output is checked with the print function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;In:  name = 'Harry'
     print(name)
Out: Harry
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;input()&lt;/code&gt; function asks for user input. It always returns a string.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;In:  input('What is your favorite movie?)
Out: What is your favorite movie? Harry Potter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To convert the string returned by &lt;code&gt;input()&lt;/code&gt; to an integer, use the &lt;code&gt;int()&lt;/code&gt; function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;In: int(input('How old are you?'))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
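For example (a quick sketch with a made-up value), `int()` turns any numeric string into an integer that can be used in arithmetic:

```python
# int() converts a numeric string to an integer (the value '29' is made up).
age = int('29')
print(age + 1)  # 30
```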



&lt;p&gt;&lt;strong&gt;Control structures&lt;/strong&gt; contain multiple statements and control the order in which steps are executed. The statements are organized in blocks, and the code usually contains multiple blocks. Each block is introduced by a colon and a four-character (a tab) indentation (see the example below). &lt;/p&gt;

&lt;p&gt;Proper indentation matters in Python. A missing colon or wrong indentation can lead to the program producing no output, raising an error, or printing more output than needed.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Types of control structures:&lt;/u&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Conditionals: if, elif, else&lt;/li&gt;
&lt;li&gt;Loops: for in, while&lt;/li&gt;
&lt;li&gt;Exception handling: try except&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;Forms of conditionals:&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one-way: if&lt;/li&gt;
&lt;li&gt;two-way: if/else&lt;/li&gt;
&lt;li&gt;multi-way: if/elif/else&lt;/li&gt;
&lt;li&gt;nested &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;if-else&lt;/code&gt; control structure determines the flow of the program based on the value of a variable. In the code below, the value of the &lt;code&gt;stats&lt;/code&gt; variable dictates whether the second and third blocks of code are run. In this case they won't be, as our value satisfies the first condition (&lt;code&gt;stats &amp;lt;= 20&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stats = 20 
if stats &amp;lt;= 20: 
    house = 'Hufflepuff' 
elif stats &amp;gt; 20 and stats &amp;lt; 40: 
    house = 'Slytherin' 
else: 
    if stats &amp;lt;= 60: 
        house = 'Ravenclaw'
    else: 
        house = 'Gryffindor' 
print('Welcome wizard! Your house will be '+ house + '!!!' ) 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An &lt;code&gt;elif&lt;/code&gt; statement reads as "if the previous conditions were not true, then try this condition". The &lt;code&gt;else&lt;/code&gt; statement covers all remaining values. Here the first &lt;code&gt;else&lt;/code&gt; contains a nested statement that further splits the condition; it could equally be replaced with an &lt;code&gt;elif&lt;/code&gt;. Please note that nested statements get an additional four-character indentation. &lt;/p&gt;
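The nested version above can be flattened with `elif`; a sketch of the same logic (the `stats` value here is illustrative):

```python
# Same house assignment as above, flattened with elif instead of a nested if.
stats = 50
if stats <= 20:
    house = 'Hufflepuff'
elif stats < 40:        # only reached when stats > 20
    house = 'Slytherin'
elif stats <= 60:       # replaces the nested if inside else
    house = 'Ravenclaw'
else:
    house = 'Gryffindor'
print('Welcome wizard! Your house will be ' + house + '!!!')
```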

&lt;p&gt;&lt;strong&gt;Loops&lt;/strong&gt;&lt;br&gt;
Python has two loop constructs:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;for in&lt;/li&gt;
&lt;li&gt;while&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A loop iterates through each item in a list, string, or other iterable and performs an action on each item.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;In:  shoppinglist = ["apple", "banana", "cherry", "orange", 
     "kiwi", "melon"]

     for fruit in shoppinglist: 
         print(fruit)
Out: apple
     banana
     cherry
     orange
     kiwi
     melon
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;range()&lt;/code&gt; function creates a range of numbers for a &lt;code&gt;for&lt;/code&gt; loop to iterate through. Its first argument is the starting number (inclusive), the second is the stopping number (exclusive), and the third is the step.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;In:  for i in range(1, 10, 2): 
         i += 2
         print(i)
Out: 3
     5
     7
     9
     11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A &lt;code&gt;for&lt;/code&gt; loop iterates over a fixed collection of values, while a &lt;code&gt;while&lt;/code&gt; loop iterates until a certain condition stops holding or until it hits a &lt;code&gt;break&lt;/code&gt;. The &lt;code&gt;while&lt;/code&gt; loop checks its condition at the start of every iteration. If the condition never becomes false, the code will continue to run indefinitely; to stop the program, press Ctrl+C.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;In:  x = 0
     while x &amp;lt; 10: 
         print(x)
         x += 2
Out: 0
     2
     4
     6
     8

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
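A `break` statement exits a `while` loop immediately, even a `while True` loop that would otherwise run forever. A minimal sketch (the numbers are illustrative):

```python
# Sum 1 + 2 + 3 + ... until the running total exceeds 20.
total = 0
n = 0
while True:          # would loop forever without the break below
    n += 1
    total += n
    if total > 20:
        break        # exit the loop immediately
print(total)  # 21
```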



&lt;p&gt;&lt;strong&gt;Error Handling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are also controls in Python to handle exceptions (errors). This is done with &lt;code&gt;try&lt;/code&gt; and &lt;code&gt;except&lt;/code&gt; blocks. When an error is encountered, the &lt;code&gt;try&lt;/code&gt; block halts and control is transferred to the &lt;code&gt;except&lt;/code&gt; block.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;all_num = []
while num != 'q': 
    try: 
        num = int(input('Type a number!'))
        all_num.append(num)
        print('Press "q" to quit!')
    except ValueError: 
        break
print(f' Your numbers: {all_num} \nAverage: \ 
       {sum(all_num)/len(all_num)}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Flow control is an essential computer science concept that serves as a foundation for writing code and functions. It is used under the hood in most Python libraries. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Conceptual Programming with Python by Thorsten Altenkirch and Isaac Triguero. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=6iF8Xb7Z3wQ"&gt;Python Tutorials for Beginners by Corey Schafer.&lt;/a&gt; &lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>python</category>
      <category>conditionals</category>
      <category>ifstatement</category>
      <category>basepython</category>
    </item>
    <item>
      <title>A Basic Guide to OLS</title>
      <dc:creator>rusalka013</dc:creator>
      <pubDate>Sat, 26 Mar 2022 18:13:42 +0000</pubDate>
      <link>https://dev.to/rusalka013/a-basic-guide-to-ols-8bk</link>
      <guid>https://dev.to/rusalka013/a-basic-guide-to-ols-8bk</guid>
      <description>&lt;p&gt;If you have ever fit an OLS model from statsmodels for a linear regression, you have naturally come to the question: "What the heck does all this mean?" &lt;/p&gt;

&lt;p&gt;The OLS summary can be intimidating, as it presents not just the R-squared score but many test statistics associated with the linear regression model. This post is intended to demystify OLS and provide guidance on interpreting its summary. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Background&lt;/strong&gt;&lt;br&gt;
Let's start with a background of Linear Regression and OLS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Linear regression&lt;/em&gt;&lt;/strong&gt; is a statistical method for estimating the relationship between predictors (also referred to as features or independent variables (X)) and the target (dependent/response (y)) variable we are trying to predict. This relationship should be linear in order to use this model. &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                       Linear equation
             ŷ = β0 + β1x1 + β2x2 + β3x3 + ... + βnxn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;ŷ (y-hat) - estimated value of the target (response)&lt;br&gt;
β0 - value of y when x=0 (intercept/constant)&lt;br&gt;
x1, x2, x3, ..., xn - predictors &lt;br&gt;
β1, β2, β3, ..., βn - slopes or coefficients of the corresponding predictors.&lt;/p&gt;
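As a quick illustration with made-up numbers, the linear equation can be evaluated directly in Python:

```python
# Hypothetical fitted values for a two-predictor model (numbers are made up).
b0 = 2.0              # β0, the intercept
coefs = [0.5, 1.5]    # β1, β2
x = [4.0, 2.0]        # predictor values x1, x2

# ŷ = β0 + β1*x1 + β2*x2
y_hat = b0 + sum(b * xi for b, xi in zip(coefs, x))
print(y_hat)  # 7.0
```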

&lt;p&gt;&lt;strong&gt;&lt;em&gt;OLS&lt;/em&gt;&lt;/strong&gt; stands for Ordinary Least Squares. "Least Squares" refers to the mathematical method it uses: minimizing the sum of squared errors. Since the OLS model is non-robust, it is sensitive to outliers. &lt;/p&gt;

&lt;p&gt;Importing libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import statsmodels.api as sm
from statsmodels.formula.api import ols
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Code to run OLS model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;y = df['price']
X = df.drop('price', axis=1)
base_model = sm.OLS(y, sm.add_constant(X)).fit()
base_model.summary()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Please note that the capitalized &lt;code&gt;OLS&lt;/code&gt; model doesn't automatically add an intercept (constant), so you have to add it to X (the independent variables) yourself, e.g. with &lt;code&gt;sm.add_constant&lt;/code&gt;.) &lt;/p&gt;

&lt;p&gt;Let's look at the example of OLS: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qg7ockak6o6ixrskouk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qg7ockak6o6ixrskouk.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OLS Summary Interpretation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The left side of the top table lists model and data specs: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Dep. Variable&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Your target variable. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Model&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
An abbreviated version of Method.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Method&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Name of the technique/formula behind the model. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Date and Time&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Date/time the model was created. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;No. Observations&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Number of data entries (rows) in the dataset used for the model. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Df Residuals (Degrees of Freedom)&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The number of observations minus the number of parameters/features (including the intercept/constant). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Df Model&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The number of parameters (not including intercept). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Covariance Type&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The covariance estimator used; the default, "nonrobust", is sensitive to outliers. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The right side of the top table describes model performance (goodness of fit): &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;R-squared&lt;/strong&gt;&lt;/em&gt; (also known as the coefficient of determination) measures model performance: how much of the variation in the target the model explains. For instance, if our model's R-squared = 0.75, it means that 75% of the variation in the target (price) is explained by this model. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;adj. R-squared&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
A modified version of R-squared adjusted for the number of independent features used in the model. Since R-squared can't go down, but can only stay the same or increase with every additional parameter, adj. R-squared penalizes the model for extra features. If the model has more than one independent feature, disregard R-squared and use adj. R-squared instead. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;F-statistic and Prob (F-statistic)&lt;/strong&gt;&lt;/em&gt; &lt;br&gt;
The F-statistic indicates whether our findings are statistically significant. Prior to running the model we assume that there is no linear relationship between the predictors (X, the independent variables or features) and the target (price). In other words, H0: no relationship, and H1: there is a relationship between predictors and target. We assume a significance level (alpha) of 0.05. Since Prob (F-statistic) is 0.00 for our model, we reject H0 and accept H1. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Log-Likelihood&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Measures model fit. The higher the value, the better the model fits the data. The value can range from negative infinity to positive infinity. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;AIC&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The Akaike information criterion (AIC) is a metric to compare the fit of regression models.&lt;br&gt;
An AIC value is neither “good” nor “bad” on its own, because AIC is used to compare regression models: the model with the lowest AIC suggests the best fit. The AIC value in isolation (when not compared to other models' AIC) is not meaningful.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;BIC&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The Bayesian Information Criterion (BIC) is another metric used to compare the fit of regression models. As with AIC, the lowest score indicates the best-fitting model. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
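To make R-squared concrete, here is a minimal sketch that computes it by hand on a toy dataset (the numbers are made up; statsmodels reports the same quantity in the summary):

```python
# Toy data: actual target values and model predictions (made-up numbers).
y      = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.8, 5.1, 7.2, 8.9]

mean_y = sum(y) / len(y)
ss_res = sum((a - p) ** 2 for a, p in zip(y, y_pred))  # residual sum of squares
ss_tot = sum((a - mean_y) ** 2 for a in y)             # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))  # 0.995
```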

&lt;p&gt;The middle table presents the coefficients report: &lt;br&gt;
The left column lists the constant and the predictors used in the model. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;coef&lt;/strong&gt;&lt;/em&gt; &lt;br&gt;
Estimated value of a parameter. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;std err&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Standard error of the coefficient estimate. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;t&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The t-statistic used to test the significance of a specific parameter. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;P&amp;gt;|t|&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The p-value of the t-statistic associated with a parameter. A p-value &amp;lt; 0.05 indicates that the parameter is significant and worth keeping in the model. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;[95.0% Coef. Interval]&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The 95% confidence interval for the coefficient. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
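For large samples, the reported confidence interval is roughly the coefficient plus or minus 1.96 standard errors; a sketch with hypothetical values:

```python
# Approximate 95% confidence interval for a coefficient (hypothetical numbers).
coef = 2.5       # estimated coefficient from the middle table
std_err = 0.4    # its standard error
z = 1.96         # ~97.5th percentile of the standard normal distribution

lower = coef - z * std_err
upper = coef + z * std_err
print(round(lower, 3), round(upper, 3))  # 1.716 3.284
```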

&lt;p&gt;The bottom table covers residual diagnostics and multicollinearity: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Skew&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Indicates whether the errors are normally distributed. Skewness = 0 translates to perfect symmetry. A negative skew indicates that the errors are left-skewed; a positive skew indicates that the residual distribution is right-skewed.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Kurtosis&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Measures the peakedness of the residual distribution. Kurtosis = 3 indicates a normal distribution; kurtosis &amp;gt; 3 means higher peakedness.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Omnibus (D'Agostino's test)&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Test for skewness and kurtosis (or normality of residuals). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Prob(Omnibus)&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The p-value of the Omnibus test statistic. H0: the residuals are normally distributed. With alpha = 0.05, if Prob(Omnibus) &amp;lt; 0.05, reject H0: the residuals are not normally distributed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Jarque-Bera&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Another test for skewness and kurtosis (normality of residuals). H0: the residuals are normally distributed. With alpha = 0.05, if JB is roughly &amp;gt; 6, reject H0; if JB is close to 0, we fail to reject H0. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Prob(JB)&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The p-value of the Jarque-Bera statistic. If Prob(JB) &amp;lt; 0.05 (alpha), reject H0: the residuals are not normally distributed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Durbin-Watson&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
A test for autocorrelation in the residuals. Values between ~1.5 and ~2.5 (close to 2) suggest no strong autocorrelation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Cond. No&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
An indicator of multicollinearity. If predictors are strongly correlated, you will get a note at the bottom of the OLS report like this: "The condition number is large, 7.53e+05. This might indicate that there are strong multicollinearity or other numerical problems."&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
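The Durbin-Watson statistic can be computed directly from the residuals; a value near 2 means consecutive residuals are roughly uncorrelated. A sketch with made-up residuals:

```python
# Durbin-Watson statistic computed by hand (residuals are made-up numbers).
resid = [0.5, -0.3, 0.2, -0.4, 0.1]

num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
den = sum(e ** 2 for e in resid)
dw = num / den
print(round(dw, 3))  # 2.727
```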

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
To sum this topic up, the main statistics to look for are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;R-squared (or adj. R-squared if multiple features are used in model)&lt;/li&gt;
&lt;li&gt;Prob(F-statistic)&lt;/li&gt;
&lt;li&gt;Predictor coefficients for interpretation. &lt;/li&gt;
&lt;li&gt;P&amp;gt;|t|&lt;/li&gt;
&lt;li&gt;Prob(Omnibus), Prob(JB), Skew, Kurtosis for normality of residuals assumption. &lt;/li&gt;
&lt;li&gt;Durbin-Watson for the no-autocorrelation assumption.&lt;/li&gt;
&lt;li&gt;Cond. No. for multicollinearity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even though the summary provides an extensive report, it is still important to check the linear regression assumptions with additional plots and tests. &lt;/p&gt;

&lt;p&gt;Resources: &lt;br&gt;
&lt;a href="https://www.statsmodels.org/devel/generated/statsmodels.regression.linear_model.OLS.html" rel="noopener noreferrer"&gt;https://www.statsmodels.org/devel/generated/statsmodels.regression.linear_model.OLS.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.statology.org/interpret-log-likelihood/" rel="noopener noreferrer"&gt;https://www.statology.org/interpret-log-likelihood/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>datascience</category>
      <category>ols</category>
      <category>linearregression</category>
    </item>
  </channel>
</rss>
