<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alejandro Piad</title>
    <description>The latest articles on DEV Community by Alejandro Piad (@apiad).</description>
    <link>https://dev.to/apiad</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F340057%2Fb2c9ef46-0692-404f-947b-07b7c13d45be.jpeg</url>
      <title>DEV Community: Alejandro Piad</title>
      <link>https://dev.to/apiad</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/apiad"/>
    <language>en</language>
    <item>
      <title>Introduction to Automated Machine Learning in Python with AutoGOAL</title>
      <dc:creator>Alejandro Piad</dc:creator>
      <pubDate>Thu, 16 Jul 2020 16:32:03 +0000</pubDate>
      <link>https://dev.to/apiad/introduction-to-automated-machine-learning-in-python-with-autogoal-45n4</link>
      <guid>https://dev.to/apiad/introduction-to-automated-machine-learning-in-python-with-autogoal-45n4</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;span&gt;Photo by &lt;a href="https://unsplash.com/@charlottemsk?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Charlotte Karlsen&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/mountain-climbing?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://autogoal.github.io"&gt;AutoGOAL&lt;/a&gt; is a novel Python framework for &lt;em&gt;Automated Machine Learning&lt;/em&gt;, also known as &lt;a href="https://en.wikipedia.org/wiki/Automated_machine_learning"&gt;AutoML&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AutoML?
&lt;/h2&gt;

&lt;p&gt;AutoML is an exciting new field of machine learning that attempts to bridge the gap between highly complex machine learning techniques and non-experts. In other words, it reduces the entry barrier to the world of machine learning for those of us who don't have the time or resources to learn all the intricacies of each algorithm but who need to solve real problems.&lt;/p&gt;

&lt;p&gt;There are many flavours of AutoML, but from a pragmatic point of view you can think of it as the design of high-level tools that automate most of the machine learning process, from data preprocessing to model selection and parameter tuning. The underlying problem is that even though machine learning is very promising, getting a real machine learning algorithm to work with real data beyond academic examples is hard: you have to prepare the data, select an algorithm (or family of algorithms), and possibly tune a bunch of very specific parameters, such as regularization factors, the number of neurons in a neural network layer, activation functions, and so on. There are simply too many options, and it requires a non-trivial level of expertise to even understand what they mean and, worse, how they impact the final performance of your model.&lt;/p&gt;

&lt;p&gt;Ideally, getting machine learning to work should be as easy as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;machine_learning&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BlackMagic&lt;/span&gt;

&lt;span class="n"&gt;algorithm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;BlackMagic&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;algorithm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;learn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;my_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;    &lt;span class="c1"&gt;# freshly taken from your DB
&lt;/span&gt;&lt;span class="n"&gt;algorithm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;new_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# maybe even from the users? 
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Unfortunately, current machine learning tools are far from this ideal, but AutoML researchers are trying to get there. For this reason, there is a lot of academic research as well as buzz around AutoML right now. &lt;br&gt;
If you want a (quite technical) introduction, the &lt;a href="https://www.automl.org/book/"&gt;AutoML book&lt;/a&gt; is a great resource.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Actually, next Saturday, July 18th, our team will be presenting AutoGOAL's first iteration at the &lt;a href="https://sites.google.com/view/automl2020/home"&gt;AutoML Workshop&lt;/a&gt; co-located with the &lt;a href="//icml.cc"&gt;International Conference on Machine Learning (ICML)&lt;/a&gt;, one of the top academic conferences in machine learning.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, even though the field is young, there are plenty of &lt;a href="https://github.com/windmaple/awesome-AutoML#tools-and-projects"&gt;awesome AutoML tools&lt;/a&gt; already out there that you can use today. The most useful ones, at least from a new developer's perspective, are the ones that give you black-box machine learning solutions. &lt;/p&gt;

&lt;p&gt;If you've heard of AutoML before in the open-source world, chances are you've heard of &lt;a href="https://automl.github.io/auto-sklearn/master/"&gt;AutoSklearn&lt;/a&gt;, &lt;a href="https://github.com/automl/autoweka"&gt;AutoWeka&lt;/a&gt; or &lt;a href="https://autokeras.com/"&gt;AutoKeras&lt;/a&gt;. These are wonderful tools which, as their names might hint, act as wrappers on top of very well-known machine learning libraries to give you something like that ideal black-box algorithm. If you need out-of-the-box machine learning solutions &lt;em&gt;today&lt;/em&gt;, by all means, go look at these tools.&lt;/p&gt;
&lt;h2&gt;
  
  
  The AutoGOAL approach to AutoML
&lt;/h2&gt;

&lt;p&gt;AutoGOAL is a new library in this world that tries to appeal to both high-level (non-expert) and low-level (expert) users.&lt;/p&gt;

&lt;p&gt;For example, with AutoGOAL you can do something like this (for real):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;autogoal.ml&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AutoML&lt;/span&gt; 
&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="c1"&gt;# load data
&lt;/span&gt;
&lt;span class="n"&gt;automl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;AutoML&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;automl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Cool, isn't it? AutoGOAL will automatically search through a vast collection of different algorithms (things like logistic regression, decision trees, and some neural networks) and find an optimal (or at least good enough) solution within the specified time and memory constraints. However, this is no silver bullet: there are &lt;em&gt;a lot&lt;/em&gt; of restrictions on what &lt;code&gt;X&lt;/code&gt; and &lt;code&gt;y&lt;/code&gt; can be. Still, it is a step closer to that ideal.&lt;/p&gt;

&lt;p&gt;This is the &lt;strong&gt;high-level API&lt;/strong&gt;, a black-box &lt;code&gt;AutoML&lt;/code&gt; class that works with many different problem types, from image classification to entity recognition in text. Under the hood, AutoGOAL has wrappers for hundreds of different algorithms from &lt;code&gt;sklearn&lt;/code&gt;, &lt;code&gt;gensim&lt;/code&gt;, &lt;code&gt;nltk&lt;/code&gt;, &lt;code&gt;pytorch&lt;/code&gt;, &lt;code&gt;keras&lt;/code&gt;, &lt;code&gt;spacy&lt;/code&gt;, and more. And this is the first difference between AutoGOAL and &lt;em&gt;most&lt;/em&gt; other similar tools: AutoGOAL doesn't know about any specific API or library, nor does it implement any machine learning itself; it's a thin collection of wrappers compatible with virtually anything that even resembles a machine learning algorithm.&lt;/p&gt;

&lt;p&gt;So, if you install AutoGOAL now (&lt;code&gt;pip install autogoal&lt;/code&gt;) you will get only this thin layer. You &lt;strong&gt;have&lt;/strong&gt; to install &lt;code&gt;sklearn&lt;/code&gt;, &lt;em&gt;and/or&lt;/em&gt; &lt;code&gt;keras&lt;/code&gt;, &lt;em&gt;and/or&lt;/em&gt; the other libraries, and AutoGOAL will then discover them and automatically use them. We are continuously adding new wrappers (&lt;code&gt;opencv&lt;/code&gt; is coming soon, for example).&lt;/p&gt;
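&lt;p&gt;This kind of discovery of installed libraries can be done with nothing more than import probing. The sketch below shows the generic pattern (it is an illustration of the idea, not AutoGOAL's actual code, and the candidate list is hypothetical):&lt;/p&gt;

```python
import importlib


def find_backends(candidates=("sklearn", "keras", "nltk", "spacy")):
    """Return the subset of candidate libraries that can be imported."""
    available = []
    for name in candidates:
        try:
            importlib.import_module(name)
            available.append(name)
        except ImportError:
            # The library is not installed; its wrappers are skipped.
            pass
    return available
```

&lt;p&gt;A framework can then register wrappers only for the backends that &lt;code&gt;find_backends&lt;/code&gt; reports as available.&lt;/p&gt;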

&lt;h2&gt;
  
  
  Low-level API for fine control
&lt;/h2&gt;

&lt;p&gt;There are many details to AutoGOAL, but the main building blocks are based on the concept of defining classes or methods with &lt;strong&gt;type annotations&lt;/strong&gt; that indicate the space of parameter values. For example, suppose you want to try a logistic regression from &lt;code&gt;sklearn&lt;/code&gt; on some dataset. Here is some basic code to instantiate and evaluate it on random data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;sklearn.datasets&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;make_classification&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;sklearn.model_selection&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;train_test_split&lt;/span&gt;

&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;make_classification&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Fixed seed for reproducibility
&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;sklearn.linear_model&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LogisticRegression&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;evaluate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;estimator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;iters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;scores&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;iters&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_test&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;train_test_split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.25&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;estimator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;estimator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;lr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;LogisticRegression&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;evaluate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# around 0.83
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So far so good, but maybe we could do better with a different set of parameters. Logistic regression has at least two parameters that heavily influence its performance: the penalty function and the regularization strength.&lt;/p&gt;
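&lt;p&gt;For reference, the manual alternative is the classic grid-search loop. A minimal sketch, where &lt;code&gt;score&lt;/code&gt; is a stand-in objective (in a real run it would call the cross-validated &lt;code&gt;evaluate&lt;/code&gt; function above on a configured &lt;code&gt;LogisticRegression&lt;/code&gt;):&lt;/p&gt;

```python
import itertools


def score(penalty, C):
    # Stand-in for the real objective; a real run would call
    # evaluate(LogisticRegression(penalty=penalty, C=C, solver="liblinear")).
    return 0.8 + (0.03 if penalty == "l2" else 0.0) - abs(C - 1.0) / 100


penalties = ["l1", "l2"]
Cs = [0.1, 0.5, 1.0, 5.0, 10.0]

best_params, best_score = None, float("-inf")
for penalty, C in itertools.product(penalties, Cs):
    s = score(penalty, C)
    if s > best_score:
        best_params, best_score = (penalty, C), s

# best_params → ("l2", 1.0) for this toy score
```

&lt;p&gt;The problem with this approach is that the grid grows multiplicatively with every parameter, and a fixed grid cannot explore a continuous range like the regularization strength very well.&lt;/p&gt;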

&lt;p&gt;Instead of writing a loop through a bunch of different parameters, we can use AutoGOAL to automatically explore the space of possible combinations. We can do this with the class-based API by providing annotations for the parameters we want to explore.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;autogoal.grammar&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Continuous&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Categorical&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;LR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LogisticRegression&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="bp"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;penalty&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Categorical&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"l1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"l2"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Continuous&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt;
        &lt;span class="nb"&gt;super&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;penalty&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;penalty&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;solver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"liblinear"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;penalty: Categorical("l1", "l2")&lt;/code&gt; annotation tells AutoGOAL that for this class the parameter &lt;code&gt;penalty&lt;/code&gt; can take values from a list of predefined values. Likewise, the &lt;code&gt;C: Continuous(0.1, 10)&lt;/code&gt; annotation indicates that the parameter &lt;code&gt;C&lt;/code&gt; can take a float value in the specified range.&lt;/p&gt;
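&lt;p&gt;To see why annotations are enough, here is a toy re-implementation of the idea in plain Python. It mirrors (but does not use) AutoGOAL's &lt;code&gt;Categorical&lt;/code&gt; and &lt;code&gt;Continuous&lt;/code&gt;: each annotation object describes a value space, and &lt;code&gt;inspect&lt;/code&gt; recovers them from &lt;code&gt;__init__&lt;/code&gt; to sample a random instance.&lt;/p&gt;

```python
import inspect
import random


class Categorical:
    def __init__(self, *options):
        self.options = list(options)

    def sample(self):
        return random.choice(self.options)


class Continuous:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def sample(self):
        return random.uniform(self.lo, self.hi)


def sample_instance(cls):
    """Read __init__ annotations and build a randomly configured instance."""
    sig = inspect.signature(cls.__init__)
    kwargs = {name: param.annotation.sample()
              for name, param in sig.parameters.items()
              if name != "self"}
    return cls(**kwargs)


class LR:
    def __init__(self, penalty: Categorical("l1", "l2"), C: Continuous(0.1, 10)):
        self.penalty, self.C = penalty, C


lr = sample_instance(LR)  # e.g. LR with penalty="l1" and some C in [0.1, 10]
```

&lt;p&gt;AutoGOAL does something more sophisticated on top of this introspection, as we will see next, but the entry point is the same: the annotations are ordinary Python objects attached to the constructor signature.&lt;/p&gt;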

&lt;p&gt;Now we will use AutoGOAL to automatically generate different instances of our &lt;code&gt;LR&lt;/code&gt; class. We achieve this by building a &lt;a href="https://en.wikipedia.org/wiki/Context-free_grammar"&gt;context-free grammar&lt;/a&gt; that describes all possible instances.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;autogoal.grammar&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt;  &lt;span class="n"&gt;generate_cfg&lt;/span&gt;

&lt;span class="n"&gt;grammar&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;  &lt;span class="n"&gt;generate_cfg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LR&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;grammar&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is the output for &lt;code&gt;print(grammar)&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;LR&amp;gt;         := LR (penalty=&amp;lt;LR_penalty&amp;gt;, C=&amp;lt;LR_C&amp;gt;)
&amp;lt;LR_penalty&amp;gt; := categorical (options=['l1', 'l2'])
&amp;lt;LR_C&amp;gt;       := continuous (min=0.1, max=10)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Basically, AutoGOAL introspects the type annotations and builds a grammar that describes the space of all possible instances of the &lt;code&gt;LR&lt;/code&gt; class. We can now use AutoGOAL to search for the best instance, which will automatically try many different combinations of parameters intelligently (technically, it is using a probabilistic variant of an evolutionary algorithm called &lt;a href="https://en.wikipedia.org/wiki/Grammatical_evolution"&gt;Grammatical Evolution&lt;/a&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;autogoal.search&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PESearch&lt;/span&gt;

&lt;span class="n"&gt;optimizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;PESearch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;grammar&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;evaluate&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;best&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After a few iterations, &lt;code&gt;best&lt;/code&gt; will be the best instance of &lt;code&gt;LR&lt;/code&gt; and &lt;code&gt;fn&lt;/code&gt; will be the actual value of &lt;code&gt;evaluate&lt;/code&gt; for that instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;Now, this can go much deeper. You can define a full hierarchy of classes, with parameters that are instances of other classes (even, recursively, of the same class), and AutoGOAL will infer a proper grammar for that space and optimize in it.&lt;/p&gt;
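&lt;p&gt;As a sketch of the mechanism (again illustrative, not AutoGOAL's implementation), recursion falls out naturally once annotation introspection is written as a small recursive function: a parameter annotated with a value space is sampled directly, and a parameter annotated with another class is built recursively.&lt;/p&gt;

```python
import inspect
import random


class Continuous:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def sample(self):
        return random.uniform(self.lo, self.hi)


def build(annotation):
    # Leaf annotations know how to sample themselves; any other
    # annotation is treated as a class to be constructed recursively.
    if hasattr(annotation, "sample"):
        return annotation.sample()
    sig = inspect.signature(annotation.__init__)
    kwargs = {name: build(param.annotation)
              for name, param in sig.parameters.items()
              if name != "self"}
    return annotation(**kwargs)


class Scaler:
    def __init__(self, factor: Continuous(0.0, 1.0)):
        self.factor = factor


class Pipeline:
    def __init__(self, scaler: Scaler, C: Continuous(0.1, 10)):
        self.scaler, self.C = scaler, C


p = build(Pipeline)  # a Pipeline holding a freshly built Scaler
```

&lt;p&gt;The grammar AutoGOAL infers for such a hierarchy has one production per class, so nested and recursive parameter spaces come for free.&lt;/p&gt;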

&lt;p&gt;As long as you can define your problem as a search for the best program (i.e., instances of classes with parameters) as measured by some function, AutoGOAL can help you out. &lt;a href="https://autogoal.github.io/examples/"&gt;In the docs&lt;/a&gt; you can find much more complex examples, both on state-of-the-art academic datasets and on problems that are not even related to machine learning.&lt;/p&gt;

&lt;p&gt;AutoGOAL is still in alpha stage and in active development. If you need a production-ready AutoML framework, there are alternatives with out-of-the-box solutions. But if you want to tinker around, AutoGOAL provides a great level of expressiveness and requires very little code. You can find it on &lt;a href="https://github.com/autogoal/autogoal"&gt;GitHub&lt;/a&gt; and on &lt;a href="https://hub.docker.com/r/autogoal/autogoal"&gt;Docker Hub&lt;/a&gt;, preloaded with a bunch of machine learning libraries and &lt;a href="https://hub.docker.com/layers/autogoal/autogoal/gpu/images/sha256-d55773d964386ee4f3d51c4f9b3a9d63388078d2c9b3bcd8abb8ce5c12c96f49?context=explore"&gt;GPU support&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
    </item>
  </channel>
</rss>
