<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: sandro</title>
    <description>The latest articles on DEV Community by sandro (@sandro).</description>
    <link>https://dev.to/sandro</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F190139%2F11f8780a-c83f-4d46-9e07-58226a1221bc.png</url>
      <title>DEV Community: sandro</title>
      <link>https://dev.to/sandro</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sandro"/>
    <language>en</language>
    <item>
      <title>Learn Tesseract</title>
      <dc:creator>sandro</dc:creator>
      <pubDate>Tue, 20 Aug 2019 21:28:21 +0000</pubDate>
      <link>https://dev.to/sandro/learn-tesseract-3m3l</link>
      <guid>https://dev.to/sandro/learn-tesseract-3m3l</guid>
      <description>&lt;h1&gt;
  
  
  Introduction to the Adam Optimization Algorithm
&lt;/h1&gt;

&lt;p&gt;The Adam optimization algorithm is an extension to stochastic gradient descent that has recently seen broader adoption for deep learning applications in computer vision and natural language processing.&lt;br&gt;
(&lt;a href="https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/"&gt;https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/&lt;/a&gt;)&lt;/p&gt;
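&lt;p&gt;In update form (the standard presentation of the algorithm, not taken from the article above), Adam keeps exponentially decaying averages of past gradients and their squares, corrects their bias, and scales the step per parameter:&lt;/p&gt;

```latex
\begin{align*}
m_t      &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t   &&\text{(first moment: mean of gradients)} \\
v_t      &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2 &&\text{(second moment: mean of squared gradients)} \\
\hat m_t &= m_t/(1-\beta_1^t), \qquad \hat v_t = v_t/(1-\beta_2^t) &&\text{(bias correction)} \\
\theta_t &= \theta_{t-1} - \alpha\, \hat m_t \big/ \big(\sqrt{\hat v_t} + \epsilon\big) &&\text{(parameter update)}
\end{align*}
```

&lt;p&gt;Here g_t is the gradient at step t, α is the learning rate, and β₁, β₂, ε are the usual hyperparameters.&lt;/p&gt;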

&lt;h1&gt;
  
  
  ...But, what is the stochastic gradient descent?
&lt;/h1&gt;

&lt;p&gt;Stochastic gradient descent (SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It is called stochastic because the method uses randomly selected (or shuffled) samples to evaluate the gradients; hence SGD can be regarded as a stochastic approximation of gradient descent optimization.&lt;br&gt;
(wiki: &lt;a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent"&gt;https://en.wikipedia.org/wiki/Stochastic_gradient_descent&lt;/a&gt;)&lt;/p&gt;
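&lt;p&gt;Concretely (notation assumed, not from the quoted article): for a loss that is an average over training samples, SGD replaces the full gradient with the gradient of one randomly chosen sample (or mini-batch) per step:&lt;/p&gt;

```latex
L(\theta) = \frac{1}{n} \sum_{i=1}^{n} L_i(\theta),
\qquad
\theta \leftarrow \theta - \eta\, \nabla L_i(\theta)
\quad \text{for a randomly drawn index } i
```

&lt;p&gt;Each step is cheap and noisy; averaged over many steps the noise cancels and the iterates approximate full gradient descent.&lt;/p&gt;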

&lt;h1&gt;
  
  
  What do I do with this information?
&lt;/h1&gt;

&lt;p&gt;An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm like &lt;strong&gt;gradient descent&lt;/strong&gt; combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight.&lt;br&gt;
(wiki: &lt;a href="https://en.wikipedia.org/wiki/Long_short-term_memory"&gt;https://en.wikipedia.org/wiki/Long_short-term_memory&lt;/a&gt;)&lt;/p&gt;
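&lt;p&gt;That last clause is just the usual gradient-descent weight update, applied to gradients that backpropagation through time computes:&lt;/p&gt;

```latex
w \leftarrow w - \eta\, \frac{\partial E}{\partial w}
```

&lt;p&gt;where E is the error at the LSTM's output layer and η is the learning rate.&lt;/p&gt;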

</description>
      <category>tesseract</category>
      <category>ocr</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Tesseract Training</title>
      <dc:creator>sandro</dc:creator>
      <pubDate>Thu, 04 Jul 2019 22:33:05 +0000</pubDate>
      <link>https://dev.to/sandro/tesseract-training-49ji</link>
      <guid>https://dev.to/sandro/tesseract-training-49ji</guid>
      <description>&lt;h1&gt;
  
  
  Overview of Training Process
&lt;/h1&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract-4.00#introduction"&gt;https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract-4.00#introduction&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Conceptually the same:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Prepare training text.&lt;/li&gt;
&lt;li&gt;Render text to image + box file. (Or create hand-made box files for existing image data.)&lt;/li&gt;
&lt;li&gt;Make a unicharset file. (Can be partially specified, i.e. created manually.)&lt;/li&gt;
&lt;li&gt;Make a starter traineddata from the unicharset and optional dictionary data.&lt;/li&gt;
&lt;li&gt;Run tesseract to process image + box file to make a training data set.&lt;/li&gt;
&lt;li&gt;Run training on the training data set.&lt;/li&gt;
&lt;li&gt;Combine data files.&lt;/li&gt;
&lt;/ol&gt;
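&lt;p&gt;The steps above map roughly onto the Tesseract 4 training tools like this. A sketch only: file names, fonts and directories are placeholders, and each tool takes many more options than shown.&lt;/p&gt;

```shell
# 2. Render training text to a .tif image + .box file (placeholder paths).
text2image --text=training_text.txt --outputbase=eng.myfont.exp0 \
  --font='Arial' --fonts_dir=/usr/share/fonts

# 4. Build a starter traineddata from the unicharset (+ optional wordlists).
combine_lang_model --input_unicharset eng.unicharset \
  --script_dir ../langdata --lang eng --output_dir ./starter

# 5. Process image + box file into an .lstmf training file.
tesseract eng.myfont.exp0.tif eng.myfont.exp0 --psm 6 lstm.train

# 6. Train on the .lstmf files named in a listfile.
lstmtraining --traineddata ./starter/eng/eng.traineddata \
  --model_output ./output/base --train_listfile eng.training_files.txt

# 7. Combine the best checkpoint into a final traineddata file.
lstmtraining --stop_training --continue_from ./output/base_checkpoint \
  --traineddata ./starter/eng/eng.traineddata \
  --model_output ./output/eng.traineddata
```

&lt;p&gt;In practice tesstrain.sh (shown later) wraps steps 1-5 for you.&lt;/p&gt;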
&lt;h2&gt;
  
  
  The key differences are:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The boxes only need to be at the textline level. It is thus far easier to make training data from existing image data.&lt;/li&gt;
&lt;li&gt;The .tr files are replaced by .lstmf data files.&lt;/li&gt;
&lt;li&gt;Fonts can and should be mixed freely instead of being separate.&lt;/li&gt;
&lt;li&gt;The clustering steps (mftraining, cntraining, shapeclustering) are replaced with a single slow lstmtraining step.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Understanding the Various Files Used During Training
&lt;/h1&gt;

&lt;p&gt;As with base Tesseract, the completed LSTM model and everything else it needs is collected in the traineddata file. Unlike base Tesseract, a starter traineddata file is given during training, and has to be setup in advance. It can contain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Config file providing control parameters.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Unicharset&lt;/strong&gt; defining the character set.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Unicharcompress&lt;/strong&gt;, aka the recoder, which maps the unicharset further to the codes actually used by the neural network recognizer.&lt;/li&gt;
&lt;li&gt;Punctuation pattern dawg, with patterns of punctuation allowed around words.&lt;/li&gt;
&lt;li&gt;Word dawg. The system word-list language model.&lt;/li&gt;
&lt;li&gt;Number dawg, with patterns of numbers that are allowed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The bold elements must be provided; the others are optional, but if any of the dawgs are provided, the punctuation dawg must also be provided. A new tool, combine_lang_model, is provided to make a starter traineddata file from a unicharset and optional wordlists.&lt;/p&gt;
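&lt;p&gt;As an illustration of combine_lang_model, the wordlist flags feeding the dawgs look like this (all paths below are placeholders for your own langdata checkout and output directory):&lt;/p&gt;

```shell
combine_lang_model \
  --input_unicharset ~/tesstutorial/engtrain/eng.unicharset \
  --script_dir ../langdata \
  --words ../langdata/eng/eng.wordlist \
  --puncs ../langdata/eng/eng.punc \
  --numbers ../langdata/eng/eng.numbers \
  --lang eng \
  --output_dir ~/tesstutorial/starter
```

&lt;p&gt;Omitting --words/--puncs/--numbers yields a starter traineddata with no dawgs at all, which is also valid.&lt;/p&gt;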
&lt;h1&gt;
  
  
  Making Box Files
&lt;/h1&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract-4.00#creating-training-data"&gt;https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract-4.00#creating-training-data&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Multiple formats of box files are accepted by Tesseract 4 for LSTM training, though they are different from the one used by Tesseract 3 (details).&lt;/p&gt;

&lt;p&gt;Each line in the box file matches a 'character' (glyph) in the tiff image.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;symbol&amp;gt; &amp;lt;left&amp;gt; &amp;lt;bottom&amp;gt; &amp;lt;right&amp;gt; &amp;lt;top&amp;gt; &amp;lt;page&amp;gt;&lt;/code&gt;&lt;br&gt;
Where &lt;code&gt;&amp;lt;left&amp;gt; &amp;lt;bottom&amp;gt; &amp;lt;right&amp;gt; &amp;lt;top&amp;gt;&lt;/code&gt; could be bounding-box coordinates of a single glyph or of a whole textline (see examples).&lt;/p&gt;

&lt;p&gt;To mark an end-of-textline, a special line whose symbol is a tab character must be inserted after a series of lines.&lt;/p&gt;
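&lt;p&gt;As an illustration (the coordinates here are made up; the first character of the final line is a literal tab, marking the end of the textline):&lt;/p&gt;

```plaintext
A 25 12 34 25 0
B 36 12 45 25 0
C 47 12 56 25 0
	 25 12 56 25 0
```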



&lt;p&gt;Note that in all cases, even for right-to-left languages such as Arabic, the text transcription for the line should be ordered left-to-right. In other words, the network is going to learn left-to-right regardless of the language, and the right-to-left/bidi handling happens at a higher level inside Tesseract.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using tesstrain.sh
&lt;/h2&gt;

&lt;p&gt;The setup for running tesstrain.sh is the same as for base Tesseract. Use the --linedata_only option for LSTM training. Note that it is beneficial to have more training text and make more pages, as neural nets don't generalize as well and need to train on something similar to what they will be running on. If the target domain is severely limited, then all the dire warnings about needing a lot of training data may not apply, but the network specification may need to be changed.&lt;/p&gt;

&lt;p&gt;Training data is created using tesstrain.sh as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;src/training/tesstrain.sh --fonts_dir /usr/share/fonts --lang eng --linedata_only \
  --noextract_font_properties --langdata_dir ../langdata \
  --tessdata_dir ./tessdata --output_dir ~/tesstutorial/engtrain
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The above command makes LSTM training data equivalent to the data used to train base Tesseract for English. For making a general-purpose LSTM-based OCR engine, it is woefully inadequate, but makes a good tutorial demo.&lt;/p&gt;

&lt;p&gt;Now try this to make eval data for the 'Impact' font:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;src/training/tesstrain.sh --fonts_dir /usr/share/fonts --lang eng --linedata_only \
  --noextract_font_properties --langdata_dir ../langdata \
  --tessdata_dir ./tessdata \
  --fontlist "Impact Condensed" --output_dir ~/tesstutorial/engeval
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We will use that data later to demonstrate tuning.&lt;/p&gt;
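&lt;p&gt;"Tuning" here means continuing training from an existing model on the new data. A hedged sketch (every path below is a placeholder, and the .lstm file would first be extracted from a traineddata with combine_tessdata -e):&lt;/p&gt;

```shell
# Fine-tune an existing English model on the Impact eval data made above.
lstmtraining \
  --model_output ~/tesstutorial/impact_from_eng/impact \
  --continue_from ~/tesstutorial/eng/eng.lstm \
  --traineddata ~/tesstutorial/engtrain/eng/eng.traineddata \
  --train_listfile ~/tesstutorial/engeval/eng.training_files.txt \
  --max_iterations 1200
```

&lt;p&gt;Since the network weights start from a trained model rather than random values, only a modest number of iterations is needed.&lt;/p&gt;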

</description>
      <category>tesseract</category>
      <category>ocr</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
