<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: I love integers</title>
    <description>The latest articles on DEV Community by I love integers (@yuhuishishishi).</description>
    <link>https://dev.to/yuhuishishishi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F369521%2F3603d0f8-8581-4eff-83a6-b54aa8305dc2.png</url>
      <title>DEV Community: I love integers</title>
      <link>https://dev.to/yuhuishishishi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yuhuishishishi"/>
    <language>en</language>
    <item>
      <title>Learn NLP the dumb way</title>
      <dc:creator>I love integers</dc:creator>
      <pubDate>Wed, 31 May 2023 22:53:47 +0000</pubDate>
      <link>https://dev.to/yuhuishishishi/learn-nlp-the-dumb-way-7ea</link>
      <guid>https://dev.to/yuhuishishishi/learn-nlp-the-dumb-way-7ea</guid>
      <description>&lt;p&gt;I am preparing for interviews for machine learning engineer positions.&lt;br&gt;
So I am brushing up my skills in natural language processing and PyTorch as well. It's been a while since I worked on intensive machine learning model development. &lt;br&gt;
The past few years I have mostly been working on my own startup, doing fullstack engineering and platform development. &lt;/p&gt;

&lt;p&gt;My plan is to implement a series of classical NLP models in PyTorch from scratch.&lt;br&gt;
To make things more interesting, I will also find common datasets on Kaggle and see how these models perform on them. &lt;/p&gt;

&lt;p&gt;To be more specific, here is a list of models I am considering implementing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MLP &amp;amp; logistic regression&lt;/li&gt;
&lt;li&gt;text classification tasks -- RNN, LSTM, biLSTM&lt;/li&gt;
&lt;li&gt;NER task -- biLSTM + CRF&lt;/li&gt;
&lt;li&gt;machine translation -- transformer based architectures&lt;/li&gt;
&lt;/ul&gt;
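To set a concrete starting point, the first item on the list can be sketched from scratch in plain NumPy. This is a hypothetical toy, not the planned PyTorch implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, steps=200):
    # append a constant column so the last weight acts as the bias
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ w)               # predicted probabilities
        grad = Xb.T @ (p - y) / len(y)    # gradient of the mean log-loss
        w = w - lr * grad
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (sigmoid(Xb @ w) > 0.5).astype(int)
```

On a small separable dataset this recovers the right labels; the PyTorch version would swap the manual gradient for autograd.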

</description>
      <category>nlp</category>
      <category>machinelearning</category>
      <category>ai</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>CostGPT - A GPT-based cost estimator for Linode resources</title>
      <dc:creator>I love integers</dc:creator>
      <pubDate>Sat, 18 Feb 2023 21:00:00 +0000</pubDate>
      <link>https://dev.to/yuhuishishishi/costgpt-a-cost-estimator-for-linode-resource-cost-based-on-gpt-model-2aij</link>
      <guid>https://dev.to/yuhuishishishi/costgpt-a-cost-estimator-for-linode-resource-cost-based-on-gpt-model-2aij</guid>
      <description>&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;p&gt;CostGPT is a natural language based cloud resource cost estimator.&lt;/p&gt;

&lt;h3&gt;
  
  
  Category Submission:
&lt;/h3&gt;

&lt;p&gt;Wacky Wildcard&lt;/p&gt;

&lt;h3&gt;
  
  
  App Link
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://costgpt.cloud" rel="noopener noreferrer"&gt;costgpt.cloud&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Screenshots
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwx712z2ogjorev4zqddl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwx712z2ogjorev4zqddl.png" alt="CostGPT screenshot" width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Description
&lt;/h3&gt;

&lt;p&gt;CostGPT is built on &lt;a href="https://openai.com/api/" rel="noopener noreferrer"&gt;OpenAI's GPT API&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Users describe their application use case in natural language and CostGPT returns the estimated resources needed to run the application, and the estimated cost of each resource.&lt;/p&gt;

&lt;p&gt;For example, one user might say, &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I want to run an ecommerce website based on WooCommerce. The website has 200 SKUs and anticipates to have 200 transactions daily. The product detail pages have many images to display. I do not estimate the daily PV to exceed 10000."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;CostGPT will give an analysis of the resources needed to run the application, and why.&lt;br&gt;
It also summarizes the cost of each Linode cloud resource mentioned in the analysis.&lt;/p&gt;
&lt;h3&gt;
  
  
  Link to Source Code
&lt;/h3&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/yuhuishi-convect" rel="noopener noreferrer"&gt;
        yuhuishi-convect
      &lt;/a&gt; / &lt;a href="https://github.com/yuhuishi-convect/linode-cost-assistent" rel="noopener noreferrer"&gt;
        linode-cost-assistent
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;CostGPT&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;CostGPT is a natural language based cloud resource cost estimator.
It is built on &lt;a href="https://openai.com/api/" rel="nofollow noopener noreferrer"&gt;OpenAI's GPT API&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Users describe their cloud resource usage in natural language and CostGPT returns the estimated resources needed to run the application, and the estimated cost of each resource.&lt;/p&gt;
&lt;p&gt;See a &lt;a href="https://share.descript.com/view/jF153lCMvvP" rel="nofollow noopener noreferrer"&gt;demo video here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Try it out &lt;a href="https://costgpt.cloud/" rel="nofollow noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Usage&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Generate an API key from &lt;a href="https://openai.com/api/" rel="nofollow noopener noreferrer"&gt;OpenAI's API&lt;/a&gt;.
Copy the API key into a file at &lt;code&gt;backend/.env&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c1"&gt;echo&lt;/span&gt; &lt;span class="pl-s"&gt;&lt;span class="pl-k"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="pl-k"&gt;EOF&lt;/span&gt; &amp;gt; backend/.env&lt;/span&gt;
&lt;span class="pl-s"&gt;OPENAI_API_KEY=&amp;lt;your api key&amp;gt;&lt;/span&gt;
&lt;span class="pl-s"&gt;OPENAI_MODEL=text-davinci-003&lt;/span&gt;
&lt;span class="pl-s"&gt;&lt;span class="pl-k"&gt;EOF&lt;/span&gt;&lt;/span&gt;
&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Start the service locally.&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;docker-compose -f docker-compose.dev.yaml up

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; then visit http://localhost&lt;/span&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;License&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;The project is licensed under the Apache License 2.0.&lt;/p&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/yuhuishi-convect/linode-cost-assistent" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;h3&gt;
  
  
  Permissive License
&lt;/h3&gt;

&lt;p&gt;Apache 2.0&lt;/p&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;(I asked ChatGPT why one would build such a product; here are its thoughts.)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Easy to use: Not everyone is familiar with the technical jargon used in cloud computing, and so it can be challenging to understand how much different resources cost. A natural language-based cost estimator like CostGPT can make it much easier for non-technical users to understand the costs of different resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Time-saving: With a natural language-based cost estimator, users don't have to spend time researching the cost of different resources or creating complex spreadsheets to estimate the cost. They can simply ask CostGPT for an estimate and get an accurate answer in a matter of seconds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost-effective: By having a better understanding of the costs of different cloud resources, users can make more informed decisions about which resources to use and how to optimize their usage. This can lead to significant cost savings over time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accessibility: A natural language-based cost estimator can make cloud computing more accessible to a wider range of people, including those who may not have a technical background or who may have disabilities that make it difficult to use traditional cost estimation tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability: As organizations scale their cloud infrastructure, it becomes more challenging to manage costs. A natural language-based cost estimator like CostGPT can help organizations estimate and manage costs as their cloud usage grows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accuracy: With a natural language-based cost estimator, there is less risk of human error in calculating costs. CostGPT can provide accurate and consistent estimates every time, reducing the risk of unexpected costs or budget overruns.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How I built it
&lt;/h3&gt;

&lt;p&gt;I checked Linode's detailed pricing page and scanned through its offerings. I found the pricing straightforward and easy to understand (compared with my previous experience with AWS and Google Cloud). There are almost no hidden costs.&lt;/p&gt;

&lt;p&gt;The offerings are clear and lean, too. You get the essential building blocks for a modern app. &lt;/p&gt;

&lt;p&gt;I scraped the pricing and offering details into tables, and later provided them as part of the prompt sent to the GPT API. &lt;/p&gt;

&lt;p&gt;I designed the prompt to ask the GPT model to provide an architecture design using the resources provided by Linode.&lt;/p&gt;

&lt;p&gt;After getting an application architecture, I designed a second prompt to ask the GPT model to summarize the resources and their costs in table form. &lt;/p&gt;

&lt;p&gt;The UI is responsible for rendering this information to the user.&lt;/p&gt;

&lt;p&gt;This is the first time I have built with OpenAI's API and done some of the so-called &lt;em&gt;prompt engineering&lt;/em&gt;. I found that to get the desired results, you need to design the prompt carefully and choose what information to include in it. &lt;/p&gt;

&lt;p&gt;One prompt may not be enough; it may take a chain of different prompts to get what you need.&lt;/p&gt;
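A minimal sketch of that two-prompt chain. The complete() stub, the prompt wording, and the pricing snippet below are all hypothetical stand-ins, not CostGPT's actual prompts:

```python
# Hypothetical pricing snippet standing in for the scraped Linode tables.
PRICING_TABLE = "Shared 2 GB: $12/mo; Dedicated 4 GB: $36/mo"

def complete(prompt):
    # Stand-in for a call to the GPT completions API
    # (e.g. model text-davinci-003, as in the README).
    return "stub completion for: " + prompt.splitlines()[0]

def estimate_cost(use_case):
    # Prompt 1: ask for an architecture built from Linode resources.
    p1 = ("Linode offerings:\n" + PRICING_TABLE + "\n"
          "Design an architecture for this use case: " + use_case)
    architecture = complete(p1)
    # Prompt 2: summarize the resources and their costs as a table.
    p2 = ("Architecture:\n" + architecture + "\n"
          "Summarize each resource and its monthly cost in a table.")
    return complete(p2)
```

The second prompt sees only the first prompt's output, which is what makes it a chain rather than one big prompt.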

&lt;p&gt;It also gave me the chance to learn Linode's offerings, whose clarity and straightforwardness I highly appreciate. &lt;/p&gt;

&lt;p&gt;The whole application is deployed on a Linode compute node.&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Resources/Info
&lt;/h3&gt;

&lt;p&gt;Linode already has a &lt;a href="https://cloud-estimator.linode.com/s/" rel="noopener noreferrer"&gt;cost calculator&lt;/a&gt;, which is pretty neat.&lt;/p&gt;

&lt;p&gt;But for a non-technical audience and inexperienced engineers, it can be hard to decide the right "T-shirt size" to run their applications.&lt;/p&gt;

&lt;p&gt;I hope a product like CostGPT helps people get an initial understanding of their cloud costs.&lt;/p&gt;

</description>
      <category>top7</category>
      <category>gratitude</category>
    </item>
    <item>
      <title>JupyterHub addons for microk8s</title>
      <dc:creator>I love integers</dc:creator>
      <pubDate>Mon, 23 May 2022 04:58:16 +0000</pubDate>
      <link>https://dev.to/yuhuishishishi/jupyterhub-addons-for-microk8s-d1e</link>
      <guid>https://dev.to/yuhuishishishi/jupyterhub-addons-for-microk8s-d1e</guid>
      <description>&lt;p&gt;I find &lt;a href="https://microk8s.io/"&gt;microk8s&lt;/a&gt; pretty easy to use for dev and light-usage environments like a small lab or a team of data scientists. &lt;/p&gt;

&lt;p&gt;I wrote an addon for it that lets people install JupyterHub on a microk8s cluster with one command.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--566lAguM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/yuhuishi-convect"&gt;
        yuhuishi-convect
      &lt;/a&gt; / &lt;a href="https://github.com/yuhuishi-convect/jhub-addon"&gt;
        jhub-addon
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Jupyterhub addon for microk8s
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
JupyterHub addon for Microk8s&lt;/h1&gt;
&lt;h2&gt;
Usage&lt;/h2&gt;
&lt;p&gt;Please make sure you have &lt;code&gt;dns&lt;/code&gt; addon enabled and a default &lt;code&gt;StorageClass&lt;/code&gt; exists in your microk8s cluster.
For convenience, you can use&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;microk8s &lt;span class="pl-c1"&gt;enable&lt;/span&gt; dns
microk8s &lt;span class="pl-c1"&gt;enable&lt;/span&gt; hostpath-storage&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;To visit jupyterhub externally, you can enable &lt;code&gt;ingress&lt;/code&gt; and &lt;code&gt;metallb&lt;/code&gt; addons.&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;microk8s &lt;span class="pl-c1"&gt;enable&lt;/span&gt; ingress
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; the ip pool varies depending on your setup. see https://microk8s.io/docs/addon-metallb for more info&lt;/span&gt;
microk8s &lt;span class="pl-c1"&gt;enable&lt;/span&gt; metallb:10.64.140.43-10.64.140.49&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;To use jupyterhub&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Add repo&lt;/span&gt;
microk8s addons repo add https://github.com/yuhuishi-convect/jhub-addon

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Enable jupyterhub&lt;/span&gt;
microk8s &lt;span class="pl-c1"&gt;enable&lt;/span&gt; jupyterhub

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Alternatively you can provide a values.yaml to configure jupyterhub&lt;/span&gt;
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; See https://zero-to-jupyterhub.readthedocs.io/en/latest/resources/reference.html for a comprehensive reference&lt;/span&gt;
microk8s &lt;span class="pl-c1"&gt;enable&lt;/span&gt; jupyterhub -- -f /path/to/values.yaml

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; to view the ip address to for the jupyterhub service&lt;/span&gt;
microk8s kubectl get svc -n jhub -l app=jupyterhub,component=proxy-public

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; disable the addon&lt;/span&gt;
microk8s disable jupyterhub&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
Configuration&lt;/h2&gt;
&lt;p&gt;You can supply a &lt;code&gt;values.yaml&lt;/code&gt; to use with the addon by&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;microk8s &lt;span class="pl-c1"&gt;enable&lt;/span&gt; jupyterhub -- -f /path/to/values.yaml&lt;/pre&gt;

&lt;/div&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/yuhuishi-convect/jhub-addon"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


</description>
      <category>kubernetes</category>
      <category>datascience</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Jaf - Deploy Jupyter Notebook as an Azure Function</title>
      <dc:creator>I love integers</dc:creator>
      <pubDate>Mon, 07 Mar 2022 22:50:56 +0000</pubDate>
      <link>https://dev.to/yuhuishishishi/jaf-deploy-jupyter-notebook-as-an-azure-function-4ipm</link>
      <guid>https://dev.to/yuhuishishishi/jaf-deploy-jupyter-notebook-as-an-azure-function-4ipm</guid>
      <description>&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;p&gt;jaf stands for Jupyter notebook as functions.&lt;br&gt;
jaf scaffolds an Azure Durable Functions project that enables a Jupyter notebook to be deployed as a web endpoint.&lt;/p&gt;

&lt;p&gt;Jupyter notebooks are widely used as interactive programming environments for experimenting with data science and machine learning ideas. However, going from development to deployment once a notebook is drafted is extremely hard. The process usually requires an engineer to rewrite a significant portion of the notebook as a web/RPC service so that the model or procedure written in the notebook is callable by other services.&lt;/p&gt;

&lt;p&gt;With jaf, you can quickly spin up a web service that exposes the Jupyter notebook as a callable function, greatly speeding up the path from development to deployment for data science solutions.&lt;/p&gt;
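The core idea can be sketched as a request handler that executes a parameterized notebook per call. The names and return shape below are hypothetical; the real project wires this through Azure Durable Functions and papermill:

```python
def run_notebook(notebook_path, params):
    # Stand-in for papermill-style execution, which injects `params`
    # into the notebook's parameters cell and runs it end to end.
    return {"notebook": notebook_path, "parameters": params, "status": "completed"}

def handle_request(body):
    # The HTTP trigger passes request parameters through to the notebook.
    result = run_notebook("simple_output.ipynb", body.get("params", {}))
    return {"statusCode": 200, "result": result}
```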
&lt;h3&gt;
  
  
  Submission Category:
&lt;/h3&gt;

&lt;p&gt;Wacky Wildcards&lt;/p&gt;
&lt;h3&gt;
  
  
  Link to Code on GitHub
&lt;/h3&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/yuhuishi-convect" rel="noopener noreferrer"&gt;
        yuhuishi-convect
      &lt;/a&gt; / &lt;a href="https://github.com/yuhuishi-convect/jaf" rel="noopener noreferrer"&gt;
        jaf
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Jupyter notebook as functions. Project for Azure trial hackathon
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;jaf&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;jaf stands for Jupyter notebook as functions. This is a project for &lt;a href="https://dev.to/devteam/hack-the-microsoft-azure-trial-on-dev-2ne5" rel="nofollow"&gt;Azure trial hackathon&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;jaf scaffolds an Azure Durable Functions project that enables a Jupyter notebook to be deployed as a web endpoint.&lt;/p&gt;
&lt;a href="https://www.loom.com/share/e26b21e1b5b94e589a274faf77863d4a" rel="nofollow noopener noreferrer"&gt;
    &lt;img src="https://camo.githubusercontent.com/ef98b72612e1bebad46556505077d0e8056f8cac3dda8c00cb2674cc17bfd144/68747470733a2f2f63646e2e6c6f6f6d2e636f6d2f73657373696f6e732f7468756d626e61696c732f65323662323165316235623934653538396132373466616637373836336434612d776974682d706c61792e676966"&gt;
  &lt;/a&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Usage&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Prerequisites&lt;/h3&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Azure cli&lt;/li&gt;
&lt;li&gt;Azure functions core tools&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Skaffold a project&lt;/h3&gt;

&lt;/div&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;pip install cookiecutter

cookiecutter https://github.com/yuhuishi-convect/jaf

project_name [JupyterAsFunction]: HelloWorld
notebook_url [https://raw.githubusercontent.com/nteract/papermill/main/binder/cli-simple/simple_output.ipynb]: 
&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Test locally&lt;/h3&gt;

&lt;/div&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; start the local runtime&lt;/span&gt;

$ func start 
Found Python version 3.8.5 (python3).

Azure Functions Core Tools
Core Tools Version:       4.0.3971 Commit hash: d0775d487c93ebd49e9c1166d5c3c01f3c76eaaf  (64-bit)
Function Runtime Version: 4.0.1.16815


Functions:

        PapermillTrigger: [POST] http://localhost:7071/api/HelloWorld

        PapermillDurableOrc: orchestrationTrigger

        PapermillRunnotebook: activityTrigger

For detailed output, run func with --verbose flag.
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
      Request starting HTTP/2 POST http://127.0.0.1:46327/AzureFunctionsRpcMessages.FunctionRpc/EventStream application/grpc -
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
      Executing endpoint &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;gRPC - /AzureFunctionsRpcMessages.FunctionRpc/EventStream&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;
[2022-03-07T20:00:52.941Z] Worker process started and initialized.
[2022-03-07T20:00:58.149Z] Host lock lease acquired by instance ID &lt;span class="pl-s"&gt;&lt;span class="pl-pds"&gt;'&lt;/span&gt;000000000000000000000000EBCD312A&lt;span class="pl-pds"&gt;'&lt;/span&gt;&lt;/span&gt;.&lt;/pre&gt;

&lt;/div&gt;

Call the endpoint 
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; we can specify the parameters in the&lt;/span&gt;&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/yuhuishi-convect/jaf" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Additional Resources / Info
&lt;/h3&gt;


&lt;p&gt;&lt;a href="https://www.loom.com/share/e26b21e1b5b94e589a274faf77863d4a" rel="noopener noreferrer"&gt; Demo video:  &lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;img style="max-width:300px;" src="https://cdn.loom.com/sessions/thumbnails/e26b21e1b5b94e589a274faf77863d4a-with-play.gif"&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>azuretrialhack</category>
      <category>jupyter</category>
      <category>serverless</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Search Job Postings on HackerNews</title>
      <dc:creator>I love integers</dc:creator>
      <pubDate>Thu, 13 Jan 2022 22:06:03 +0000</pubDate>
      <link>https://dev.to/yuhuishishishi/search-job-postings-on-hackernews-bmk</link>
      <guid>https://dev.to/yuhuishishishi/search-job-postings-on-hackernews-bmk</guid>
      <description>&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;p&gt;Hacker Job Search lets you search job postings on HackerNews.&lt;br&gt;
It uses Realm Scheduled Triggers and Functions to scrape job postings from HackerNews at hourly intervals.&lt;/p&gt;

&lt;p&gt;An Atlas Search index is built on top to enable keyword search. &lt;/p&gt;

&lt;p&gt;The search endpoint is exposed as an HTTPS endpoint with Realm Functions to serve incoming requests from the frontend.&lt;/p&gt;

&lt;p&gt;The frontend website is hosted by Realm Hosting.&lt;/p&gt;

&lt;p&gt;So basically, the entire stack runs on Atlas and Realm.&lt;/p&gt;
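The keyword matching the Atlas Search index provides can be approximated like this. This is a rough hypothetical sketch with made-up postings; the real matching runs server-side in Atlas Search:

```python
# Hypothetical scraped postings; the real data comes from the hourly
# Realm Scheduled Trigger that scrapes HackerNews.
POSTINGS = [
    {"title": "Acme | Senior Rust Engineer | Remote"},
    {"title": "Globex | Data Scientist | NYC"},
]

def search(postings, keyword):
    # case-insensitive substring match over the posting titles
    kw = keyword.lower()
    return [p for p in postings if kw in p["title"].lower()]
```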
&lt;h3&gt;
  
  
  Submission Category:
&lt;/h3&gt;

&lt;p&gt;Automation Innovation&lt;/p&gt;
&lt;h3&gt;
  
  
  Link to Code
&lt;/h3&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--566lAguM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/yuhuishi-convect"&gt;
        yuhuishi-convect
      &lt;/a&gt; / &lt;a href="https://github.com/yuhuishi-convect/hacker-job-search"&gt;
        hacker-job-search
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Additional Resources / Info
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://hacker-job-geiwg.mongodbstitch.com/"&gt;Demo Site&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.loom.com/share/43df624c66ae46489bf6285fbc027868"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pv6olKPl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn.loom.com/sessions/thumbnails/43df624c66ae46489bf6285fbc027868-with-play.gif" alt="demo video" width="609" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>atlashackathon</category>
      <category>react</category>
      <category>mongodb</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Detexify in your browser</title>
      <dc:creator>I love integers</dc:creator>
      <pubDate>Tue, 05 Jan 2021 06:39:42 +0000</pubDate>
      <link>https://dev.to/yuhuishishishi/detexify-in-your-browser-43k</link>
      <guid>https://dev.to/yuhuishishishi/detexify-in-your-browser-43k</guid>
      <description>&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://detexify.kirelabs.org/classify.html" rel="noopener noreferrer"&gt;Detexify&lt;/a&gt; is a well-known service to recognize hand drawn math symbols and translate them to LaTeX expressions, so one who works with LaTeX not need to dive into &lt;a href="http://www.ctan.org/tex-archive/info/symbols/comprehensive/" rel="noopener noreferrer"&gt;symbols-a4.pdf&lt;/a&gt; that you just can't memorize. &lt;/p&gt;

&lt;p&gt;I have built a replica of this service using ML and deployed it as a browser-side application, i.e., it has no backend. The recognition and search run purely in your browser.&lt;/p&gt;

&lt;h3&gt;
  
  
  Category Submission:
&lt;/h3&gt;

&lt;p&gt;Program for the People&lt;/p&gt;

&lt;h3&gt;
  
  
  App Link
&lt;/h3&gt;

&lt;p&gt;&lt;a href="//detexify.yuhuishi.me"&gt;detexify.yuhuishi.me&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Screenshots
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffjph16hpfp7hq3089lly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffjph16hpfp7hq3089lly.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxl5iymb7geop700iarn3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxl5iymb7geop700iarn3.png" alt="screen2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Description
&lt;/h3&gt;

&lt;p&gt;Users can draw a math symbol whose LaTeX expression they cannot recall. The model tries to recognize the most likely symbols from the pixels and shows the LaTeX expressions together with their probabilities.&lt;/p&gt;
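The "most likely symbols with probabilities" step can be sketched as a softmax plus top-k over the CNN's raw scores. The label list below is a hypothetical subset of the real HASYv2 classes:

```python
import numpy as np

# Hypothetical subset of the symbol classes the real model covers.
LABELS = ["\\alpha", "\\beta", "\\sum", "\\int"]

def top_k_symbols(logits, k=3):
    # softmax turns the CNN's raw scores into probabilities
    z = np.exp(logits - np.max(logits))
    probs = z / z.sum()
    order = np.argsort(probs)[::-1][:k]  # indices of the k largest probabilities
    return [(LABELS[i], float(probs[i])) for i in order]
```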

&lt;h3&gt;
  
  
  Link to Source Code
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/Yuhuishishishi/detexify-replicate" rel="noopener noreferrer"&gt;https://github.com/Yuhuishishishi/detexify-replicate&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Permissive License
&lt;/h3&gt;

&lt;p&gt;MIT License&lt;/p&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;When I was in graduate school writing in LaTeX, finding the LaTeX expression for a math symbol was the most time-consuming and unpleasant part of the work. &lt;br&gt;
One can dive into the doc &lt;a href="http://www.ctan.org/tex-archive/info/symbols/comprehensive/" rel="noopener noreferrer"&gt;symbols-a4.pdf&lt;/a&gt; and search among hundreds of math symbols to find the correct expression for the one in mind. &lt;/p&gt;

&lt;p&gt;Or one can go directly to &lt;a href="//detexify.yuhuishi.me"&gt;detexify.yuhuishi.me&lt;/a&gt;, draw the symbol, and let the ML model find the expression.&lt;/p&gt;

&lt;p&gt;I hope this application makes life a bit easier for people who work with LaTeX.&lt;/p&gt;

&lt;h3&gt;
  
  
  How I built it
&lt;/h3&gt;

&lt;p&gt;The application is a pure front-end app (there is no backend). &lt;/p&gt;

&lt;h4&gt;
  
  
  Model training
&lt;/h4&gt;

&lt;p&gt;For the ML part, I trained a recognition model that takes in a user-drawn symbol and outputs the LaTeX expression. &lt;/p&gt;

&lt;p&gt;The model is trained in this &lt;a href="https://www.kaggle.com/dayeye2006/cnn-image-classification-hasyv2" rel="noopener noreferrer"&gt;Kaggle notebook&lt;/a&gt;.&lt;br&gt;
Under the hood, it is a simple CNN. &lt;/p&gt;

&lt;p&gt;The data the model is trained on consists of labeled math symbol images; see the &lt;a href="https://www.kaggle.com/martinthoma/hasyv2-dataset-friend-of-mnist" rel="noopener noreferrer"&gt;HASYv2 Dataset&lt;/a&gt; for more details.&lt;/p&gt;

&lt;p&gt;Overall, the model achieves about 60% accuracy. I haven't spent much time on tuning or finding a better architecture.&lt;/p&gt;

&lt;h4&gt;
  
  
  Deployment
&lt;/h4&gt;

&lt;p&gt;I built the front end using React. This is the first React app I have ever built; I had no previous experience with it. &lt;/p&gt;

&lt;p&gt;To serve the model, I used TensorFlow.js to convert it into a JavaScript-friendly format and load it in the app. Inference runs purely in the browser. &lt;/p&gt;

&lt;p&gt;I used the DigitalOcean App Platform to deploy the whole application. Since there is no backend involved, deployment is quite straightforward (it is deployed as a static site). &lt;/p&gt;

</description>
      <category>dohackathon</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
    </item>
  </channel>
</rss>
