<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: marcomaggiotti</title>
    <description>The latest articles on DEV Community by marcomaggiotti (@marcomaggiotti).</description>
    <link>https://dev.to/marcomaggiotti</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F819359%2F9349a479-3ada-44ca-8723-26439cd51794.png</url>
      <title>DEV Community: marcomaggiotti</title>
      <link>https://dev.to/marcomaggiotti</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/marcomaggiotti"/>
    <language>en</language>
    <item>
      <title>Use cases for Langchain in your business</title>
      <dc:creator>marcomaggiotti</dc:creator>
      <pubDate>Sat, 16 Mar 2024 22:04:53 +0000</pubDate>
      <link>https://dev.to/marcomaggiotti/use-cases-for-langchain-in-your-business-2c35</link>
      <guid>https://dev.to/marcomaggiotti/use-cases-for-langchain-in-your-business-2c35</guid>
      <description>&lt;p&gt;&lt;strong&gt;Unlocking LangChain's Power for Financial Institutions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's explore the business opportunities of GenAI, LLMs, and a framework like LangChain.&lt;br&gt;
Let's look at some categories of use cases for Large Language Models linked to real-world scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  The four pillars of AI toolkits
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fea6qrl747z7cwdgw940m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fea6qrl747z7cwdgw940m.jpg" alt="GenAI toolkit principles, Langchain, python programming, summarizing, inferring, transforming, expanding" width="800" height="800"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Summarizing&lt;/strong&gt;&lt;br&gt;
Think about the huge amount of documentation collected in every company: not only simple Word files but also Excel sheets, PDFs, videos, images, and Confluence pages. Large Language Models can be used to summarize text, images, or videos. This can save time and help extract insights from a large volume of content by centralizing search in a single endpoint. These models can summarize project documentation, meeting reviews, or customer feedback, and actions can be automated based on the responses to produce the desired result.&lt;br&gt;
We can create prompts to analyze multiple reviews of the same product, or multiple reviews of multiple products. We can also focus the summary on specific aspects of the input data. For example, through prompt engineering, we can summarize customer feedback to learn about product quality, shipping and handling, or operational issues, and automate the processing of that feedback.&lt;/p&gt;
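&lt;p&gt;As a small illustration of this idea, here is a minimal, hypothetical prompt builder (the function name and wording are my own, not part of any library) that focuses a review summary on one aspect:&lt;/p&gt;

```python
# Illustrative prompt builder: focus a summary on one aspect
# (e.g. "shipping and handling") of a batch of customer reviews.
def build_summary_prompt(reviews, aspect, max_words=50):
    joined = "\n---\n".join(reviews)
    return (
        f"Summarize the following customer reviews in at most {max_words} words, "
        f"focusing only on {aspect}:\n{joined}"
    )

reviews = [
    "Great blender, but the box arrived dented.",
    "Shipping took three weeks, far too slow.",
]
print(build_summary_prompt(reviews, "shipping and handling"))
```

The resulting string would then be sent to the model; the aspect parameter is what steers the summary toward quality, shipping, or operational issues.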

&lt;p&gt;&lt;strong&gt;Inferring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Inference involves analyzing input data to extract labels, names, or sentiment. Large Language Models can process data to identify specific entities: customer sentiment in reviews, topics in articles, and so on.&lt;br&gt;
In the customer-feedback example, we can create prompts that identify whether a customer has posted feedback about a particular product, and build automation around it to route the feedback to the right department.&lt;br&gt;
The benefit of Large Language Models is that we don't need to collect training data, train a model, or maintain it. We can perform the inference task with just some prompt engineering.&lt;/p&gt;
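&lt;p&gt;A minimal sketch of that routing idea, with illustrative names: ask the model for a structured JSON label instead of free text, then route on the parsed result (the model reply shown is a plausible example, not a real API response):&lt;/p&gt;

```python
import json

# Illustrative inference prompt: ask the model for a structured label
# so the reply can drive routing automation.
def build_inference_prompt(feedback):
    return (
        'Classify the sentiment of this customer feedback as "positive", '
        '"negative" or "neutral", and name the product mentioned. '
        'Reply as JSON with keys "sentiment" and "product".\n'
        f"Feedback: {feedback}"
    )

def route_feedback(model_reply):
    # Route negative feedback to support, everything else to marketing.
    parsed = json.loads(model_reply)
    if parsed["sentiment"] == "negative":
        return ("support", parsed["product"])
    return ("marketing", parsed["product"])

# A plausible model reply for: "My new X200 vacuum broke after two days."
print(route_feedback('{"sentiment": "negative", "product": "X200 vacuum"}'))
# prints ('support', 'X200 vacuum')
```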

&lt;p&gt;&lt;strong&gt;Transforming&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Large Language Models can be used for transforming text: they can perform language translation in real time on request, spelling and grammar correction, format transformation, and tone transformation, all with just the help of prompts. This capability can be used for the internationalization of a website or to recognize offensive words, for example. Large Language Models can also be used in communications to respond to customers more effectively.&lt;br&gt;
Transformation is a very versatile capability of Large Language Models with many use cases.&lt;br&gt;
From a technical perspective, it could also be used to automatically translate encoded text into human-readable text.&lt;/p&gt;
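&lt;p&gt;A hypothetical sketch of how one template can cover several transformation tasks, with the task selected by an argument (the dictionary keys and wording are illustrative only):&lt;/p&gt;

```python
# Illustrative transformation prompts: one template per task
# (translation, tone change, proofreading), selected by name.
TASKS = {
    "translate": "Translate the text to {target}.",
    "tone": "Rewrite the text in a {target} tone.",
    "proofread": "Fix spelling and grammar in the text.",
}

def build_transform_prompt(task, text, target=""):
    instruction = TASKS[task].format(target=target)
    return f"{instruction}\nText: {text}"

print(build_transform_prompt("translate", "Hello, world", target="Italian"))
print(build_transform_prompt("tone", "pay now", target="formal"))
```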

&lt;p&gt;&lt;strong&gt;Expanding (Augmentation)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Expanding is the task of taking a shorter piece of data and having the model generate a longer piece of data from it. It is also called "augmentation", a technique used to increase the amount of training or testing material that the model has available.&lt;br&gt;
It takes the tokens from the input and generates the tokens that would follow them. This is a very powerful category of use cases for these models.&lt;br&gt;
We can create prompts to generate content using Large Language Models: articles, blogs, stories, poems, emails, images, videos, and so on. This has a wide variety of applications, like analyzing healthcare literature and helping with diagnosis, providing students with detailed explanations and answering questions in real time, helping customers resolve queries through chatbots, and supporting role-playing, brainstorming, or interview preparation.&lt;/p&gt;
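&lt;p&gt;As an illustrative sketch, an expansion prompt can be as simple as turning a few bullet points into a request for a full email (the function and audience names are hypothetical):&lt;/p&gt;

```python
# Illustrative expansion prompt: turn short bullet points into a
# request for a full, polite customer-service email.
def build_expansion_prompt(points, audience="a customer"):
    bullets = "\n".join("- " + p for p in points)
    return (
        f"Write a polite, detailed email to {audience} "
        f"expanding on these points:\n{bullets}"
    )

prompt = build_expansion_prompt(
    ["refund approved", "ships in 5 days"], audience="a premium customer"
)
print(prompt)
```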

&lt;h2&gt;
  
  
  Specific Real Use-Cases in everyday business
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftze3t4r4ainzn8qhab8n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftze3t4r4ainzn8qhab8n.png" alt="Usecases for business regarding large language models in finance and banking" width="800" height="649"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having a framework that can seamlessly connect language models with diverse data sources is obviously powerful.&lt;/p&gt;

&lt;p&gt;Its intuitive features allow financial managers to compose customized tools that leverage its capabilities without the need for intricate engineering expertise. Just think how valuable it would be to have the latest stock data organized into an auto-generated, customized report, refreshed every day.&lt;/p&gt;

&lt;p&gt;Let's delve deeper into some practical use cases, with vivid examples and extensive benefits, specifically crafted for the dynamic field of finance:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In-depth Document Analysis&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example&lt;/em&gt;: &lt;br&gt;
Imagine a mortgage department in a bank seeking insights from a vast array of loan documents. A GenAI tool facilitates the development of language-powered applications that can answer specific questions about these documents.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Benefits&lt;/em&gt;: &lt;br&gt;
This empowers financial institutions to efficiently retrieve critical information, cite sources accurately for compliance purposes, and semantically search through intricate financial reports for nuanced details.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chatbots for Enhanced Customer Interaction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example&lt;/em&gt;:&lt;br&gt;
Consider a scenario where a bank implements a chatbot to handle routine customer inquiries. &lt;/p&gt;

&lt;p&gt;LangChain simplifies the process by seamlessly integrating:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;language models; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;conversation templates;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;memory components. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
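&lt;p&gt;A toy, framework-free sketch of how those three pieces fit together — a template, a (canned) language model, and a memory buffer that carries context forward. The class and function names are mine; a real build would swap the canned model for an actual LLM call:&lt;/p&gt;

```python
# Toy chatbot: template + canned model + memory, mirroring the
# three LangChain components listed above.
class BufferMemory:
    def __init__(self):
        self.turns = []
    def add(self, user, bot):
        self.turns.append((user, bot))
    def as_context(self):
        return "\n".join(f"User: {u}\nBot: {b}" for u, b in self.turns)

TEMPLATE = "You are a helpful banking assistant.\n{history}\nUser: {question}\nBot:"

def canned_model(prompt):
    # Stand-in for a real LLM call; answers one known routine question.
    if "IBAN" in prompt:
        return "A Swiss IBAN has 21 characters and starts with CH."
    return "Let me connect you with an agent."

def chat(memory, question):
    prompt = TEMPLATE.format(history=memory.as_context(), question=question)
    answer = canned_model(prompt)
    memory.add(question, answer)   # the next turn sees this exchange
    return answer

memory = BufferMemory()
print(chat(memory, "What is the format of a Swiss IBAN?"))
```

Because every turn is appended to the memory, follow-up questions are answered with the full conversation as context, which is exactly what a LangChain memory component provides.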

&lt;p&gt;What does this mean?&lt;br&gt;
The most repetitive actions could be collected and performed by a chatbot, for example answering questions about tax filing or how to open an additional foreign-currency account.&lt;br&gt;
It can also act as an assistant that guides you in defining IBANs and accounts in the correct format for CH, US, or Asian markets, or writes SWIFT messages like MT940/1/2/50 that you can use for testing or to compare results.&lt;/p&gt;

&lt;p&gt;Nowadays all those actions are performed by a human agent, a tester, or a developer after a customer requests a meeting.&lt;br&gt;
A model can also run sentiment analysis to identify your audience, create social media posts, and help with targeting on TikTok, LinkedIn, Facebook, and Instagram, reducing the cost of running those campaigns.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Benefits&lt;/em&gt;:&lt;br&gt;
 Finance professionals can create chatbots with distinct personalities tailored to different scenarios. For instance, a chatbot can serve as a helpful banking assistant, guiding customers through transaction processes or as an informative financial advisor providing insights on investment opportunities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streamlined Text Summarization for Reports&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example&lt;/em&gt;:&lt;br&gt;
Financial analysts dealing with extensive reports, such as quarterly financial summaries, can use LangChain and GenAI tools for efficient summarization.&lt;br&gt;
The framework breaks down complex documents into digestible chunks, can identify concepts inside documents and give you their exact position, or can find similar documents to investigate topics more deeply, significantly aiding decision-making processes.&lt;br&gt;
&lt;em&gt;Benefits&lt;/em&gt;: &lt;br&gt;
This functionality enhances productivity by quickly summarizing lengthy financial reports, newsletters, or customer feedback. It allows financial managers to focus on key insights and trends without getting bogged down by extensive documentation.&lt;/p&gt;
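&lt;p&gt;The chunking step mentioned above can be sketched in a few lines; this is a hypothetical splitter with overlapping windows (sizes and names are illustrative), each chunk of which would then be summarized separately:&lt;/p&gt;

```python
# Illustrative chunking step: split a long report into overlapping
# windows sized for a model's context, before summarizing each one.
def split_into_chunks(text, chunk_size=200, overlap=50):
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

report = "Q3 revenue grew thanks to strong demand. " * 30
chunks = split_into_chunks(report)
print(len(chunks), len(chunks[0]))
```

The overlap means a sentence cut at a chunk boundary still appears whole in the next chunk, which keeps each per-chunk summary coherent.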

&lt;p&gt;&lt;strong&gt;Structured Information Extraction from Unstructured Text&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example&lt;/em&gt;: &lt;br&gt;
Extracting relevant details from unstructured financial documents, like contracts or legal agreements, enables the conversion of sentences into structured rows suitable for databases. A defined cleaning process is mandatory when working with machine learning: wrong data must be refined and corrected before it is inserted into the data store.&lt;br&gt;
&lt;em&gt;Benefits&lt;/em&gt;: &lt;br&gt;
Financial institutions can seamlessly insert extracted data into databases, transform lengthy legal documents into multiple rows for efficient storage, and accurately identify API parameters from user queries, ensuring precision and compliance.&lt;br&gt;
A tidied-up database will improve the correctness of the results and make it easier to eliminate hallucinations in future uses.&lt;/p&gt;
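&lt;p&gt;A minimal sketch of that cleaning step, with hypothetical field names: validate the model's extraction output and normalize it into a row before it ever reaches the database:&lt;/p&gt;

```python
import json

# Illustrative post-processing: validate a model's extraction output
# and normalize it into a row ready for database insertion.
REQUIRED = ("party", "amount", "currency", "date")

def to_row(model_reply):
    data = json.loads(model_reply)
    missing = [k for k in REQUIRED if k not in data]
    if missing:
        raise ValueError(f"incomplete extraction: {missing}")
    data["amount"] = float(data["amount"])  # basic cleaning step
    return data

reply = '{"party": "Acme SA", "amount": "1250.00", "currency": "CHF", "date": "2024-01-15"}'
print(to_row(reply))
```

Rejecting incomplete extractions here is what keeps bad rows out of storage, which is the point made above about hallucinations.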

&lt;p&gt;&lt;strong&gt;Autonomous Agents for Operational Efficiency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example&lt;/em&gt;: &lt;br&gt;
Picture a scenario where a financial institution deploys an app to automate routine tasks. This could be an AI plugin retrieving real-time financial data or a context-aware agent assisting in sales interactions.&lt;br&gt;
&lt;em&gt;Benefits&lt;/em&gt;: &lt;br&gt;
Financial professionals can build custom AI plugins to retrieve information from various tools, integrate existing modules for diverse applications like shopping, travel, or marketing, and create agents that understand and respond to customer inquiries in a personalized and efficient manner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Effective Evaluation of Language Model Output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example&lt;/em&gt;: &lt;br&gt;
Assessing the quality of outputs generated by language models can be a challenge. LangChain addresses this with tools like Tracing and community datasets for robust evaluation.&lt;br&gt;
The flexibility of a framework bound to a popular programming language like Python is powerful and solution-driven. A pre- or post-processing phase can always be added at any moment if you want more accuracy and traceability of the data ingested and produced.&lt;br&gt;
&lt;em&gt;Benefits&lt;/em&gt;: &lt;br&gt;
Financial institutions can thoroughly evaluate generative models, assess the performance of API chains such as OpenAPI, and ensure accurate results in question-answering tasks related to financial documents, vector databases, and SQL databases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Efficient Querying of Tabular Financial Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example&lt;/em&gt;: &lt;br&gt;
Consider a financial analyst needing to query extensive tabular data for market trends. LangChain offers solutions like document loaders and predefined chains for efficient querying of structured data.&lt;br&gt;
&lt;em&gt;Benefits&lt;/em&gt;: &lt;br&gt;
Financial professionals can effortlessly load and index data using tools like CSVLoader, start with simple queries using predefined chains, and scale up to handle complex databases with powerful agents. This ensures timely access to critical information for informed decision-making.&lt;br&gt;
LangChain's user-friendly approach empowers finance professionals to unlock the full potential of language models, significantly enhancing operational efficiency, compliance, and decision-making processes within the dynamic financial landscape.&lt;/p&gt;
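&lt;p&gt;A stdlib-only sketch of the loading-then-querying idea (LangChain's CSVLoader wraps a similar step and feeds the rows to a chain; the tickers and the &lt;code&gt;top_movers&lt;/code&gt; helper here are invented for illustration):&lt;/p&gt;

```python
import csv, io

# Load tabular market data, then run a simple query over it:
# which tickers moved the most today, by absolute daily change?
RAW = """ticker,price,change
AAPL,190.5,1.2
MSFT,410.0,-0.8
NVDA,880.3,3.1
"""

rows = list(csv.DictReader(io.StringIO(RAW)))

def top_movers(rows, n=2):
    ordered = sorted(rows, key=lambda r: abs(float(r["change"])), reverse=True)
    return [r["ticker"] for r in ordered[:n]]

print(top_movers(rows))  # prints ['NVDA', 'AAPL']
```

In a LangChain setup, the same rows would be indexed and the query would be posed in natural language; this sketch just shows the structured step underneath.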

</description>
      <category>ai</category>
      <category>promptengineering</category>
      <category>automation</category>
      <category>llm</category>
    </item>
    <item>
      <title>Enhancing Culinary Experiences with Firebase and LangChain</title>
      <dc:creator>marcomaggiotti</dc:creator>
      <pubDate>Tue, 21 Nov 2023 12:31:45 +0000</pubDate>
      <link>https://dev.to/marcomaggiotti/enhancing-culinary-experiences-with-firebase-and-langchain-a-practical-guide-40b7</link>
      <guid>https://dev.to/marcomaggiotti/enhancing-culinary-experiences-with-firebase-and-langchain-a-practical-guide-40b7</guid>
      <description>&lt;p&gt;&lt;strong&gt;What we are going to do here :&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Integrate Firebase into our Python app.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Demonstrate how to anonymize confidential data and leverage its value with Langchain.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Store and post-process the OpenAI results.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the goal of unlocking the value hidden within the company's data, our aim is to repurpose it and integrate it with OpenAI. The intention is to connect this data to OpenAI, allowing for the extraction of insights and storing the results in a real-time database. This process enables the generation of new data based on the existing information we possess.&lt;/p&gt;

&lt;p&gt;In the constantly evolving realm of technology, companies are increasingly using data to create personalized and innovative solutions. This article demonstrates a practical example of combining Firebase, a real-time NoSQL database, with LangChain, a conversational AI language model, to enhance culinary experiences. We will explore the provided code, highlighting the practicality of Firebase and LangChain for managing data, generating insights, and making customized requests to OpenAI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69cfzif43ivmcd3nsdx6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69cfzif43ivmcd3nsdx6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As explained by my amazing drawing talent, we have 3 different phases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pre, where the data is queried from the DB, anonymized, and prepared to be processed.&lt;/li&gt;
&lt;li&gt;LLM, the request to the OpenAI API.&lt;/li&gt;
&lt;li&gt;Post, the refinement of the result and storing it in the DB for future uses.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Set on "Fire" the base: Firebase Initialization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our journey commences with the setup of Firebase. The code utilizes the Firebase Admin SDK to initialize the application with credentials and a database URL. This process is analogous to providing our system with a home and ensuring it knows where to store and retrieve data. In this scenario, the database serves as a centralized hub for culinary preferences and customer information such as allergies and intolerances at the restaurant.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import firebase_admin
from firebase_admin import credentials
from firebase_admin import db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How to Write to Firebase Realtime Database Using Python&lt;/strong&gt;&lt;br&gt;
If we want a place to fetch data from and to store the results of the OpenAI requests, we need a database; for ease of use we choose Firebase Realtime Database.&lt;/p&gt;

&lt;p&gt;The immediate next step is to find out how we can connect to our database using Python. We are going to use the Admin Database API. You'll need to install the required library.&lt;/p&gt;

&lt;p&gt;For more information on using firebase_admin for Python, check out the official docs linked here.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip install firebase_admin&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.freecodecamp.org/news/how-to-get-started-with-firebase-using-python/" rel="noopener noreferrer"&gt;https://www.freecodecamp.org/news/how-to-get-started-with-firebase-using-python/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50isynd7mhv1yrcn5hmi.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50isynd7mhv1yrcn5hmi.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will generate and download a file that will be used in your Python app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#prepare the firebase env
cred = credentials.Certificate("./bookin.json")
default_app = firebase_admin.initialize_app(cred, {
    'databaseURL':'[HERE YOU NEED TO PUT YOUR DATABASEURL]'
    })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Adaptive AI with LangChain&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;consultant_chef&lt;/code&gt; function acts as a culinary advisor, powered by LangChain as a prompt-engineering tool for natural language conversations. It takes inputs such as food intolerances, preferred style, and mandatory ingredients. LangChain then crafts a conversation between the system and a hypothetical customer, generating a personalized recipe suggestion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Firebase: A Data Repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Firebase isn't just an initialization step; it's the backbone of data management. The code demonstrates how to fill the Firebase database with customer information, including names, intolerances, preferred styles, and desired ingredients. This structured data allows for easy retrieval and manipulation, setting the stage for personalized interactions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bringing Data to Life&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The code interacts with the data store and brings data to life by generating personalized recipes based on the customer's stored preferences. It retrieves customer information and uses LangChain to suggest a recipe for a specific customer, such as "Massimo Bottura." This dynamic pairing of data and AI creates a personalized response for each customer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27rwk45kkdm4h26r8dqd.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27rwk45kkdm4h26r8dqd.PNG" alt="Customers in Realtime DB Firebase"&gt;&lt;/a&gt;&lt;br&gt;
In this example we try to create a bot that suggests Menu based on the preferences of the existing customers.&lt;/p&gt;

&lt;p&gt;We have our customer Massimo Bottura who is intolerant to tomatoes, and his favorite dish is &lt;em&gt;Tortellini&lt;/em&gt;, so we are going to propose a Menu tailored around him.&lt;/p&gt;

&lt;p&gt;It is fundamental to pay attention to the fact that the name is completely anonymized in the OpenAI request.&lt;/p&gt;

&lt;p&gt;As we can see, the signature of the method&lt;br&gt;&lt;br&gt;
&lt;code&gt;consultant_chef(intolerances, style, ingredients):&lt;/code&gt;&lt;br&gt;
doesn't include the name of the customer or other personal information that could be used to identify him, such as age, address, or sex.&lt;/p&gt;

&lt;p&gt;We are going to take our customer's information from the DB, but we are not transferring the personal data anywhere.&lt;/p&gt;
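&lt;p&gt;The anonymization idea can be sketched in plain Python: from the full customer record, build the PII-free payload that is actually sent to OpenAI (the &lt;code&gt;SAFE_FIELDS&lt;/code&gt; whitelist and helper name are illustrative):&lt;/p&gt;

```python
# Sketch of the anonymization step: keep only non-identifying fields
# before anything leaves the company.
customer = {
    "Name": "Massimo Bottura",          # personal data, stays local
    "intolerances": "pomodoro",
    "style": "traditional emiliana cuisine",
    "ingredients": "tortellini",
}

SAFE_FIELDS = ("intolerances", "style", "ingredients")

def anonymized_payload(record):
    return {k: record[k] for k in SAFE_FIELDS}

payload = anonymized_payload(customer)
print("Name" in payload, payload["ingredients"])  # prints False tortellini
```

A whitelist (rather than a blacklist) means any new personal field added to the record later is excluded by default.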


&lt;p&gt;&lt;strong&gt;Final step: Updating the Recipe entries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The final piece of the puzzle involves storing the generated recipes for future uses. The code pushes the resulting recipe to a separate "/Recipes" database within Firebase. This step completes the process, allowing the system to not only provide recommendations but also archive them for future reference or analysis, saving costs in case a similar request were already done in the past.&lt;/p&gt;

&lt;p&gt;Here is the rest of the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def consultant_chef(intolerances, style, ingredients):
    '''
    INPUTS:
        intolerances: The food that the client can't eat
        style: The target style that the client prefer, included if vegetarian
        ingredients: The mandatory ingredients that the client want
    '''

    system_template = "You are an experienced cooking michelin chef"
    system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)

    human_template = "For a customer, I would to prepare a meal in a {style} style meal. " \
                     "The recipe should consider that the customer have {intolerances} intolerances and want {ingredients} as ingredients."
    human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

    chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

    request = chat_prompt.format_prompt(intolerances=intolerances, style=style, ingredients=ingredients).to_messages()

    chat = ChatOpenAI()
    result = chat(request)
    return result.content


def fillDB():
    ref = db.reference("/Customers")

    ref.push().set({
        "Name": "Massimo Bottura",
        "intolerances": "pomodoro",
        "style": "traditional emiliana cuisine",
        "ingredients": "tortellini"

    })
    return ref

def getDB():
    ref = db.reference("/Customers")
    return ref

# here the result as a Recipe is stored to
def pushFinalRecipe(result_recipe, ingredients):
    ref = db.reference("/Recipes")

    ref.push().set({
            "Title": "Menu per " + customerName,
            "Ingredients": ingredients,
            "Instructions" : result_recipe
    })

# get the customers that need to come for dinner
customer_for_dinner = getDB().get()

ingredients = ""
def createRecipe(name):
    for key, value in customer_for_dinner.items():
        if (value["Name"] == name):
            ingredients = value["ingredients"]
            result_recipe = consultant_chef(value["intolerances"], value["style"], value["ingredients"])
result_recipe = createRecipe(customerName)

pushFinalRecipe(result_recipe, ingredients)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Unlocking Insights for Businesses&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This code serves as a practical demonstration of how businesses can utilize Firebase and LangChain to manage customer data efficiently, generate personalized insights, and enhance user experiences. By leveraging the power of AI, companies can turn raw data into actionable recommendations, creating a seamless connection between technology and the culinary world.&lt;/p&gt;

&lt;p&gt;Breaking through the privacy barrier in this way, we can use the full potential of the ChatGPT model without needing to share the company's confidential data: no matter how important the data is, nothing is shared at all.&lt;/p&gt;

&lt;p&gt;In conclusion, this article highlights the synergy between Firebase and LangChain in the realm of culinary consultations. The provided code offers a practical guide for tech-savvy individuals, showcasing the potential of data-driven AI applications.&lt;/p&gt;

</description>
      <category>openai</category>
      <category>python</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
    <item>
      <title>LangChain: Not only ChatGPT, but how to use OpenAI to leverage your business</title>
      <dc:creator>marcomaggiotti</dc:creator>
      <pubDate>Mon, 06 Nov 2023 14:48:35 +0000</pubDate>
      <link>https://dev.to/marcomaggiotti/langchain-not-only-chatgpt-but-how-to-use-openai-to-leverage-your-business-5h36</link>
      <guid>https://dev.to/marcomaggiotti/langchain-not-only-chatgpt-but-how-to-use-openai-to-leverage-your-business-5h36</guid>
      <description>&lt;p&gt;&lt;strong&gt;What you will find here :&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An introduction to Langchain with examples.&lt;/li&gt;
&lt;li&gt;The concept of how to add value to your hidden data and documentation.&lt;/li&gt;
&lt;li&gt;An overview of what GenAI can do nowadays.&lt;/li&gt;
&lt;li&gt;Only prompt engineering will be used in this article with Langchain; the Agents, Chains, and Memory will be covered in the next ones.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As we can easily see, open-source &lt;a href="https://www.investopedia.com/large-language-model-7563532"&gt;LLMs&lt;/a&gt; (Large Language Models) are invading the market and the tech world. The question is:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IS YOUR BUSINESS READY TO ADOPT AND USE OPEN LLM MODELS?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When your competitors leverage the potential of the data they own, you really don't want to fall behind – believe me. So, I've written this for you.&lt;br&gt;
I've written this article to highlight the potential of LangChain and provide an overview of what is available in the market. Additionally, I aimed to democratize the concept and educate people about AI tools.&lt;/p&gt;

&lt;p&gt;As I've mentioned in my previous articles, I prefer a practical approach over excessive blablabla. Here, you'll find a brief description of what LangChain is, but feel free to jump straight to the code if you prefer :D I don't mind. &lt;/p&gt;

&lt;p&gt;These contents are nothing more than what you can easily find on the net, on the original &lt;a href="https://www.langchain.com/"&gt;website&lt;/a&gt;, or in Andrew Ng's amazing course at &lt;a href="https://learn.deeplearning.ai/"&gt;DeepLearning.ai&lt;/a&gt;, from which I took several ideas and snippets.&lt;br&gt;
There are countless articles repeating the same information, but my goal is to provide knowledge that you won't easily find elsewhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is LangChain?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;LangChain is a framework like a toolbox for computer programmers who work with artificial intelligence and language models. It helps them use big language models in their applications by connecting them to other data sources. This makes it easier to create apps that understand and use human language.&lt;/p&gt;

&lt;p&gt;If you know how to program in Python, JavaScript, or TypeScript, you can use LangChain to leverage the potential of your applications. It has been offered to the open-source community, so it is free to use and modify.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is LangChain important?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;LangChain is like a magic wand for programmers who make apps that can talk and write like humans. It makes it simple to connect big language models to lots of data, making it easy for the apps to learn from the latest information. For example, if you have a talking AI, it can stay up to date with current events because of LangChain.&lt;/p&gt;

&lt;p&gt;First of all you need to get the secret key from openAI :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yUvf9QmQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8n1buk9hlm8c8a9rdg19.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yUvf9QmQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8n1buk9hlm8c8a9rdg19.PNG" alt="OpenAi API KEY" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's put our hands on the code :&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;We need to install openai in the PyCharm terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install openai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tiktoken is used by OpenAI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install tiktoken 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://pypi.org/project/python-dotenv/"&gt;To load environmental variables&lt;/a&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install python-dotenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fun fact: I had issues installing dotenv using PyCharm. So, if you encounter the same problem, you may need to change the interpreter to match the one on your local machine. This issue could be related to file permissions, although I wasn't able to investigate it thoroughly.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;n.b.: &lt;br&gt;
To change the interpreter in PyCharm you need to go&lt;br&gt;
File-Settings-Project-Python Interpreter&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install langchain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import openai
from dotenv import load_dotenv, find_dotenv

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

os.environ['OPENAI_API_KEY'] = 'YOUR-OPENAI-KEY'

# Normally you would keep this in a ".env" file with all the desired
# parameters, but to keep it simple the key is set directly here
_ = load_dotenv(find_dotenv()) # read local .env file

openai.api_key = os.environ['OPENAI_API_KEY']

# account for deprecation of LLM model
import datetime
# Get the current date
current_date = datetime.datetime.now().date()

# Define the date after which the model should be set to "gpt-3.5-turbo"
target_date = datetime.date(2024, 6, 12)

# Set the model variable based on the current date
if current_date &amp;gt; target_date:
    llm_model = "gpt-3.5-turbo"
else:
    llm_model = "gpt-3.5-turbo-0301"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The Prompt Templates&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The fundamental concept of LangChain is the &lt;em&gt;orchestration&lt;/em&gt; of prompt templates, models, agents, and memory. Here we are going to &lt;br&gt;
look at prompt templates, including their management and optimization.&lt;/p&gt;

&lt;p&gt;As a first example, I'd like to introduce you to chat prompt templates, which take a list of chat messages as input. Each chat message is associated with a role (e.g., AI, Human, or System).&lt;/p&gt;

&lt;p&gt;A ChatPromptTemplate is a pre-made conversation starter that directs the interaction between the bot and the user; with it you can have a well-organized chat with the AI model. We can choose the topic, area of discussion, environment, and more to increase engagement and ensure that the conversation with the chatbot remains as meaningful, engaging, and helpful as possible.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It's like having a roadmap for your conversation&lt;/em&gt;, making it simple to steer and keep the discussion going in the right direction. In LangChain, we use message prompt templates to create and manage interactions, so we can make the most of what the chat model can do. &lt;/p&gt;

&lt;p&gt;System and human prompts differ in their roles and purposes when interacting with chat models. A SystemMessagePromptTemplate provides initial instructions, context, or data for the AI model, while a HumanMessagePromptTemplate carries the messages from the user that the AI model responds to.&lt;/p&gt;
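
&lt;p&gt;To make these roles concrete without calling any API, here is a plain-Python sketch of what a chat prompt boils down to: a list of role-tagged messages, where the system message sets the context and the human message carries the user's request. The class and helper names here are illustrative stand-ins, not LangChain's actual API.&lt;/p&gt;

```python
from dataclasses import dataclass

# Illustrative stand-ins, not LangChain's real classes: a chat prompt is,
# at its core, a list of role-tagged messages.
@dataclass
class Message:
    role: str      # "system", "human", or "ai"
    content: str

def format_chat_prompt(style, text):
    # The system message frames the task; the human message carries the input.
    system = Message("system", "You are a helpful rewriting assistant.")
    human = Message("human", f"Translate the text into a style that is {style}: {text}")
    return [system, human]

messages = format_chat_prompt("calm and polite", "This hotel is a dump!")
print(messages[0].role)     # system
print(messages[1].content)
```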

&lt;p&gt;Let's look at a use-case app: a peaceful translator for angry TripAdvisor comments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import openai

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file
openai.api_key = os.environ['OPENAI_API_KEY']

from langchain.chat_models import ChatOpenAI

import datetime
# Get the current date
current_date = datetime.datetime.now().date()

# Define the date after which the model should be set to "gpt-3.5-turbo"
target_date = datetime.date(2024, 6, 12)

# Set the model variable based on the current date
if current_date &amp;gt; target_date:
    llm_model = "gpt-3.5-turbo"
else:
    llm_model = "gpt-3.5-turbo-0301"

# You can play with the temperature value later once
# you have a good result; we can talk about how to approach
# improvements in neural networks and AI in the next tutorials
chat = ChatOpenAI(temperature=0.0, model=llm_model)

# ChatPromptTemplate
from langchain.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate

customer_style = """American English \
in a peaceful tone, as a five-star receptionist
"""

template_string = """ 
Translate the text \
into a style that is {style}, replacing the words and sentences that could appear offensive: \
{text}
"""

prompt_template = ChatPromptTemplate.from_template(template_string)
prompt_template.messages[0].prompt
prompt_template.messages[0].prompt.input_variables

customer_email = """
Do not stay in this absolute dump of a hotel, and I use the term hotel very loosely. 
I cannot convey strongly enough how disgusting this place is.blood stained headboards 
that have clearly been up since the world war (the first one), rude staff, windows that 
won't close, no hot water, broken furniture, dirty utensils, broken light fixings and 
actual disgusting garbage in the kettle. Pretty sure I'm going to end up with some sort of rash/ 
disease due to sanitation conditions similar to those of a homeless criminal squat. 
In summary...this place is a complete hole.

"""
customer_messages = prompt_template.format_messages(
                    style=customer_style,
                    text=customer_email)

print(type(customer_messages))
print(type(customer_messages[0]))
print(customer_messages[0])

customer_response = chat(customer_messages)

print(customer_response.content)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I am completely aware that the comment is really rude, but it is a real one taken from the internet. I kept the original wording, except for a few words that were truly offensive, in order to show the potential and value of this solution: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As a five-star receptionist, I would not recommend staying at this hotel. While I understand that everyone has different preferences, I personally found the accommodations to be lacking. The headboards appeared to have some stains, and some of the furniture was broken. Additionally, the windows did not seem to close properly, and there were some issues with the hot water. The staff could have been more polite, and some of the utensils were not as clean as I would have liked. Lastly, I did notice some unsanitary conditions, such as a dirty kettle. Overall, I would suggest looking into other options before booking a stay here.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is just one example, but it shows how the text has been filtered and reshaped into a more tolerant form, avoiding needless conflict and polemics so the focus stays on the customer's real feedback.&lt;/p&gt;

&lt;p&gt;In the second example, I want to show how it is possible to adapt the prompt by replacing values to customize the requests. &lt;/p&gt;

&lt;p&gt;I want a menu created by a Michelin-star chef:&lt;br&gt;
&lt;code&gt;system_template = "You are an experienced cooking michelin chef"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Three parameters as input: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;INPUTS:&lt;br&gt;
        courses: The number of courses that we want in our meal, 5, 7, or 9&lt;br&gt;
        nationality: The target cooking style by nationality&lt;br&gt;
        daysPerWeek: The number of days per week when the restaurant is open&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
from langchain.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate

def consultant_chef(courses, nationality, daysPerWeek):
    '''
    INPUTS:
        courses: The number of courses that we want in our meal, 5, 7, or 9
        nationality: The target cooking style by nationality
        daysPerWeek: The number of days per week when the restaurant is open
    '''

    system_template = "You are an experienced cooking michelin chef"
    system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)

    human_template = "I would like to prepare a menu with {courses} courses. My nationality is {nationality} " \
                     "and I would like a menu for {daysPerWeek} days per week. Each recipe needs to cite the exact" \
                     " source, URL, name of the recipe, and the restaurant or chef that inspired it."
    human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

    chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

    request = chat_prompt.format_prompt(courses=courses, nationality=nationality, daysPerWeek=daysPerWeek).to_messages()

    chat = ChatOpenAI()
    result = chat(request)
    return result.content

print(consultant_chef(courses='5', nationality='italian', daysPerWeek='7'))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can imagine replacing the values in the prompt template with your own company data: an automation that processes your data and extracts insights from it.&lt;/p&gt;
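
&lt;p&gt;For instance, a batch job over your own data could be sketched like this. The feedback records and the template text are hypothetical placeholders; in the real script, each formatted prompt would be sent to the chat model as shown in the earlier snippets.&lt;/p&gt;

```python
# Hypothetical feedback records, standing in for rows from your own database
feedback = [
    {"customer": "age 25, student", "text": "The room was freezing and the staff ignored us."},
    {"customer": "age 60, retiree", "text": "Worst breakfast I have ever had, full stop."},
]

template = (
    "Translate the text into a style that is {style}, "
    "softening anything offensive: {text}"
)

def build_prompts(records, style):
    # Fill the template once per record; each result would go to the chat model.
    return [template.format(style=style, text=r["text"]) for r in records]

prompts = build_prompts(feedback, "American English, in a peaceful five-star tone")
for p in prompts:
    print(p)
```

The same loop works for the menu app: swap the feedback records for cooking styles, allergies, or intolerances and reuse the exact same template machinery.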

&lt;p&gt;&lt;strong&gt;Conclusions :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We can consider this framework as a tool that can assist our apps and business: it enables us to harness the potential of ChatGPT while still allowing us to review the final result and use our business data according to our preferences.&lt;br&gt;
A more complex and exhaustive use case involves connecting our database and utilizing our local data. In the context of our customer-feedback example, we could apply it to different types of customers: by age, by job profile, and so on.&lt;br&gt;
For the menu app, we could apply it by cooking style, allergies, food intolerances, and more. The possibilities are really countless, you name it.&lt;br&gt;
Try to envision changing the values used in the code snippets automatically and iteratively, replacing them with your own data that is just waiting to be utilized.&lt;/p&gt;

&lt;p&gt;Finally, some tips on how to prepare a prompt: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;define who will use the chat;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;be clear on the topic you want, and the result will be more precise;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;isolate the core of the conversation that needs to be engaged; don't be too general and dispersive;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;add customization: variables that can be replaced with everyday values and personal information that you don't want to share with ChatGPT;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;test it and refactor it: check the final results and adapt it until it delivers the most value for your business.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So enjoy the article and please, please, please leave feedback or a remark if you need to, don't like something, or think something is wrong. You can also write rude comments :D I will use the prompts to translate them into a more lovely way &amp;lt;3 &lt;/p&gt;

&lt;p&gt;p.s.: I will add a git repo in the next few days, and Italian and French versions will come; sorry, I am on my way to learning German :D&lt;br&gt;
The next articles are already in draft, covering how to connect a real-time database and how to extract insights from videos and from documents such as PDFs, Word files, audio, and more. So let's keep in contact ;)&lt;/p&gt;

&lt;p&gt;The following sources were used in this article: &lt;br&gt;
&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad2da2e9-47ff-46df-87f4-59385508c935_1164x1316.png"&gt;Cover&lt;/a&gt;&lt;br&gt;
&lt;a href="https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/"&gt;LangChainDocs&lt;/a&gt;&lt;br&gt;
&lt;a href="https://learn.deeplearning.ai/"&gt;Andrew NG Amazing courses&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.tripadvisor.com/ShowUserReviews-g186394-d1066611-r117941639-Bluebird_Hotel-Newcastle_upon_Tyne_Tyne_and_Wear_England.html"&gt;TripAdvisor Comment used for the test&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>chatgpt</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Activation on Neural Network, a simple approach</title>
      <dc:creator>marcomaggiotti</dc:creator>
      <pubDate>Thu, 20 Jul 2023 08:10:23 +0000</pubDate>
      <link>https://dev.to/marcomaggiotti/activation-on-neural-network-a-simple-approach-b9k</link>
      <guid>https://dev.to/marcomaggiotti/activation-on-neural-network-a-simple-approach-b9k</guid>
      <description>&lt;p&gt;&lt;em&gt;The idea behind creating this new series of posts is to make the understanding of neural networks and artificial intelligence as democratic as possible.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So, initially, before discussing the structure of networks and all the "ugly" mathematics behind it, let's start by talking about the activation of a &lt;a href="https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/Neuron/index.html#:~:text=Neural%20Networks%20%2D%20Neuron&amp;amp;text=The%20perceptron%20is%20a%20mathematical,are%20represented%20as%20numerical%20values." rel="noopener noreferrer"&gt;PERCEPTRON&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I shared the link from Stanford University that explains the perceptron, which, by the way, looks like a page from the 90s. When I did my middle school thesis, I made a much nicer hypertext. ;P&lt;/p&gt;

&lt;p&gt;I want to provide a preamble on how we arrived at simulating a neuron at a digital level, which means that most of the time, scientific discoveries and engineering skills are inspired by things that already exist in nature, long before we even thought about them. We can think of the dream of flying by looking at birds, how we copied whales to design submarines, camouflage suits, and chameleons, and so on. We cannot deny that nature has been and will always be a major source of inspiration for the scientific discoveries we have made and those we will make in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, what do we mean by activation?
&lt;/h2&gt;

&lt;p&gt;If we think about activation, we can think of it as a switch, like the one in our living room lamp, a button that turns the light bulb on and off. Going back to the example of nature, just think about how light activates our neurons and wakes us up from sleep. There is no simpler example than that.&lt;/p&gt;

&lt;p&gt;The activation of a neuron, &lt;a href="https://en.wikipedia.org/wiki/Perceptron" rel="noopener noreferrer"&gt;perceptron&lt;/a&gt;, is essentially the mechanism that triggers the output signal. This applies to the perceptron as well as to any layer of a neural network, but this topic will be extensively addressed in future posts.&lt;/p&gt;

&lt;p&gt;An activation, as we mentioned, can be considered a simple switch that influences the output result, in our example, the turning on of the lamp in the lampshade.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F140oiymudtv34ddbg5vz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F140oiymudtv34ddbg5vz.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
source: &lt;a href="https://www.freepik.com/free-photos-vectors/light-switch" rel="noopener noreferrer"&gt;https://www.freepik.com/free-photos-vectors/light-switch&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Naturally, the mechanism of a lamp is simpler than that of a perceptron, though not by much.&lt;/p&gt;

&lt;p&gt;If we think about the lamp's switch, we must imagine many small switches that will affect whether the light bulb will turn on or not. Many switches act as inputs, and the light of the lamp acts as the output.&lt;/p&gt;

&lt;p&gt;Let's say that if enough switches are pushed, the light bulb will turn on, but if not enough switches are pushed, it will remain off.&lt;/p&gt;

&lt;p&gt;There is a threshold that must be reached to trigger the activation. One could say that the outputs of the switches add up, and if this sum is high enough, the light will turn on.&lt;/p&gt;
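
&lt;p&gt;That sum-and-threshold behaviour is exactly what a perceptron computes. Here is a tiny sketch of the idea; the weights and threshold are made-up values chosen so that the "lamp" needs at least two switches pressed:&lt;/p&gt;

```python
def perceptron(inputs, weights, threshold):
    # Weighted sum of the input "switches"; fire (1) only if it reaches the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three switches, each contributing equally; the lamp needs two of them pressed.
weights = [1.0, 1.0, 1.0]
threshold = 2.0

print(perceptron([1, 0, 0], weights, threshold))  # 0 -> not enough switches, light stays off
print(perceptron([1, 1, 0], weights, threshold))  # 1 -> threshold reached, light turns on
```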

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rbbtfqhxtcj99hi1hjn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rbbtfqhxtcj99hi1hjn.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
source: &lt;a href="https://en.wikipedia.org/wiki/File:Components_of_neuron.jpg" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/File:Components_of_neuron.jpg&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Observing the image of the neuron, we can think of each dendrite as one of our buttons, and if activated/clicked, it can trigger a part of the activation.&lt;/p&gt;

&lt;p&gt;Now, I really want to emphasize this concept and try to provide another example, even at the risk of being banal and repetitive, because I want to be absolutely sure that the message has been conveyed.&lt;/p&gt;

&lt;p&gt;Let's now try to imagine creating a system of interconnected vessels using the structure we used for the lamp buttons.&lt;/p&gt;

&lt;p&gt;That is, we have many glasses that can be filled with water, and these glasses have a tube connected to another glass. When a first glass is filled enough to reach the tube, it starts pouring water into the glass below.&lt;/p&gt;

&lt;p&gt;If the final glass is also filled enough, it will start pouring water. This situation can be represented as the activation of a neural network, where we trigger the water output with distinct and combined input actions, in our example, by adding water to the different containers.&lt;/p&gt;

&lt;p&gt;So, I brought out all my best Paint skills to provide a representation of what I just described.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkphmwocn5xc0ibiiawb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkphmwocn5xc0ibiiawb.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Summing up and trying to keep it as simple as possible, this is what happens in the activation of a perceptron and a neural network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foilno5e0kipg2hxh6ob6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foilno5e0kipg2hxh6ob6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking at the official representation, we can use our imagination and associate our example of glasses with various elements like input layers, weights, activation functions, output layers. These are all topics that will be covered in the upcoming posts of the series. However, it is already interesting to see how they can be associated together.&lt;/p&gt;

&lt;p&gt;This represents the initial part of how a neural network works. Of course, it is still difficult to relate it to everyday applications such as computer vision, text processing like ChatGPT, etc. However, it is a tangible first step to understand, without too much confusion, how artificial intelligence really works, without too much speculation.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>automation</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>Attivazione nelle Neural Networks</title>
      <dc:creator>marcomaggiotti</dc:creator>
      <pubDate>Wed, 19 Jul 2023 10:06:45 +0000</pubDate>
      <link>https://dev.to/marcomaggiotti/attivazione-nella-neural-network-562e</link>
      <guid>https://dev.to/marcomaggiotti/attivazione-nella-neural-network-562e</guid>
      <description>&lt;p&gt;&lt;em&gt;L'idea di fare questa nuova serie di post è quella di rendere più democratica possibile la comprensione delle reti neurali e dell'intelligenza artificiale.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Quindi inizialmente, prima di parlare della struttura delle reti e di tutta la parte "brutta" della matematica che ci sta dietro, possiamo iniziare a parlare dell'attivazione di un &lt;a href="https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/Neuron/index.html#:~:text=Neural%20Networks%20%2D%20Neuron&amp;amp;text=The%20perceptron%20is%20a%20mathematical,are%20represented%20as%20numerical%20values."&gt;PERCEPTRON&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Vi ho condiviso il link della università di Stanford che spiega il percettrone, che tra l'altro sembra una pagina degli anni 90 :D quando feci la mia tesi delle medie, feci un ipertesto molto più carino. &lt;/p&gt;

&lt;p&gt;Voglio fare una premessa del come si é arrivato a simulare un neurone a livello &lt;em&gt;digitale&lt;/em&gt; cioé del fatto che il più spesso delle volte scoperte scientifiche e attitudini ingegneristiche sono ispirate da cose che esistono già in natura, già da molto prima di averci pensato. Possiamo pensare al sogno di volare guardando gli uccelli, a come abbiamo copiato le balene per progettare i sommergibili, le tute mimetiche e camaleonti e cosi via. &lt;br&gt;
Non possiamo nascondere che la natura è stata e sempre sarà fonte di maggiori ispirazioni per le scoperte scientifiche fatte e quelle che faremo in futuro.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quindi cosa intendiamo per attivazione ?
&lt;/h2&gt;

&lt;p&gt;Se dobbiamo pensare ad una activation, ad una attivazione possiamo pensare ad un interruttore, come quello della nostra lampada da salotto, un pulsante che accende e spegne la lampadina. Tornando all'esempio della natura, basta solo pensare al fatto che la luce attiva i nostri neuroni e ci sveglia dal sonno, un esempio più semplice di così non esiste :)&lt;/p&gt;

&lt;p&gt;L'attivazione di un neurone, &lt;a href="https://it.wikipedia.org/wiki/Percettrone"&gt;percettrone&lt;/a&gt;, é praticamente il meccanismo che innesca l'uscita del nostro segnale, del nostro output. Questo vale per il percettrone ma anche per qualunque layer/strato di rete neurale, ma questo argomento verrà affrontato ampiamente nei futuri post.&lt;/p&gt;

&lt;p&gt;Un attivazione, come abbiamo detto può essere considerata come un semplice interruttore, che influisce sul risultato dell'output, nel nostro esempio l'accensione della lampadina nella lampada. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s6K2SEN---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/140oiymudtv34ddbg5vz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s6K2SEN---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/140oiymudtv34ddbg5vz.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;br&gt;
source: &lt;a href="https://www.freepik.com/free-photos-vectors/light-switch"&gt;https://www.freepik.com/free-photos-vectors/light-switch&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Naturalmente il meccanismo della lampada é piú semplicistico del perceptrone, anche se non ci manca molto, non é esageratamente piú semplice. &lt;/p&gt;

&lt;p&gt;Se pensiamo all'interruttore della lampada, dobbiamo immaginare a tanti piccoli interruttori che incideranno sul fatto che la lampadina si accenderà oppure no; Tanti interruttori come input e la luce della lampadina come output.&lt;/p&gt;

&lt;p&gt;Diciamo che se si spingono abbastanza interruttori la luce della lampada si accenderà, invece se non se ne spingono abbastanza rimarrà spenta.&lt;/p&gt;

&lt;p&gt;C'é una soglia che deve essere raggiunta per poter scatenare l'attivazione, si potrebbe dire le uscite degli interruttori si sommano tra di loro e se questa somma é abbastanza alta, la luce si accenderà.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BY-Ws7-K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2rbbtfqhxtcj99hi1hjn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BY-Ws7-K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2rbbtfqhxtcj99hi1hjn.png" alt="Image description" width="800" height="552"&gt;&lt;/a&gt;&lt;br&gt;
source: &lt;a href="https://en.wikipedia.org/wiki/File:Components_of_neuron.jpg"&gt;https://en.wikipedia.org/wiki/File:Components_of_neuron.jpg&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Osservando l'immagine del neurone possiamo pensare che ogni Dendrite sia uno dei nostri pulsanti e se attivato/cliccato allora possa innescare una parte dell'attivazione.&lt;/p&gt;

&lt;p&gt;Ora vorrei veramente insistere su questo concetto e provare a fare anche un altro esempio assumendo il rischio di essere banale e pesante, perché voglio essere veramente sicuro che sia passato il messaggio.&lt;/p&gt;

&lt;p&gt;Proviamo ora a pensare di fare un sistema di vasi comunicanti nella struttura che abbiamo usato per i bottoni della lampada.&lt;/p&gt;

&lt;p&gt;Cioé abbiamo tanti bicchieri che possiamo riempire d'acqua, questi bicchieri hanno un tubo che é collegato ad un altro bicchiere. Quando un primo bicchiere é abbastanza pieno fino a raggiungere il tubo allora iniziano a travasare l'acqua nel bicchiere sottostante.&lt;/p&gt;

&lt;p&gt;Se il bicchiere finale a sua volta é abbastanza pieno inizierà a travasare acqua, ecco questa situazione può essere rappresentata come una attivazione di una rete neurale, cioé scateniamo l'uscita dell'acqua con delle distinte e composte azioni in ingresso, nel nostro esempio immettendo acqua nei diversi recipienti.&lt;/p&gt;

&lt;p&gt;Ecco ho tirato fuori tutte le mie migliori skills di Paint :D per proporvi una rappresentazione di quello che ho appena descritto.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RglVU1wC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kkphmwocn5xc0ibiiawb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RglVU1wC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kkphmwocn5xc0ibiiawb.jpg" alt="Image description" width="788" height="681"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Riassumendo e cercando di restare il più semplici possibile questo é quello che avviene in una attivazione di un percettrone e di una rete neurale.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vt6SzBDx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oilno5e0kipg2hxh6ob6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vt6SzBDx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oilno5e0kipg2hxh6ob6.png" alt="Image description" width="667" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Guardando la rappresentazione ufficiale possiamo giocare un poco con l'immaginazione ed associare il nostro esempio dei bicchieri ai vari elementi come input layers, weights, activation functions, output layers, comunque tutti argomenti che verranno affrontati nei prossimi post della serie, però già é interessante vedere come si possono associare insieme.&lt;/p&gt;

&lt;p&gt;Questo rappresenta la parte iniziale di come funziona una rete neurale, naturalmente é ancora difficile associarlo alle applicazioni di tutti i giorni come computer vision, text processing come chatgpt, etc. Però é un primo passo tangibile per capire senza troppa confusione come funziona veramente l'intelligenza artificiale, senza troppe speculazioni.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Solidity by Examples in ITALIANO!</title>
      <dc:creator>marcomaggiotti</dc:creator>
      <pubDate>Tue, 22 Nov 2022 17:06:09 +0000</pubDate>
      <link>https://dev.to/marcomaggiotti/solidity-by-examples-in-italiano-3gib</link>
      <guid>https://dev.to/marcomaggiotti/solidity-by-examples-in-italiano-3gib</guid>
      <description>&lt;p&gt;Bentornati a tutti, &lt;/p&gt;

&lt;p&gt;mi son chiesto da mesi perché non ci sono tutorial di Solidity abbastanza professionali in Italiano, uno come &lt;a href="https://solidity-by-example.org/"&gt;Solidity By Example&lt;/a&gt; poi mi son detto, forse nessuno ha avuto ancora il tempo di farlo.&lt;/p&gt;

&lt;p&gt;Ed allora qua la soluzione, risultato di mille ricerche accademiche approvate inofficialmente dal CERN di Gubbio, la faccio io e bona lí.&lt;/p&gt;

&lt;p&gt;Non sto a farvi il pippone di Remix &lt;a href="https://remix.ethereum.org/"&gt;https://remix.ethereum.org/&lt;/a&gt; , Truffle, come si usa una testChain, in caso abbiate domande per un ulteriore tutorial su come fare chiedete e vi sarà dato. =)&lt;/p&gt;

&lt;p&gt;Quindi ... &lt;/p&gt;

&lt;h2&gt;
  
  
  Hello World
&lt;/h2&gt;

&lt;p&gt;pragma specifies the compiler version of Solidity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// SPDX-License-Identifier: MIT
// compiler version must be greater than or equal to 0.8.13 
//and less than 0.9.0

// pragma definisce la versione che deve essere usata per la compilazione 

pragma solidity ^0.8.13;

contract HelloWorld {
    string public greet = "Hello World!";
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ora possiamo fare una piccola introduzione per chi non é dentro la programmazione sulle String. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;string public greet = "Hello World!";&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;String sono usate nella maggior parte dei linguaggi come Java, C++, Python. Semplicemente una String é un gruppo di caratteri, un array di Char, cioé simboli rappresentanti lettere e numeri ma sempre riconosciuti come sequenze di caratteri.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;"Hello World!"&lt;/code&gt; per esempio é una String contenente le due parole HELLO WORLD con un carattere di 'spazio' in mezzo. &lt;/p&gt;

&lt;h2&gt;
  
  
  Ma come é definita String in Solidity ?
&lt;/h2&gt;

&lt;p&gt;Solidity supporta String letteralmente usando double quote "" e Single quote. Fornisce "frasi" come un tipo di dato per dichiarare una variabile di tipo String.&lt;/p&gt;

&lt;h2&gt;
  
  
  ANDIAMO SU REMIX
&lt;/h2&gt;

&lt;p&gt;Bene, bene, bene una volta detto tutto questo possiamo provare a vedere se le svariate righe di codice ( 4 righe ) in Solidity funzionano.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d997WPLr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vg256whywx3xv288lp2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d997WPLr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vg256whywx3xv288lp2z.png" alt="Hello World Remix" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first step is to create a file called HelloWorld.sol containing the code; it's good practice to use the same name as the class, in this case the &lt;code&gt;contract&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Compile
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hkV4FEtH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3odh4d6ucz2f38l5nvrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hkV4FEtH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3odh4d6ucz2f38l5nvrg.png" alt="Compile Link" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Right-click on the HelloWorld.sol file and click Compile.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pws77otl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u14bfyavfy18k32w7v3s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pws77otl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u14bfyavfy18k32w7v3s.png" alt="Compiler Menu" width="370" height="621"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Solidity compiler menu we can choose among the available Compiler versions; the desired one is declared in the contract with &lt;code&gt;pragma solidity ^0.8.13;&lt;/code&gt;, but a more recent version may still work.&lt;/p&gt;
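&lt;p&gt;To make that caret rule concrete, here is a tiny sketch (Python, and the helper is purely illustrative, not part of any compiler) of the version range that &lt;code&gt;^0.8.13&lt;/code&gt; encodes: any compiler from 0.8.13 up to, but not including, 0.9.0.&lt;/p&gt;

```python
def satisfies_caret_0_8_13(version):
    """True if `version` satisfies pragma solidity ^0.8.13,
    i.e. at least 0.8.13 but still on the 0.8.x line
    (illustrative sketch, not a full semver parser)."""
    major, minor, patch = (int(x) for x in version.split("."))
    return (major, minor) == (0, 8) and patch >= 13

print(satisfies_caret_0_8_13("0.8.13"))  # True
print(satisfies_caret_0_8_13("0.8.24"))  # True: a newer 0.8.x compiler still works
print(satisfies_caret_0_8_13("0.9.0"))   # False: 0.9.0 may contain breaking changes
```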

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5JmH7xug--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2zximcyjoyqm3o77qthi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5JmH7xug--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2zximcyjoyqm3o77qthi.png" alt="Image description" width="370" height="621"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Compile HelloWorld.sol, just for fun, since we already compiled it in the previous step :D&lt;/p&gt;

&lt;p&gt;We can now see a green badge at the foot of the Solidity Compiler icon, telling us that our contract compiled without errors. &lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy
&lt;/h2&gt;

&lt;p&gt;Now let's go to the Deploy menu and select the environment we want. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mq4mWdpn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9sjt3gfd8dydsv0nkgv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mq4mWdpn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9sjt3gfd8dydsv0nkgv2.png" alt="Deploy Menu Environment" width="365" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this example I'm using the Goerli test network&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you use a local test chain with Truffle or HardHat, you'll need to configure MetaMask for your chain, but we won't cover that topic here.&lt;/p&gt;

&lt;p&gt;We connect with MetaMask by selecting "Injected Provider - MetaMask" from the "Environment" list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cNdGcLSs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/emz9meez7wjjtjd3lyay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cNdGcLSs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/emz9meez7wjjtjd3lyay.png" alt="Deploy Menu" width="370" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the account matching the one in use in MetaMask, select our HelloWorld.sol contract, and click "Deploy". &lt;/p&gt;

&lt;p&gt;At this point, if everything is configured correctly, MetaMask will open and ask you to confirm the transaction required for the Deploy. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4HJn2PHY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nn13faunsdymkqlmgh09.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4HJn2PHY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nn13faunsdymkqlmgh09.png" alt="Image description" width="351" height="670"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking Confirm finalizes the transaction that puts our newly created contract onto the Test Chain. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Fd36gRAh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ntr3hqfxz64chv3xzgme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Fd36gRAh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ntr3hqfxz64chv3xzgme.png" alt="Console Confirm Transaction" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see in the Console that a new transaction has been created: &lt;br&gt;
&lt;code&gt;[block:7999951 txIndex:20]from: 0x1e1...ea4C2to: HelloWorld.(constructor)value: 0 weidata: 0x608...10033logs: 0hash: 0xf3d...738d7&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It is of course named HelloWorld, after our contract; our contract is now on the Goerli Test Network and will stay there as long as the Chain exists.&lt;/p&gt;

&lt;p&gt;Now we can finally play with our Contract: in the "Deployed Contracts" tab we find our contract, live on the test chain. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6WEiBgGv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x3tzrm9nqlb9dxrb1jgy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6WEiBgGv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x3tzrm9nqlb9dxrb1jgy.png" alt="Deployed Contract" width="719" height="950"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By clicking the "greet" button we can interact with the Blockchain; this is the String we made public, and therefore openly reachable on the chain.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CALL
[call]from: 0x1e1Acaec81E5E5E4E2A77EAC1b73D475C43ea4C2to: HelloWorld.greet()data: 0xcfa...e3217
from    0x1e1Acaec81E5E5E4E2A77EAC1b73D475C43ea4C2
to  HelloWorld.greet() 0x3cF36BBD8a0BfCf22da2B3Da1bC88dC1250B596E
input   0xcfa...e3217
decoded input   {}
decoded output  {
    "0": "string: Hello World!"
}
logs    []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This call retrieves the variable &lt;code&gt;string public greet&lt;/code&gt;, returning the string &lt;strong&gt;Hello World!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>beginners</category>
      <category>solidity</category>
      <category>smartcontract</category>
    </item>
    <item>
      <title>Respeaker 4 Mic Review - ITA - PARTE 2</title>
      <dc:creator>marcomaggiotti</dc:creator>
      <pubDate>Thu, 07 Jul 2022 10:16:26 +0000</pubDate>
      <link>https://dev.to/marcomaggiotti/respeaker-4-mic-review-ita-parte-2-39h9</link>
      <guid>https://dev.to/marcomaggiotti/respeaker-4-mic-review-ita-parte-2-39h9</guid>
      <description>&lt;p&gt;Eccoci allora alla parte dell'installazione librerie e primi esperimenti, pronti ?! Dai che ci do, che ci do, che ci do di codice :) &lt;/p&gt;

&lt;p&gt;First, let's do the usual installations to update the various packages and install the audio libs.&lt;/p&gt;

&lt;p&gt;So open your terminal and run the usual boring&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get update&lt;/code&gt; (what a boring command...)&lt;/p&gt;

&lt;h2&gt;
  
  
  Install the drivers, libraries and configuration
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
git clone https://github.com/respeaker/seeed-voicecard.git
cd seeed-voicecard
sudo ./install.sh
sudo reboot now
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, I'm not making anything up here, I'm just reviewing the code. &lt;br&gt;
You can find the same information at &lt;a href="https://wiki.seeedstudio.com/ReSpeaker_4_Mic_Array_for_Raspberry_Pi/" rel="noopener noreferrer"&gt;ReSpeaker_4_Mic_Array_for_Raspberry_Pi&lt;/a&gt;, except it's more fun here :D and maybe I'll spare you the effort of checking whether the repositories are stale, libraries are missing, or updates are needed. (And let's be honest, I'm better at this :D ;p)&lt;/p&gt;

&lt;p&gt;Sorry for the digression; now let's move on to the config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo raspi-config
# Select 1 System Options
# Select S2 Audio
# Select your preferred audio output device
# Select Finish
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course, if you set up your Raspberry in Italian, the config will be in Italian ;P&lt;/p&gt;

&lt;p&gt;So what happens here? We can choose among the ports our Raspberry has available:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpzgc3i8l3druou88vbrs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpzgc3i8l3druou88vbrs.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My old Raspberry 2 has a classic HDMI port and a Jack, but... &lt;br&gt;
Let's say the HDMI port can be useful for first tests or special applications, but dedicating a monitor just to audio is a bit much.&lt;br&gt;
And then the Jack... I mean, between us, who still uses a Jack plug in 2022?! Come on... boomers. No offense to the tinkering Boomers, I'm joking, and by now I'm a Boomer too :D&lt;/p&gt;

&lt;p&gt;So the idea is to attach the bluetooth speaker; mine isn't exactly latest-generation, but at least I can use it in a more agile and practical way, since it's wireless. &lt;/p&gt;

&lt;p&gt;And since it was already configured, I just add it: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6znwsj3q32sfoqcedn1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6znwsj3q32sfoqcedn1.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I had configured it earlier, and honestly I don't remember how, or whether it needed any particular tool, so if you're trying this, please let me know if you ran into any problems :) Feel free to message me on Linkedin. &lt;/p&gt;

&lt;p&gt;OK, now for the main course: trying out the software provided for the ReSpeaker.&lt;/p&gt;
&lt;h2&gt;
  
  
  Sound check and installation test
&lt;/h2&gt;

&lt;p&gt;Let's do a quick check of the sound card:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;arecord -L&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;which should return something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;null
    Discard all samples (playback) or generate zero samples (capture)
jack
    JACK Audio Connection Kit
pulse
    PulseAudio Sound Server
default
playback
ac108
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
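&lt;p&gt;If you want to script this check instead of eyeballing it, here's a minimal sketch (Python; the parsing helper is my own, not part of the Seeed tooling) that looks for the &lt;code&gt;ac108&lt;/code&gt; card in the &lt;code&gt;arecord -L&lt;/code&gt; output:&lt;/p&gt;

```python
import subprocess

def device_names(arecord_output):
    """Device names in `arecord -L` output are the non-indented lines;
    the indented lines below each one are descriptions."""
    return [line for line in arecord_output.splitlines()
            if line and not line.startswith(" ")]

# On the Raspberry itself (requires ALSA) you would run:
# out = subprocess.run(["arecord", "-L"], capture_output=True, text=True).stdout
# and then check: "ac108" in device_names(out)

sample = """null
    Discard all samples (playback) or generate zero samples (capture)
ac108
"""
print("ac108" in device_names(sample))  # True
```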



&lt;p&gt;The SeeedStudio documentation suggests using Audacity for audio tests, but I strongly advise against it, given the performance of our little "device", which can't handle too many applications at once. &lt;br&gt;
For the record, I'm using a Raspberry 3 B+.&lt;/p&gt;

&lt;p&gt;So I'll skip this step entirely, but if you want to, and you're a masochist, you can always find the steps in the link I mentioned earlier and will leave at the bottom of the page :)&lt;/p&gt;
&lt;h2&gt;
  
  
  First Test
&lt;/h2&gt;

&lt;p&gt;Now let's try to record a sound, save it as a wav, and then play it back:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install sox                             //per convertire l'audio
arecord -Dac108 -f S32_LE -r 16000 -c 4 hello.wav    // supporta solamente 4 canali
sox hello.wav -c 2 stereo.wav                        // converte in stereo
aplay stereo.wav                                      // va sul device di default
                                                     // L'audio andrà nel mio caso nella cassa bluetooth

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see that you can play back the freshly recorded audio with the &lt;strong&gt;aplay&lt;/strong&gt; command. &lt;/p&gt;
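&lt;p&gt;The same record-convert-play pipeline can also be driven from a script. A minimal sketch (Python; same file names and flags as above, and assuming &lt;code&gt;arecord&lt;/code&gt;, &lt;code&gt;sox&lt;/code&gt; and &lt;code&gt;aplay&lt;/code&gt; are installed):&lt;/p&gt;

```python
import subprocess

def pipeline_commands(raw="hello.wav", stereo="stereo.wav"):
    """The three steps of the record, convert, play pipeline."""
    return [
        ["arecord", "-Dac108", "-f", "S32_LE", "-r", "16000", "-c", "4", raw],
        ["sox", raw, "-c", "2", stereo],   # downmix the 4 channels to stereo
        ["aplay", stereo],                 # plays on the default output device
    ]

# On the Raspberry you would run each step in order:
# for cmd in pipeline_commands():
#     subprocess.run(cmd, check=True)
print(pipeline_commands()[0][0])  # arecord
```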

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Here we are at the end of the second part of the tutorial: we've seen how to install the drivers, configure the audio output, work around a few technical problems, and run the first tests. &lt;br&gt;
In the next tutorials we'll play with the code, analyze the scripts that come with it, and, in a not-too-distant future, connect to the cloud and build some fun, practical little projects :)&lt;/p&gt;

&lt;p&gt;p.s.: as always, remember that the source I take this information from is: &lt;a href="https://wiki.seeedstudio.com/ReSpeaker_4_Mic_Array_for_Raspberry_Pi/" rel="noopener noreferrer"&gt;https://wiki.seeedstudio.com/ReSpeaker_4_Mic_Array_for_Raspberry_Pi/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy turbo-coding tinkering, everyone!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Respeaker 4 Mic Review - ITA - PARTE 1</title>
      <dc:creator>marcomaggiotti</dc:creator>
      <pubDate>Mon, 27 Jun 2022 07:23:45 +0000</pubDate>
      <link>https://dev.to/marcomaggiotti/respeaker-4-mic-review-ita-parte-1-2l9g</link>
      <guid>https://dev.to/marcomaggiotti/respeaker-4-mic-review-ita-parte-1-2l9g</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WXsG1u5L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4zv69nezw8zv7p1ml6ml.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WXsG1u5L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4zv69nezw8zv7p1ml6ml.jpeg" alt="Image description" width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2E_Jp0UO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wf0hqgncko2sn18uw7si.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2E_Jp0UO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wf0hqgncko2sn18uw7si.jpeg" alt="Image description" width="800" height="714"&gt;&lt;/a&gt;&lt;br&gt;
I bought a fun Raspberry Hat that I wanted to look at together with you: the ReSpeaker 4-mic Array.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A quad-microphone expansion board for the Raspberry Pi, designed entirely to serve AI and Voice Recognition applications. WHAT !?!?! R U SERIOUS ?! ABSOLUTELY !!!&lt;/p&gt;

&lt;p&gt;What does this mean? It isn't a microphone you can grab anywhere for every purpose, for your PC, for streaming or recording; no, if you want to build voice-recognition applications on the Raspberry, this product is dedicated and built for that specific use. &lt;br&gt;
From the manufacturer's information you can infer that it can be a much more powerful tool than Alexa or Google Assistant.&lt;br&gt;
From a certain point of view it could give you a capability advantage right from the start; in any case, it tells us that if we want to push on performance this could be the right tool, and later we'll see why, along with the software advantage. &lt;br&gt;
One small piece of advice, though: if you just want to play a bit with voice recognition, I suggest first experimenting with a normal USB microphone, even on your PC, and then leveling up. In short, don't do what I do: I have the bad habit of buying products far more advanced than the use I have for them, convinced I'll need them, only for them to sit in a drawer for years :D&lt;/p&gt;

&lt;p&gt;Anyway, don't worry: later on I'll publish tutorials on how to prototype applications on a PC and then port them to the Raspberry with due care, preparing a "package" that's easy to install on a device less powerful than a PC, like the Raspberry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Back to our specs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AC108&lt;/li&gt;
&lt;li&gt;Quad-channel ADC with I2S/TDM, for listening within a 3-meter range&lt;/li&gt;
&lt;li&gt;LED ring with 12 programmable APA102 LEDs&lt;/li&gt;
&lt;li&gt;Grove I2C interface connected to I2C-1 &lt;/li&gt;
&lt;li&gt;Grove digital port interface connected to GPIO12/13&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What do all these acronyms mean? I don't know either, so don't worry: we need to use it, not assemble it, right?! Knowing the components can come in handy at an advanced stage of a project, when you want to dig into the libraries, or when you hit problems and errors and need to figure out how to "FIX" them.&lt;/p&gt;

&lt;p&gt;What we can say is that this board, with its preinstalled software algorithms, lets us do a lot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The software algorithms give us:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VAD ( Voice Activity Detection )&lt;/li&gt;
&lt;li&gt;DOA ( Direction of Arrival ) :O&lt;/li&gt;
&lt;li&gt;KWS ( Keyword Search ): spotting keywords and indicating their direction with the LED ring.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Phew... what a beast!!! In short, here we have everything we need to experiment with new applications and play with it like there's no tomorrow.&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://wiki.seeedstudio.com/ReSpeaker_4_Mic_Array_for_Raspberry_Pi/"&gt;https://wiki.seeedstudio.com/ReSpeaker_4_Mic_Array_for_Raspberry_Pi/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
