🚀 Project Journey #1: Picking the Tech Stack & Diving into AI 🤖

Hey everyone! 👋

Today marks the beginning of my journey to build an AI-powered app that can analyze contracts and help users understand how risky they might be. 🌍⚖️

I'm a 4th-year apprentice developer, so my main goal with this project is to learn by doing (and hopefully not break too many things in the process 😅). So, let's jump into what I've been up to today!

๐Ÿ› ๏ธ Choosing the Tech Stack

After a lot of research (and a few cups of coffee ☕), I've finally decided on the tech stack I'll be using for this project:

Backend:

  • Python (with Flask 🚀) for the API
    • Why Flask? It's lightweight, super easy to get started with, and perfect for building quick APIs to serve our AI models. Today, I learned how to set up a basic Flask API endpoint, and honestly... it was way easier than I expected! (There's a quick sketch of it right below.)
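Here's roughly what that first endpoint looks like. This is just a throwaway sketch: the /analyze route and the response fields are placeholders I made up, not the final API.

   # Minimal Flask sketch: one POST endpoint that will eventually call the AI model.
   # The route name and the response shape are placeholders for illustration.
   from flask import Flask, request, jsonify

   app = Flask(__name__)

   @app.route('/analyze', methods=['POST'])
   def analyze():
       data = request.get_json() or {}
       contract_text = data.get('text', '')
       # For now, just echo back some basic info; the NLP model comes later
       return jsonify({'characters': len(contract_text), 'risk': 'not analyzed yet'})

   if __name__ == '__main__':
       app.run(debug=True)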

💡 Frontend:

  • Next.js (React framework) for the UI 🌐
    • Next.js will allow us to build a fast and SEO-friendly frontend. It's got server-side rendering, which means faster initial load times and better perceived performance. 📈

AI Model:

  • Python (again, because who doesn't love Python, right? 🐍)
    • The AI will be using NLP (Natural Language Processing) models. Specifically, I'm looking into using BERT or GPT-like models from the HuggingFace library. These models are like super-smart language nerds that can understand and analyze human text. 🤓

🚀 Understanding Transformers and NLP Models 🤖

๐Ÿ” What Are Transformers? ๐Ÿค–

Transformers are a type of deep learning model that revolutionized the field of NLP. They were introduced in a landmark paper called "Attention is All You Need" by Vaswani et al. in 2017. Unlike older models like RNNs (Recurrent Neural Networks) or LSTMs (Long Short-Term Memory networks), transformers are highly efficient at processing long sequences of text in parallel, which makes them both faster and more accurate.

🧠 Key Concept: Attention Mechanism

The secret sauce behind transformers is the attention mechanism. Attention allows the model to focus on the most relevant parts of the input text, regardless of its position in the sequence. Think of it like reading a contract: instead of reading every single word, your brain automatically zooms in on the important parts, like "hidden fees" or "data sharing." 🕵️‍♂️

Here's how it works in simple terms:

  1. Understanding Context: Attention helps the model understand the relationship between different words in a sentence, even if they're far apart.

    • Example: In the sentence, "The cat, which was very hungry, finally ate the food," the model knows that "ate" is related to "cat" even though there are many words in between.
  2. Calculating Attention Scores: The model computes an attention score for every pair of words in the sequence, so each word "knows" how much to focus on every other word. Higher scores mean more relevance to the word currently being processed.

    • If you're analyzing a contract for risky clauses, the model might give high scores to words like "penalty," "termination," or "data sharing."

🧮 A Quick Peek at the Math

Each word in the input is transformed into a vector (a list of numbers) using something called word embeddings. The model then uses three vectors for each word:

  • Query (Q): What is the current word looking for?
  • Key (K): What does each word offer? Queries are matched against keys to decide how relevant each word is.
  • Value (V): The actual information a word contributes once it's deemed relevant.

These vectors are used to calculate the attention scores, which determine how much focus should be given to each word in the sentence.

The formula looks like this:

Attention(Q, K, V) = softmax( QK^T / √d_k ) · V

Where:

  • ( QK^T ) is the dot product of the query and key vectors.
  • ( d_k ) is the dimension of the key vectors; dividing by its square root keeps the scores from blowing up.
  • Softmax is a function that converts the scores into probabilities that sum to 1 (there's a small code sketch of the whole thing just below).
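To make that formula concrete, here's a tiny NumPy sketch of scaled dot-product attention. The matrices are random toy values (3 words, embedding size 4), purely to show the mechanics:

   import numpy as np

   def softmax(x):
       # Subtract the max for numerical stability, then normalize into probabilities
       e = np.exp(x - x.max(axis=-1, keepdims=True))
       return e / e.sum(axis=-1, keepdims=True)

   def attention(Q, K, V):
       d_k = K.shape[-1]
       scores = Q @ K.T / np.sqrt(d_k)   # QK^T, scaled by sqrt(d_k)
       weights = softmax(scores)         # attention scores as probabilities
       return weights @ V                # weighted sum of the value vectors

   # Toy example: 3 "words", each represented by a vector of 4 numbers
   Q = np.random.rand(3, 4)
   K = np.random.rand(3, 4)
   V = np.random.rand(3, 4)
   print(attention(Q, K, V).shape)  # (3, 4): one new context-aware vector per word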

📚 How NLP Models Like BERT and GPT Use Transformers

Transformers are the backbone of popular NLP models like BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer). Here's a breakdown of each:

🔎 BERT (Bidirectional Encoder Representations from Transformers)

  • Purpose: BERT is great for understanding the context of words in both directions (left-to-right and right-to-left). This makes it perfect for tasks like text classification, question answering, and contract analysis.
  • Architecture: BERT is made up of encoders only, which means it's focused on understanding the input text deeply.
  • Training: It's pre-trained on a huge amount of text data with two tasks:
    1. Masked Language Modeling: Predicting missing words in a sentence (there's a quick code sketch after this list).
      • Example: "The [MASK] is in the garden" → "The cat is in the garden."
    2. Next Sentence Prediction: Determining if one sentence logically follows another.
      • Example: "He opened the door." โ†’ "He walked into the room." (Yes) vs. "She went to the store." (No)

โœ๏ธ GPT (Generative Pre-trained Transformer)

  • Purpose: GPT is designed for generating text. It's excellent for tasks like text completion, content creation, and even conversational AI.
  • Architecture: GPT uses decoders only, which means it's focused on generating new text based on given input.
  • Training: GPT is trained on a vast dataset to predict the next word in a sentence.
    • Example: "Once upon a time..." โ†’ "Once upon a time, there was a brave knight."

Key Difference:

  • BERT is bidirectional (understands the full context).
  • GPT is unidirectional (predicts the next word based on past context).

๐Ÿ› ๏ธ How to Build Your AI for Contract Analysis

1. Data Preparation

  • Collect and clean a dataset of contracts.
  • Label each contract with a risk level (I'm planning to keep this in a simple CSV dataset, roughly sketched below).
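I haven't finalized the dataset yet, so this is only a rough sketch of the shape I have in mind: a 'text' column with the clause and a 'label' column with the risk level. The column names and the 0-4 scale are my assumptions, chosen to match the training snippet below.

   import pandas as pd

   # Hypothetical labeled examples; 0 = harmless ... 4 = very risky (assumed scale)
   df = pd.DataFrame({
       'text': [
           'Either party may terminate this agreement with 30 days written notice.',
           'The provider may share user data with third parties without notice.'
       ],
       'label': [1, 4]
   })
   df.to_csv('contracts.csv', index=False)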

2. Model Selection

  • For contract analysis, BERT or a variant like RoBERTa (a more robustly pre-trained version of BERT) could be a good fit because it's great at understanding context.
  • Use the Hugging Face Transformers library to access these models.

3. Training the Model

   from transformers import BertTokenizer, BertForSequenceClassification, Trainer, TrainingArguments
   from datasets import load_dataset

   # Load dataset (the CSV is expected to have a 'text' column and an integer 'label' column)
   dataset = load_dataset('csv', data_files='contracts.csv')

   # The CSV loader puts everything into a single 'train' split, so carve out a test set
   dataset = dataset['train'].train_test_split(test_size=0.2)

   # Tokenize data
   tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
   def tokenize_function(example):
       return tokenizer(example['text'], truncation=True)
   tokenized_datasets = dataset.map(tokenize_function, batched=True)

   # Load pre-trained BERT model with a 5-class classification head
   model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=5)

   # Set up training arguments
   training_args = TrainingArguments(
       output_dir='./results',
       evaluation_strategy='epoch',
       learning_rate=2e-5,
       per_device_train_batch_size=16,
       num_train_epochs=3
   )

   # Train model (passing the tokenizer lets the Trainer pad each batch dynamically)
   trainer = Trainer(
       model=model,
       args=training_args,
       train_dataset=tokenized_datasets['train'],
       eval_dataset=tokenized_datasets['test'],
       tokenizer=tokenizer
   )
   trainer.train()
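Once training finishes, here's roughly how the fine-tuned model could score a new clause (reusing the tokenizer and model from the snippet above; the risk scale is still my placeholder assumption):

   import torch

   clause = "The company may change these terms at any time without notifying the user."
   inputs = tokenizer(clause, return_tensors='pt', truncation=True)
   with torch.no_grad():
       logits = model(**inputs).logits
   risk_level = logits.argmax(dim=-1).item()
   print(f"Predicted risk level: {risk_level} (0 = safest, 4 = riskiest on my assumed scale)")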

Don't worry, the real code will be better xD

Thank you for reading this article! Don't hesitate to give me some advice, and like this post if you enjoyed it!

0x2e73
