Michael Anderson

GENERATIVE AI SUMMARY

  1. Loss metric: Measures how wrong a model's predictions are. Lower loss is better.
  2. Cosine distance of 0: Indicates two embeddings point in the same direction (cosine similarity of 1); see the cosine-distance sketch after this list.
  3. RAG (Retrieval Augmented Generation): Uses external, retrieved information to improve text generation; see the RAG sketch after this list.
  4. String prompt templates: Can use any number of variables, including none; see the prompt-template sketch after this list.
  5. Retrievers in LangChain: Retrieve relevant information from knowledge bases; see the retriever sketch after this list.
  6. Indexing in vector databases: Maps vectors to a search-friendly structure so similarity queries run faster; see the indexing sketch after this list.
  7. Accuracy: Measures correct predictions out of total predictions.
  8. Keyword-based search: Evaluates documents based on keyword presence and frequency; see the keyword-scoring sketch after this list.
  9. Soft prompting: Appropriate when you need to add learnable prompt parameters to a Large Language Model (LLM) without task-specific training.
  10. Greedy decoding: Selects the most probable word at each step of text generation; see the decoding sketch after this list.
  11. T-Few fine-tuning: Updates only a fraction of model weights.
  12. LangChain: Python library for building LLM applications.
  13. Prompt templates: Use Python's str.format syntax for templating.
  14. RAG Sequence model: Retrieves multiple relevant documents for each query.
  15. Temperature in decoding: Rescales the probability distribution over the vocabulary; lower values sharpen it, higher values flatten it (see the temperature sketch after this list).
  16. LLM in chatbot: Generates linguistic output.
  17. Chain interaction with memory: A chain reads from memory before executing its core logic and writes back to memory after execution.
  18. Challenge with diffusion models for text: Text is categorical (discrete), unlike the continuous data diffusion models handle naturally.
  19. Vector databases vs. relational databases: Vector databases query by distance and similarity between embeddings rather than by exact matches on structured rows.
  20. StreamlitChatMessageHistory: Stores messages in Streamlit session state; they are not persisted across sessions (see the Streamlit sketch after this list).
  21. Semantic relationships in vector databases: Crucial for LLM understanding and generation.
  22. Groundedness vs. Answer Relevance: Groundedness measures whether the response is factually supported by the retrieved context; Answer Relevance measures how well it addresses the query.
  23. Fine-tuning vs. PEFT: Fine-tuning updates the entire model; PEFT (Parameter-Efficient Fine-Tuning) updates only a small subset of parameters (see the LoRA sketch after this list).
  24. Fine-tuning appropriateness: When the LLM doesn't perform well on a task and prompt engineering alone is insufficient.
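
A minimal sketch of point 2, using only NumPy; the vectors are made-up examples. Two embeddings that point the same way give a cosine distance near 0.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity."""
    sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - sim

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([2.0, 4.0, 6.0])   # same direction, different magnitude
print(cosine_distance(v1, v2))   # ~0.0: the embeddings point the same way
```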
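
A prompt-template sketch for points 4 and 13, assuming a recent LangChain release where PromptTemplate lives in langchain_core.prompts (older versions import it from langchain.prompts). The template text and variable names are made up.

```python
from langchain_core.prompts import PromptTemplate

# Variables are declared with Python's str.format-style placeholders;
# a string prompt template can take any number of them.
template = PromptTemplate.from_template(
    "Summarize the following {doc_type} for a {audience} audience:\n\n{text}"
)
prompt = template.format(doc_type="blog post", audience="technical", text="...")
print(prompt)
```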
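
A retriever sketch for point 5, assuming the faiss-cpu, langchain-community, and langchain-openai packages plus an OpenAI API key; the documents and query are made up.

```python
from langchain_core.documents import Document
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

docs = [
    Document(page_content="T-Few fine-tuning updates only a fraction of model weights."),
    Document(page_content="Greedy decoding selects the most probable word at each step."),
]

# Build a vector store, then expose it as a retriever over the knowledge base.
vectorstore = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 1})
relevant = retriever.invoke("What does T-Few fine-tuning update?")
print(relevant[0].page_content)
```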
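
Building on the retriever above, a minimal RAG sketch for point 3; ChatOpenAI, the model name, and the prompt wording are assumptions, not part of the original post.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini")   # assumes OPENAI_API_KEY is set

# Retrieved documents are injected as context before generation.
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}   # `retriever` from the sketch above
    | prompt
    | llm
    | StrOutputParser()
)
print(rag_chain.invoke("What does T-Few fine-tuning update?"))
```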
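
An indexing sketch for point 6 using FAISS directly (an assumption; any approximate-nearest-neighbour library would do): an IVF index buckets vectors into clusters so a query scans only a few buckets instead of the whole collection.

```python
import numpy as np
import faiss   # pip install faiss-cpu

dim, n = 64, 10_000
vectors = np.random.random((n, dim)).astype("float32")

# IVF index: vectors are assigned to 100 clusters at build time,
# so a search visits a handful of clusters rather than all n vectors.
quantizer = faiss.IndexFlatL2(dim)
index = faiss.IndexIVFFlat(quantizer, dim, 100)
index.train(vectors)
index.add(vectors)

query = np.random.random((1, dim)).astype("float32")
distances, ids = index.search(query, 5)   # 5 approximate nearest neighbours
print(ids)
```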
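
A toy keyword-scoring sketch for point 8 (plain Python, no library; real systems use BM25 or similar): documents are ranked by how often the query's keywords appear in them.

```python
from collections import Counter

def keyword_score(query, document):
    """Score a document by the presence and frequency of the query's keywords."""
    keywords = query.lower().split()
    counts = Counter(document.lower().split())
    return sum(counts[k] for k in keywords)

docs = [
    "Vector databases index embeddings for fast similarity search",
    "Keyword search ranks documents by keyword presence and frequency",
]
ranked = sorted(docs, key=lambda d: keyword_score("keyword search", d), reverse=True)
print(ranked[0])
```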
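
A decoding sketch for point 10 with a toy stand-in for the model's logits (everything here is made up): greedy decoding simply takes the argmax at every step.

```python
import numpy as np

def greedy_decode(logits_fn, start_tokens, max_new_tokens=4):
    """At each step, append the single most probable next token."""
    tokens = list(start_tokens)
    for _ in range(max_new_tokens):
        logits = logits_fn(tokens)             # model call: scores over the vocabulary
        tokens.append(int(np.argmax(logits)))  # greedy choice: highest score wins
    return tokens

# Toy "model": always prefers the token after the last one (vocabulary of size 5).
toy_logits = lambda toks: np.eye(5)[(toks[-1] + 1) % 5]
print(greedy_decode(toy_logits, [0]))   # [0, 1, 2, 3, 4]
```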
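
A temperature sketch for point 15: the same logits give a sharper distribution at low temperature and a flatter one at high temperature (the logit values are arbitrary).

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Rescale logits by temperature before applying softmax."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()              # numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.5))  # sharper: mass concentrates on the top token
print(softmax_with_temperature(logits, 2.0))  # flatter: probabilities move closer together
```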
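
A Streamlit sketch for point 20, assuming the streamlit and langchain-community packages (run it with `streamlit run app.py`); the echo reply is a placeholder for a real LLM call.

```python
import streamlit as st
from langchain_community.chat_message_histories import StreamlitChatMessageHistory

# Messages live in st.session_state under this key; they are not persisted
# beyond the current browser session.
history = StreamlitChatMessageHistory(key="chat_messages")

if user_input := st.chat_input("Say something"):
    history.add_user_message(user_input)
    history.add_ai_message(f"Echo: {user_input}")   # placeholder, not a real LLM

for msg in history.messages:
    st.chat_message(msg.type).write(msg.content)
```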
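
A PEFT sketch for point 23 using LoRA from Hugging Face's peft library with GPT-2 as a small stand-in model (both are assumptions, not the post's setup): only the injected low-rank adapter weights are trainable, while the base weights stay frozen.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")   # small stand-in model

# LoRA adds low-rank adapters to the attention projection; the base model is frozen.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], lora_dropout=0.05)
model = get_peft_model(base, config)

model.print_trainable_parameters()   # only a small fraction of parameters is trainable
```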
