- Loss metric: Measures how wrong a model's predictions are. Lower loss is better.
- Cosine distance of 0: Indicates two embeddings point in the same direction (cosine similarity of 1), i.e., they are maximally similar in direction.
- RAG (Retrieval Augmented Generation): Uses external information to improve text generation.
- String prompt templates: Can use any number of variables.
- Retrievers in LangChain: Retrieve relevant information from knowledge bases.
- Indexing in vector databases: Organizes vectors into search structures (e.g., trees or HNSW graphs) to speed up nearest-neighbor search.
- Accuracy: Measures correct predictions out of total predictions.
- Keyword-based search: Evaluates documents based on keyword presence and frequency.
- Soft prompting: Adds learnable prompt parameters to a Large Language Model (LLM), useful when full task-specific training is not feasible.
- Greedy decoding: Selects the most probable word at each step in text generation.
- T-Few fine-tuning: Updates only a fraction of model weights.
- LangChain: Python library for building LLM applications.
- Prompt templates: Use Python's str.format syntax for templating.
- RAG Sequence model: Retrieves a set of relevant documents for each query and uses the entire set to generate the complete response.
- Temperature in decoding: Influences probability distribution over vocabulary.
- LLM in a chatbot: Generates the linguistic output (the responses shown to the user).
- Chain interaction with memory: Memory is read before chain execution and updated after it.
- Challenge with diffusion models for text: Text is categorical (discrete), unlike the continuous data diffusion models handle well.
- Vector databases vs. relational databases: Vector databases retrieve by distance and similarity in embedding space, rather than by exact matches on structured fields.
- StreamlitChatMessageHistory: Stores messages in Streamlit session state, not persisted.
- Semantic relationships in vector databases: Crucial for LLM understanding and generation.
- Groundedness vs. Answer Relevance: Groundedness focuses on factual correctness (support in the retrieved context); Answer Relevance focuses on relevance to the query.
- Fine-tuning vs. PEFT: Fine-tuning trains the entire model; PEFT updates only a small subset of parameters.
- Fine-tuning appropriateness: When LLM doesn't perform well and prompt engineering is insufficient.
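The cosine-distance bullet above can be sketched in plain Python (a minimal illustration, not a production implementation): a distance of 0 falls out for any two vectors that are scaled copies of each other.

```python
import math

def cosine_distance(u, v):
    # Cosine distance = 1 - cosine similarity; 0 means same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# One vector is a scaled copy of the other -> same direction -> distance ~0.
print(cosine_distance([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))
# Orthogonal vectors -> distance 1.0.
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))
```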
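The greedy-decoding and temperature bullets can be tied together in one toy sketch (the vocabulary and logits below are made up for illustration): greedy decoding is an argmax over the probability distribution, and dividing the logits by a temperature before the softmax flattens or sharpens that distribution.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Scale logits by 1/temperature, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["cat", "dog", "bird"]      # toy vocabulary
logits = [2.0, 1.0, 0.5]            # toy model scores for the next token

# Greedy decoding: select the most probable token at this step.
probs = softmax_with_temperature(logits, temperature=1.0)
greedy_token = vocab[probs.index(max(probs))]

# A higher temperature flattens the distribution, making sampling
# less deterministic; a lower one sharpens it toward the greedy choice.
flat = softmax_with_temperature(logits, temperature=10.0)
```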
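The prompt-template bullet refers to Python's `str.format` placeholder syntax; the stdlib-only sketch below (with a made-up template) shows the same `{variable}` substitution that LangChain's string prompt templates use, and why such templates can take any number of variables.

```python
# Hypothetical RAG-style template; any number of {placeholders} may appear.
template = (
    "Answer the question using only the context.\n"
    "Context: {context}\n"
    "Question: {question}"
)

prompt = template.format(
    context="LangChain is a Python library for building LLM applications.",
    question="What is LangChain?",
)
print(prompt)
```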
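The keyword-based-search bullet can be illustrated with a toy term-frequency scorer (a deliberately simplified sketch; real systems use weighting schemes such as BM25): a document scores higher the more often the query's keywords appear in it.

```python
def keyword_score(query, document):
    # Toy scoring: count occurrences of each query keyword in the document.
    doc_words = document.lower().split()
    return sum(doc_words.count(word) for word in query.lower().split())

docs = [
    "vector databases index embeddings",
    "keyword search counts keyword frequency",
]
scores = [keyword_score("keyword frequency", d) for d in docs]
# The second document mentions the keywords; the first does not.
print(scores)
```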