February 13th | 2 PM EST
We all want to get the most out of our LLMs, but fine-tuning, let alone training from scratch, is a resource-intensive and expensive undertaking. This webinar dives into techniques, methodologies, and best practices for optimizing LLMs without any fine-tuning involved.
Key topics
- Prompt optimization and evals: TDD basics for LLMs, three paths to evals, and working with synthetic data.
- Optimization with production insights: Tuning vs. optimization, RLHF, and advanced RAG optimization techniques such as self-querying, contextual compression, and parent-child chunking.
- LLM architectures, deployments, and impacts on optimization: Model pruning, quantization, semantic caching (sketched below), and edge deployment.
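To give a flavor of one agenda item, semantic caching reuses a cached LLM response when a new query is semantically close to one already answered, saving a model call. The snippet below is a minimal illustrative sketch, not the presenters' implementation: `embed()` is a toy bag-of-words stand-in for a real embedding model, and the `0.85` threshold is an arbitrary example value.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words, unit-normalized."""
    vec = np.zeros(256)
    for tok in text.lower().split():
        vec[hash(tok.strip(".,!?")) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class SemanticCache:
    def __init__(self, threshold: float = 0.85):
        self.threshold = threshold
        self.entries: list[tuple[np.ndarray, str]] = []  # (query embedding, response)

    def get(self, query: str) -> str | None:
        """Return a cached response if some stored query is similar enough, else None."""
        q = embed(query)
        for vec, response in self.entries:
            # Dot product of unit vectors = cosine similarity.
            if float(q @ vec) >= self.threshold:
                return response
        return None

    def put(self, query: str, response: str) -> None:
        self.entries.append((embed(query), response))

cache = SemanticCache()
cache.put("What is semantic caching?", "<cached LLM answer>")
print(cache.get("what is semantic caching"))    # near-identical phrasing: cache hit
print(cache.get("How does quantization work?")) # unrelated query: None, call the LLM
```

In production this linear scan would typically be replaced with a vector index, but the core idea, short-circuiting repeat questions before they reach the model, is the same.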