Secure, Deliver and Optimize GenAI Apps with F5
Welcome to the Secure, Deliver and Optimize GenAI Apps with F5 lab guide.
The content here is delivered through a full DevOps CI/CD pipeline and is sourced from the GitHub repository at https://github.com/f5devcentral/secure-deliver-optimize-genai. Bug reports and enhancement requests can be submitted by opening an Issue in the repository.
Contents:
- Introduction - GenAI Practitioner
- Prerequisite
- Class 1: The Fundamentals of Generative Artificial Intelligence (AI)
- AI Primer
- What is AI and how does it differ from GenAI?
- What is LLM?
- What is SLM?
- What is ML?
- What does hallucination mean in AI?
- What does “token” mean in the context of AI?
- What is quantization?
- What is Retrieval Augmented Generation?
- What is Agentic RAG?
- What is vectorizing in AI?
- What is embedding in AI?
- What is a “context window” in AI?
- What is “temperature” in AI?
- Class 2: Deploy and Secure a modern application
- Class 3: Architect, build and deploy AI Services
- 1 - Conceptual Architecture of AI Services
- 2 - Deploy Nginx Ingress Controller for AIGW K8S
- 3 - Deploy Open-WebUI with Ollama Service
- 4 - Download Language Model
- 5 - Deploy LLM model service (Ollama)
- 6 - Deploy LLM orchestrator service (Flowise AI)
- 7 - Deploy Vector Database (Qdrant)
- 8 - Build RAG pipeline with FlowiseAI/LangChain
- Class 4: Enhance Modern Application with GenAI ChatBot
- Class 5: Secure, Deliver and Optimize GenAI ChatBot
- Class 6: LLM Red Teaming