What You'll Build: Semantic search engines, PDF QA systems, conversational agents, multi-agent workflows, and a fully deployed production application with observability and safety mechanisms.
Bonus: Explore OpenAI's ChatGPT Agent Kit for building custom GPT agents.
Perfect for engineers ready to lead GenAI initiatives in their organizations.
GenAI Engineering Training Curriculum
Week 1: GenAI Foundations
Session 1: Introduction to Generative AI
Evolution from AI → ML → DL → GenAI
Core components: LLMs, Embeddings, Vector DBs, RAG, Agents
Industry use cases in healthcare, legal, and retail
Open vs closed source LLMs
Session 2: Prompt Engineering & Embeddings
Prompt Engineering Fundamentals:
Prompt structure and formatting best practices
Zero-shot, Few-shot, Chain-of-Thought techniques (see the sketch after this list)
ReAct, Tree-of-Thought, and prompt tuning strategies
Common prompt pitfalls and debugging
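To make the techniques above concrete, here is a minimal sketch of a few-shot prompt with an explicit chain-of-thought cue, using the OpenAI Python SDK; the model name and the ticket-classification task are illustrative placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Few-shot examples plus an explicit "think step by step" cue (chain-of-thought).
messages = [
    {"role": "system", "content": "You classify support tickets as 'billing', 'technical', or 'other'."},
    {"role": "user", "content": "Ticket: 'I was charged twice this month.'"},
    {"role": "assistant", "content": "billing"},
    {"role": "user", "content": "Ticket: 'The app crashes when I upload a file.'"},
    {"role": "assistant", "content": "technical"},
    {"role": "user", "content": "Ticket: 'My invoice PDF will not download.' Think step by step, then answer with one label."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```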
Text Embeddings & Semantic Similarity:
Intro to text embeddings and semantic similarity
Embedding models: OpenAI, Cohere, HuggingFace
Understanding vector dimensions and inference performance
Comparing cosine similarity vs Euclidean distance
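A minimal sketch of computing embeddings and comparing them with cosine similarity, assuming the OpenAI embeddings API and NumPy; the model name and example sentences are placeholders.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

texts = ["How do I reset my password?", "Steps to recover account access", "Today's lunch menu"]
resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
vectors = np.array([item.embedding for item in resp.data])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related sentences score higher than the unrelated one.
print(cosine_similarity(vectors[0], vectors[1]))  # related pair
print(cosine_similarity(vectors[0], vectors[2]))  # unrelated pair
```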
Week 2: Vector Databases & RAG Basics
Session 3: Vector Databases
Intro to FAISS, Pinecone, Chroma, Weaviate
Indexing strategies: Flat, IVF, HNSW
Vector CRUD operations and metadata filtering
Performing semantic search over chunked documents
Assignment: Build a basic semantic search engine using ChromaDB
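A possible starting point for the assignment, using ChromaDB's in-memory client and its default embedding function; the document contents, IDs, and metadata are invented for illustration.

```python
import chromadb

client = chromadb.Client()  # in-memory; use chromadb.PersistentClient(path=...) to keep data on disk
collection = client.create_collection(name="docs")

# Add chunked documents with metadata; Chroma embeds them with its default embedding function.
collection.add(
    ids=["chunk-1", "chunk-2", "chunk-3"],
    documents=[
        "Refunds are processed within 5 business days.",
        "Password resets require email verification.",
        "Our offices are closed on public holidays.",
    ],
    metadatas=[{"source": "billing"}, {"source": "account"}, {"source": "general"}],
)

# Semantic search with optional metadata filtering.
results = collection.query(
    query_texts=["How long does a refund take?"],
    n_results=2,
    where={"source": "billing"},
)
print(results["documents"])
```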
Session 4: Retrieval-Augmented Generation (RAG) with LangChain
Understanding the RAG pipeline and architecture
Chunking, indexing, and context injection
LangChain retriever + prompt + LLM orchestration
Template design for context-aware prompts
Assignment: Build a PDF QA app using RAG and LangChain
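One way the assignment's pipeline could be wired with LangChain (loader, splitter, FAISS index, retriever, prompt, LLM); the PDF path, chunk sizes, and model names are assumptions, and exact import paths vary across LangChain versions.

```python
from langchain_community.document_loaders import PyPDFLoader   # requires pypdf
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS              # requires faiss-cpu
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# 1. Load and chunk the PDF (path is a placeholder).
docs = PyPDFLoader("handbook.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=100).split_documents(docs)

# 2. Index the chunks and expose a retriever.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# 3. Context-aware prompt template + LLM.
prompt = ChatPromptTemplate.from_template(
    "Answer using only the context below. If the answer is not there, say so.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini")

question = "What is the vacation policy?"
context = "\n\n".join(d.page_content for d in retriever.invoke(question))
answer = (prompt | llm).invoke({"context": context, "question": question})
print(answer.content)
```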
Week 3: RAG Frameworks
Session 5: RAG with Phidata / Agno
Modular RAG workflows using Phidata or its successor Agno
Handling complex inputs and retrieval logic
Lightweight integration with LLM APIs
Combining structured and unstructured sources
Session 6: RAG with Amazon Bedrock
Using Amazon Titan, Anthropic Claude, and other foundation models via Bedrock (see the sketch after this session)
Setting up and securing a RAG workflow in Bedrock
Integration with S3, RDS, or DynamoDB
Monitoring cost and performance
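A rough sketch of calling Bedrock models with boto3: Titan for embeddings and Claude via the Converse API for generation. The region and model IDs are examples and depend on what your AWS account has enabled.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Embed a document chunk with Amazon Titan (model ID may vary by region/version).
emb_resp = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",
    body=json.dumps({"inputText": "Refunds are processed within 5 business days."}),
)
embedding = json.loads(emb_resp["body"].read())["embedding"]

# Generate an answer with Claude via the Converse API, passing retrieved context.
gen_resp = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Context: refunds take 5 business days.\n\nQuestion: How long do refunds take?"}],
    }],
)
print(gen_resp["output"]["message"]["content"][0]["text"])
```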
Week 4: Advanced RAG
Session 7: RAG Workflow with LangFlow
LangFlow overview and no-code chaining
Node types: retriever, prompt, LLM, output
Combining structured and unstructured tools
Exporting and deploying a LangFlow project
Session 8: Advanced RAG Techniques
Advanced chunking strategies and document preprocessing
Hybrid search: combining semantic and keyword search
Query transformation and routing
Context compression and reranking
Assignment: Implement hybrid search with reranking in a RAG pipeline
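As a reranking building block, here is a small, framework-free sketch of reciprocal rank fusion (RRF), one common way to merge keyword and vector result lists; the document IDs are hypothetical.

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of document IDs into one ranking (RRF)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical top results from two retrievers for the same query.
keyword_hits = ["doc-7", "doc-2", "doc-9"]   # e.g. BM25 keyword search
semantic_hits = ["doc-2", "doc-4", "doc-7"]  # e.g. vector similarity search

fused = reciprocal_rank_fusion([keyword_hits, semantic_hits])
print(fused)  # doc-2 and doc-7 rise to the top because both retrievers agree on them
```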
Week 5: Agentic AI
Session 9: Agentic AI Concepts + LangChain Agents
What is Agentic AI? Tools, Memory, Planning
LangChain agent executor and agent types
Connecting LLMs with tools (API, calculator, search)
Adding memory to track multi-turn context
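A minimal sketch of a tool-calling LangChain agent with a single custom tool; the tool, its data, and the model name are made up, and memory is omitted for brevity.

```python
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent

@tool
def get_exchange_rate(currency: str) -> str:
    """Return a (hypothetical) USD exchange rate for a currency code."""
    rates = {"EUR": "0.92", "INR": "83.2"}
    return rates.get(currency.upper(), "unknown")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Use tools when needed."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),  # where tool calls and results are injected
])

llm = ChatOpenAI(model="gpt-4o-mini")
agent = create_tool_calling_agent(llm, [get_exchange_rate], prompt)
executor = AgentExecutor(agent=agent, tools=[get_exchange_rate], verbose=True)

print(executor.invoke({"input": "How many euros is 100 US dollars?"}))
```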
Session 10: Build Agents with LangFlow
Using LangFlow to wire agents visually
Creating tools and chaining agent steps
Building a multi-turn conversational agent
Exporting and reusing LangFlow configs
Week 6: Agent Orchestration & Protocols
Session 11: Multi-Agent Workflows with CrewAI
CrewAI: roles, tasks, and collaborative agents
Assigning memory and tools to agents
Simulating agent-based research and review
Evaluating multi-agent output quality
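A small CrewAI sketch with two collaborating agents (a researcher and a reviewer); the roles, goals, and task wording are illustrative, and an LLM API key is assumed to be configured in the environment.

```python
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research Analyst",
    goal="Collect key facts about a topic",
    backstory="You dig up concise, well-sourced findings.",
)
reviewer = Agent(
    role="Reviewer",
    goal="Check the research for gaps and summarize it",
    backstory="You are a critical editor who values accuracy.",
)

research_task = Task(
    description="Research the main applications of RAG in healthcare.",
    expected_output="Five bullet points with one-line explanations.",
    agent=researcher,
)
review_task = Task(
    description="Review the research output and produce a short summary.",
    expected_output="A one-paragraph summary noting any weak points.",
    agent=reviewer,
)

crew = Crew(agents=[researcher, reviewer], tasks=[research_task, review_task])
print(crew.kickoff())
```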
Session 12: MCP and A2A Protocols
Model Context Protocol (MCP):
Understanding MCP architecture and components
Connecting AI models to external data sources
Building MCP servers and clients (server sketch after this list)
Use cases: database integration, API connections, tool access
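A minimal MCP server sketch using the FastMCP helper from the official Python SDK, exposing one tool and one resource to any MCP-capable client; the order-lookup data is stubbed for illustration.

```python
from mcp.server.fastmcp import FastMCP

# A tiny MCP server exposing one tool and one read-only resource.
mcp = FastMCP("orders-demo")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Look up the status of an order (stubbed data for illustration)."""
    return f"Order {order_id}: shipped"

@mcp.resource("orders://recent")
def recent_orders() -> str:
    """Expose read-only data the model can pull into its context."""
    return "order-41: delivered\norder-42: shipped"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```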
Agent-to-Agent (A2A) Protocols:
Inter-agent communication standards
Message passing and coordination patterns
Protocol design for multi-agent systems
Integration with existing agent frameworks
Week 7: Observability & Evaluation
Session 13: Observability with LangChain + LangSmith
Logging prompt input/output, latency, and feedback
Tracing tool and memory steps in LangChain apps
Evaluating hallucinations and grounding
LangSmith dashboard walkthrough
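A sketch of instrumenting a plain function with LangSmith tracing; the project name and API key are placeholders, and LangChain components are traced automatically once the same environment variables are set.

```python
import os
from langsmith import traceable
from openai import OpenAI

# LangSmith picks these up from the environment (values are placeholders).
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-key>"
os.environ["LANGCHAIN_PROJECT"] = "rag-demo"

client = OpenAI()

@traceable(name="answer_question")  # logs inputs, outputs, and latency to LangSmith
def answer_question(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(answer_question("What is retrieval-augmented generation?"))
```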
Session 14: LangWatch for Evaluation & Feedback Loops
LangWatch vs LangSmith comparison
Live monitoring, prompt scoring, and user feedback
Custom evaluation functions (RAGAS, MT-Bench); see the sketch after this session
Integrating with continuous improvement loop
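A hedged sketch of scoring one RAG interaction with RAGAS metrics; the column names and metric imports differ somewhat across RAGAS versions, and the record shown is invented for illustration.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# One evaluation record: the question, the RAG answer, and the retrieved contexts.
records = {
    "question": ["How long do refunds take?"],
    "answer": ["Refunds are processed within 5 business days."],
    "contexts": [["Refunds are processed within 5 business days of approval."]],
}

results = evaluate(Dataset.from_dict(records), metrics=[faithfulness, answer_relevancy])
print(results)
```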
Week 8: Production Readiness & Capstone
Session 15: Deployment Strategies & Guardrails
Deployment Options:
Containerization with Docker
Cloud deployment: AWS, Azure, GCP
Serverless architectures (Lambda, Cloud Functions)
API gateway setup and management
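Before containerizing or placing an API gateway in front, the application is typically wrapped as an HTTP service; a minimal FastAPI sketch is shown below, with the endpoint name and model choice as assumptions.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI(title="genai-service")
client = OpenAI()

class AskRequest(BaseModel):
    question: str

@app.post("/ask")
def ask(req: AskRequest) -> dict:
    """Single endpoint the gateway or load balancer routes to."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": req.question}],
    )
    return {"answer": response.choices[0].message.content}

# Run locally:  uvicorn app:app --host 0.0.0.0 --port 8000
# The same image can run in Docker, ECS/Cloud Run, or behind an API gateway.
```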
Scaling & Performance:
Load balancing and auto-scaling
Caching strategies for LLM responses (see the sketch after this list)
Rate limiting and quota management
Cost optimization techniques
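A framework-free sketch of two of these ideas: an in-memory response cache keyed by a prompt hash, and a token-bucket rate limiter; `call_llm` stands in for your actual model call.

```python
import hashlib
import time

_cache: dict[str, str] = {}

def cached_call(prompt: str, call_llm) -> str:
    """Return a cached response for identical prompts; otherwise call the model."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_llm(prompt)
    return _cache[key]

class RateLimiter:
    """Token bucket: at most `rate` requests per second, bursting up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.updated = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = RateLimiter(rate=5, capacity=10)
if limiter.allow():
    print(cached_call("What is RAG?", call_llm=lambda p: f"(model answer to: {p})"))
```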
AI Safety Mechanisms:
Content filtering and moderation
Prompt injection prevention (see the sketch after this list)
Output validation and sanitization
Implementing Guardrails AI or NeMo Guardrails
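A deliberately simple, framework-free sketch of input and output checks; real deployments would rely on Guardrails AI, NeMo Guardrails, or a moderation API rather than these hand-rolled patterns.

```python
import re

# Rough heuristics for illustration only.
INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal .*system prompt",
]
PII_PATTERN = r"\b\d{3}-\d{2}-\d{4}\b"  # US SSN-like strings

def check_prompt(user_input: str) -> str:
    """Reject inputs that look like prompt injection attempts."""
    if any(re.search(p, user_input, re.IGNORECASE) for p in INJECTION_PATTERNS):
        raise ValueError("Possible prompt injection detected")
    return user_input

def sanitize_output(model_output: str) -> str:
    """Redact PII-looking strings before returning the answer to the user."""
    return re.sub(PII_PATTERN, "[REDACTED]", model_output)

safe_input = check_prompt("What is our refund policy?")
print(sanitize_output("The refund contact's SSN is 123-45-6789."))
```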
Security Best Practices:
API key management and rotation
Data privacy and compliance (GDPR, HIPAA)
Audit logging and access controls
Testing for adversarial inputs
Session 16: Capstone Project
Choose a domain: Healthcare, Legal, HR, or Finance
Build end-to-end GenAI application: Embed → RAG → Agent → Guardrails → Observability
Prepare comprehensive documentation: Architecture diagram, GitHub repository, deployment guide
Development, testing, and deployment phase
Live presentation and demo
Final deliverables: Deployed application + GitHub repo + walkthrough video
Peer review and feedback session
Course completion and certification
Bonus Session: ChatGPT Agent Kit
Session 17: Building with ChatGPT Agent Kit
ChatGPT Agent Kit Overview:
Introduction to OpenAI's Agent Kit framework
Understanding the Agent Kit architecture
Pre-built components and templates
Building Custom GPTs:
Creating custom actions and functions
Integrating external APIs and tools
Managing conversation context and memory
Advanced Agent Patterns:
Multi-step reasoning workflows
Tool chaining and orchestration
Error handling and fallback strategies
Production Deployment:
Publishing custom GPTs
Monitoring usage and performance
Iterating based on user feedback
Assignment: Build and deploy a custom GPT agent using ChatGPT Agent Kit
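Custom GPT actions themselves are configured in the ChatGPT builder via an OpenAPI schema, but the underlying pattern is tool/function calling; a minimal sketch with the OpenAI Python SDK is shown below, where the order-lookup tool is hypothetical.

```python
import json
from openai import OpenAI

client = OpenAI()

# A custom action described as a JSON-schema tool (hypothetical order-lookup API).
tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of an order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Where is order 42?"}],
    tools=tools,
)

# The model returns a structured tool call that your backend executes.
call = response.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```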

