Advanced Data Science Techniques builds on foundational data science concepts, focusing on advanced methods for data analysis using Google Cloud Platform (GCP). The course offers 50 hours of in-person or remote training, combining lectures, guided labs, projects, and research-paper discussions. Flexible attendance options and comprehensive support ensure a seamless learning experience. (If you pay upfront by check or Zelle, you get a discounted rate of $3,200.)
STARTS SOON ON TUESDAY, Jan 27, 2026!
An eagerly awaited twelve-week course on fine-tuning large language models, with an emphasis on supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF), and production-grade MLOps. The last two weeks are reserved for capstone work and presentations. The course assumes familiarity with deep learning, basic transformer architectures, and Python; it does not assume prior experience with reinforcement learning. You will work in teams of 4-6 engineers in an environment that reproduces the fast-paced, collaborative development culture at the heart of the Silicon Valley spirit. Each team gets a meeting room with a state-of-the-art multimedia setup and a powerful AI server with an RTX 4090 GPU for its exclusive use during the boot camp. Both are also accessible remotely, so teams can continue working on them throughout the work week. You will also have unrestricted, admin-level access to a massive 4-GPU AI server, on a time-shared basis, for the full three-month duration of the course.
STARTS SOON — DATE TBD!
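To give a preview of the style of the lab work, below is a minimal supervised fine-tuning (SFT) sketch built on the standard Hugging Face Trainer. The model name, the dataset file, and every hyperparameter are illustrative placeholders, not the course's actual lab code.

```python
# A minimal SFT sketch with the Hugging Face Trainer. The model ("gpt2"),
# the dataset file ("train.txt"), and all hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # small stand-in; the labs would use a larger LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Any corpus of instruction/response text works; one example per line here.
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-out",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False gives the standard next-token (causal LM) training objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```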
Elevate your fine-tuning expertise with our immersive hands-on course designed for AI practitioners. Begin with the foundational concepts of transfer learning and pre-trained models, then dive into fine-tuning methodologies for transformers and other state-of-the-art architectures. Explore open-source tooling such as the Hugging Face ecosystem and its PEFT library, including techniques like LoRA, for scalable and efficient fine-tuning. Master prompt tuning, adapter tuning, and hyperparameter optimization to tailor models for domain-specific tasks. Learn strategies for low-resource fine-tuning, including few-shot and zero-shot learning, and address overfitting with advanced regularization methods. Discover fine-tuning approaches for diverse modalities, including text, images, and multimodal data, while exploring domain-adaptation strategies for out-of-distribution datasets. Implement advanced training strategies such as quantization-aware training, curriculum learning, and differential privacy. By the end of the course, you’ll have the practical knowledge to fine-tune models for real-world applications, ensuring optimal performance and efficiency tailored to your unique datasets.
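As a taste of the parameter-efficient methods on the syllabus, here is a minimal LoRA sketch using the Hugging Face peft library. Again, the base model and the adapter hyperparameters are illustrative assumptions rather than prescribed settings.

```python
# A minimal LoRA sketch with Hugging Face peft. The base model ("gpt2") and
# all adapter hyperparameters below are illustrative, not prescribed values.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the adapter output
    target_modules=["c_attn"],  # GPT-2's attention projection; varies by model
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Freeze the base weights and inject trainable low-rank adapters.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the small adapter matrices are trained, a setup like this fits comfortably on a single RTX 4090 such as the one provided to each team.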






