Building with Large Language Models: Hands-on Science
LLM Course Page
Dr. Alexander (Sasha) Apartsin
Natural Language Processing (NLP) is a branch of AI that enables computers to understand, analyze, and generate language, both natural (e.g., English) and synthetic (e.g., programming languages). Language lies at the heart of intelligence and connects to sensory data such as images and audio.
Large Language Models (LLMs) are pretrained on vast quantities of language data, enabling them to understand and generate human-like text and to power applications such as ChatGPT. Many NLP tasks that once required complex, dedicated algorithms and models are now effectively solved by pretrained and fine-tuned LLMs.
The course takes a code-first approach, pairing every concept with extensive hands-on examples that use modern libraries, including OpenAI, LangChain, HuggingFace Transformers, and LangGraph. It equips students with a practical toolbox of ideas and tools for rapidly building LLM-based models and applications.
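To give a flavor of this code-first style, the sketch below shows how a single pretrained model handles a classic NLP task through the HuggingFace Transformers pipeline API. The model checkpoint named here is an illustrative assumption, not a course requirement.

```python
from transformers import pipeline

# Sentiment analysis with a pretrained, fine-tuned checkpoint.
# The model name is an illustrative choice; any sequence-classification
# checkpoint from the Hugging Face Hub would work here.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("This course pairs every concept with runnable code."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

A task that once needed a dedicated feature pipeline and a custom classifier reduces to a few lines around a pretrained model.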
The syllabus is designed so that students can begin their projects while still learning the material, enriching them with new concepts as the course progresses. Each team gives several in-class presentations for discussion and feedback.
As standard tasks are increasingly handled by AI and mature libraries, the expectations placed on professional developers shift toward innovation and rapid integration. Accordingly, a key requirement for student course projects is to tackle new use cases by generating unique data and training or fine-tuning task-specific language models.
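As a rough sketch of what fine-tuning a task-specific model involves, the snippet below adapts a small pretrained encoder to a classification task with the HuggingFace Trainer API. The dataset and model names are placeholder assumptions; in a project, students would substitute their own generated data.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder data: a project would load its own generated dataset instead.
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Convert raw text into fixed-length token IDs for the model.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

# A small pretrained encoder with a fresh two-class classification head.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="checkpoints", num_train_epochs=1,
                           per_device_train_batch_size=8),
    # A small subsample keeps this sketch quick to run end to end.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()
```

The same loop carries over to project data: swap in the team's dataset and a label count, and the pretrained weights do most of the work.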
The list below presents the complete set of subjects; coverage in individual course instances may vary with the course format, students' backgrounds, and class dynamics.