
    Quick Facts

    Medium Of Instructions: English
    Mode Of Learning: Self Study
    Mode Of Delivery: Video and Text Based

    Courses and Certificate Fees

    Fees Information: INR 1000
    Certificate Availability: Yes
    Certificate Providing Authority: IIT Delhi

    The Syllabus

    • Course Introduction
    • Introduction to NLP (NLP Pipeline, Applications of NLP)

    • Introduction to Statistical Language Models
    • Statistical Language Models: Advanced Smoothing and Evaluation
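The smoothing idea in this module can be illustrated with its simplest instance, add-one (Laplace) smoothing of a bigram model. A minimal pure-Python sketch over a toy corpus (the corpus and function name are illustrative, not course material):

```python
from collections import Counter

def bigram_prob_laplace(corpus, w1, w2):
    """P(w2 | w1) with add-one (Laplace) smoothing over a toy corpus."""
    tokens = corpus.split()
    vocab = set(tokens)
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    # Add-one smoothing: every bigram count is incremented by 1,
    # so the denominator grows by the vocabulary size |V|.
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab))

corpus = "the cat sat on the mat"
# Seen bigram ("the", "cat"): (1 + 1) / (2 + 5)
# Unseen bigram ("cat", "on"): (0 + 1) / (1 + 5), non-zero thanks to smoothing
```

Unseen bigrams get non-zero probability, which is the whole point; the "Advanced Smoothing" lecture presumably covers refinements such as Kneser-Ney.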

    • Introduction to Deep Learning (Perceptron, ANN, Backpropagation, CNN)
    • Introduction to PyTorch

    Word Representation
    • Word2Vec, fastText
    • GloVe
    • Tokenization Strategies
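Dense word representations such as Word2Vec and GloVe are typically compared with cosine similarity. A dependency-free sketch with made-up 3-dimensional vectors (the embeddings here are invented for illustration, not trained output):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense word vectors (plain lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy embeddings: semantically close words get nearby vectors.
king  = [0.9, 0.8, 0.1]
queen = [0.8, 0.9, 0.1]
apple = [0.1, 0.2, 0.9]
# cosine(king, queen) exceeds cosine(king, apple)
```

Real Word2Vec, fastText, or GloVe vectors are hundreds of dimensions, but the comparison works identically.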

    Neural Language Models
    • CNN, RNN
    • LSTM, GRU
    • Sequence-to-Sequence Models, Greedy Decoding, Beam search
    • Other Decoding Strategies: Nucleus Sampling, Temperature Sampling, Top-k Sampling
    • Attention in Sequence-to-Sequence Models
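The decoding strategies listed above differ mainly in how the next-token distribution is sharpened and filtered before sampling. A minimal pure-Python sketch combining temperature scaling with top-k filtering (function and variable names are illustrative; nucleus sampling would instead keep tokens up to a cumulative-probability threshold):

```python
import math
import random

def sample_next(logits, temperature=1.0, top_k=None, rng=random):
    """Sample a token index from raw logits with temperature and optional top-k."""
    # Temperature scaling: < 1 sharpens the distribution, > 1 flattens it.
    scaled = [l / temperature for l in logits]
    idx = list(range(len(scaled)))
    if top_k is not None:
        # Top-k filtering: keep only the k highest-scoring tokens.
        idx = sorted(idx, key=lambda i: scaled[i], reverse=True)[:top_k]
    # Softmax over the surviving candidates (max-subtraction for stability).
    m = max(scaled[i] for i in idx)
    exps = [math.exp(scaled[i] - m) for i in idx]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(idx, weights=probs, k=1)[0]

# top_k=1 degenerates to greedy decoding: always the argmax token.
```

With `top_k=1` this reduces to greedy decoding, which makes the relationship between the strategies concrete.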

    Introduction to Transformers
    • Self and Multi-Head Attention
    • Positional Encoding and Layer Normalization
    • Implementation of Transformers using PyTorch
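The self-attention at the core of this module reduces to softmax(QK^T / sqrt(d)) V. The course implements it in PyTorch; the following is a dependency-free sketch on plain Python lists that trades efficiency for transparency:

```python
import math

def attention(Q, K, V):
    """Scaled dot-product attention on plain lists of row vectors."""
    d = len(K[0])  # key dimension, used for the 1/sqrt(d) scaling
    out = []
    for q in Q:
        # Similarity of this query against every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        # Softmax turns scores into attention weights summing to 1.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Output: attention-weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Multi-head attention runs several such maps in parallel on learned projections of Q, K, and V and concatenates the results.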

    • Pre-Training Strategies: ELMo, BERT (Encoder-only Model)
    • Pre-Training Strategies: Encoder-decoder and Decoder-only Models
    • Introduction to HuggingFace

    • Instruction Tuning
    • Prompt-based Learning
    • Advanced Prompting Techniques and Prompt Sensitivity
    • Alignment of Language Models with Human Feedback (RLHF)

    • Open-book question answering: The case for retrieving from structured and unstructured sources; retrieval-augmented inference and generation
    • Retrieval augmentation techniques
    • Key-value memory networks in QA for simple paths in KGs
    • Early HotPotQA solvers, pointer networks, reading comprehension 
    • REALM, RAG, FiD, Unlimiformer
    • KGQA (e.g., EmbedKGQA, GrailQA)

    Knowledge graphs (KGs)
    • Representation, completion 
    • Tasks: Alignment and isomorphism
    • Distinction between graph neural networks and neural KG inference

    • Parameter-efficient Adaptation (Prompt Tuning, Prefix Tuning, LoRA) 
    • An Alternate Formulation of Transformers: Residual Stream Perspective
    • Interpretability Techniques
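The core idea behind LoRA, one of the parameter-efficient methods listed above, is to freeze the pretrained weight matrix and learn only a low-rank additive update. A shape-only sketch with plain-list matrix multiplies (names and the scaling convention are illustrative; real implementations scale the update by alpha/r and operate on PyTorch tensors):

```python
def lora_forward(x, W, A, B, alpha=1.0):
    """y = x @ (W + alpha * B @ A).

    W is d_in x d_out and stays frozen; B (d_in x r) and A (r x d_out)
    are the only trained parameters, with rank r much smaller than d_in, d_out.
    """
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]
    BA = matmul(B, A)  # d_in x d_out low-rank update
    W_eff = [[W[i][j] + alpha * BA[i][j] for j in range(len(W[0]))]
             for i in range(len(W))]
    return matmul([x], W_eff)[0]

# With B initialised to zeros the adapted layer starts identical to the
# frozen layer, which is how LoRA training typically begins.
```

Prompt tuning and prefix tuning instead leave all weights frozen and learn extra input vectors, but share the same goal of adapting a large model with few trainable parameters.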

    • Overview of recently popular models such as GPT-4, Llama-3, Claude-3, Mistral, and Gemini
    • Ethical NLP – Bias and Toxicity
    • Conclusion
