❗The content presented here is sourced directly from the YouTube platform. For comprehensive course details, including enrollment information, simply click the 'Go to class' link on our website.
Updated on [February 21st, 2023]
What does this course cover?
(Please note that the following overview content is from the original platform)
Introduction.
3-Gram Model (Shannon 1951).
Recurrent Neural Nets (Sutskever et al 2011).
Big LSTM (Jozefowicz et al 2016).
Transformer (Liu and Saleh et al 2018).
GPT-2: Big Transformer (Radford et al 2019).
GPT-3: Very Big Transformer (Brown et al 2020).
GPT-3: Can Humans Detect Generated News Articles?.
Why Unsupervised Learning?.
Is there a Big Trove of Unlabeled Data?.
Why Use Autoregressive Generative Models for Unsupervised Learning?.
Unsupervised Sentiment Neuron (Radford et al 2017).
(Radford et al 2018).
Zero-Shot Reading Comprehension.
GPT-2: Zero-Shot Translation.
Language Model Metalearning.
GPT-3: Few Shot Arithmetic.
GPT-3: Few Shot Word Unscrambling.
GPT-3: General Few Shot Learning.
IGPT (Chen et al 2020): Can we apply GPT to images?.
IGPT: Completions.
IGPT: Feature Learning.
Isn't Code Just Another Modality?.
The HumanEval Dataset.
The Pass@k Metric.
Codex: Training Details.
An Easy HumanEval Problem (pass@1 ≈ 0.9).
A Medium HumanEval Problem (pass@1 ≈ 0.17).
A Hard HumanEval Problem (pass@1 ≈ 0.005).
Calibrating Sampling Temperature for pass@k.
The Unreasonable Effectiveness of Sampling.
Can We Approximate Sampling Against an Oracle?.
Main Figure.
Limitations.
Conclusion.
Acknowledgements.
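The outline opens with the 3-gram model (Shannon 1951), the simplest generative language model in the lecture's progression. A minimal sketch of the idea — count which token follows each pair of tokens, then sample proportionally to those counts — might look like the following (the tiny corpus and function names are illustrative, not from the talk):

```python
import random
from collections import Counter, defaultdict

def train_trigram(tokens):
    """Count next-token frequencies for each (w1, w2) context."""
    counts = defaultdict(Counter)
    for w1, w2, w3 in zip(tokens, tokens[1:], tokens[2:]):
        counts[(w1, w2)][w3] += 1
    return counts

def generate(counts, w1, w2, length=10, seed=0):
    """Sample a continuation of (w1, w2), one token at a time."""
    rng = random.Random(seed)
    out = [w1, w2]
    for _ in range(length):
        nxt = counts.get((w1, w2))
        if not nxt:  # unseen context: stop generating
            break
        # sample the next token proportionally to observed counts
        w3 = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        out.append(w3)
        w1, w2 = w2, w3
    return out

corpus = "the cat sat on the mat and the cat ran".split()
model = train_trigram(corpus)
print(" ".join(generate(model, "the", "cat")))
```

Even this toy version shows the core limitation the lecture builds on: the model only ever reproduces local patterns it has literally seen, which is what motivates the move to recurrent nets and Transformers later in the outline.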
We consider the value of this course from multiple angles, and summarize it for you along three dimensions: personal skills, career development, and further study:
(Kindly be aware that our content is optimized by AI tools and carefully moderated by our editorial staff.)
This course introduces unsupervised learning as an approach to natural language processing. It surveys generative models for text and language understanding, including 3-gram models, recurrent neural nets, big LSTMs, the Transformer, GPT-2, and GPT-3, along with results such as the Unsupervised Sentiment Neuron and zero-shot reading comprehension. It also explores applying GPT-style models to images through iGPT, and examines the HumanEval dataset and the pass@k metric for evaluating generated code.
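The pass@k metric mentioned above estimates the probability that at least one of k sampled completions for a problem passes its unit tests. Given n generated samples of which c are correct, the standard unbiased estimator is 1 − C(n−c, k)/C(n, k); a short sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn without replacement from n generations (c of which
    are correct) passes. Computes 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # fewer incorrect samples than k: some draw must include a correct one
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 200 samples, 20 correct, k=1 -> a 10% chance a single draw passes
print(round(pass_at_k(200, 20, 1), 4))  # → 0.1
```

Note that for k = 1 this reduces to the fraction of correct samples, c/n, while larger k rewards generating many diverse attempts — which is why sampling temperature needs to be calibrated per k, as the outline's "Calibrating Sampling Temperature" section discusses.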
Possible Development Paths:
Learners of this course can develop their skills in natural language processing and unsupervised learning. They can use the algorithms and techniques covered here to build generative models for text and language understanding, and extend GPT-style models to images.
Learning Suggestions:
Learners of this course should also consider related topics such as machine learning, deep learning, natural language processing, and computer vision. They should also explore other datasets and metrics for measuring the performance of their models. Additionally, they should practice applying the algorithms and techniques learned in this course to real-world problems.