Large Language Models
Learn about the dangers and biases of ChatGPT and other chat-based applications.
Large language models (LLMs) are all the rage at the moment. Companies like Google, OpenAI, and Amazon have invested enormous sums into the development and training of LLMs.
In simple terms, LLMs are extremely large neural networks built for NLP tasks, typically transformers whose attention layers let the model weigh different parts of a sequence and its context against one another. They have an enormous number of parameters (in the billions or even hundreds of billions) and require vast amounts of training data. LLMs encode information from the training set in those parameters and draw on it at inference time. Pretrained models are available for the public to use out of the box or to adapt to a specific task through a process called fine-tuning, as the sketch below illustrates.
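To make the out-of-the-box vs. fine-tuning distinction concrete, here is a minimal sketch. It assumes the Hugging Face transformers library is installed, and the `distilbert-base-uncased` checkpoint is purely an illustrative choice; any pretrained model would do.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library.
# Out of the box: a pretrained model is used as-is, with no further training.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pretrained model
print(classifier("LLMs encode knowledge from their training data."))

# Fine-tuning: start from pretrained weights and continue training on
# task-specific labeled data (e.g., with transformers' Trainer API).
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased"  # illustrative choice of pretrained model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
# ... a training loop over your labeled dataset would go here ...
```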
The rise of ChatGPT, Google Bard, and other chat-based interfaces has created an explosion in popularity for LLMs, but it has also introduced countless risks that the companies behind these models have largely ignored. Because most firms today attempt to solve NLP problems with LLMs, this lesson covers those risks.
“Hallucinations”
The problem with chat-based interfaces built on LLMs is that they give the impression that the model actually understands human input. It does not. An LLM is simply very good at converting text into vectors and then using the information encoded in its parameters to predict the next word in the sequence, as the sketch below illustrates.
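The following sketch shows next-word prediction in its barest form. It again assumes the Hugging Face transformers library; the `gpt2` checkpoint and the prompt are illustrative choices, not part of this lesson.

```python
# A minimal sketch of how an LLM "answers": it turns text into vectors,
# scores every token in its vocabulary, and emits a statistically likely one.
# Assumes the Hugging Face `transformers` library; "gpt2" is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# Take the highest-scoring token after the last input position.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))  # a likely continuation, not a "known" fact
```

The model outputs whichever continuation is statistically likely given its training data; when that likely continuation happens to be false, the result is a hallucination.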