How ChatGPT Works Explained Simply

ChatGPT and other large language models (LLMs) work by taking an input prompt and predicting the next "token," the most likely next element, using patterns learned from massive amounts of pre-training data. The model represents language as high-dimensional vectors, which are essentially mathematical encodings of words and their relationships. Several mechanisms weigh the context of the input to score candidate tokens toward the most likely desired output. This process repeats one token at a time until the completed response is printed on screen. LLMs are very good at guessing the next token in a sequence, but they cannot reason or "guess" at genuinely new information.
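To make that pipeline concrete, here is a minimal Python sketch of next-token prediction. Everything in it is a toy assumption: the six-word vocabulary, the two-dimensional "embeddings," the hand-picked numbers, and the simple averaging used in place of a real transformer's attention mechanism. A real model learns billions of weights from training data; the sketch only shows the shape of the computation, turning the prompt into vectors, scoring every candidate token, and converting the scores into probabilities.

```python
import math

# Toy vocabulary with hand-picked "embeddings" (illustration only;
# real models learn vectors with thousands of dimensions from
# massive training corpora).
EMBEDDINGS = {
    "the": [0.1, 0.3], "cat": [0.9, 0.2], "sat": [0.4, 0.8],
    "on":  [0.2, 0.5], "mat": [0.7, 0.6], "ran": [0.8, 0.1],
}

def context_vector(tokens):
    """Average the prompt's embeddings, a crude stand-in for the
    attention mechanism a real transformer uses to weigh context."""
    avg = [0.0, 0.0]
    for t in tokens:
        for i, x in enumerate(EMBEDDINGS[t]):
            avg[i] += x / len(tokens)
    return avg

def next_token_distribution(tokens):
    """Score every vocabulary word against the context vector, then
    softmax the scores into a probability distribution."""
    ctx = context_vector(tokens)
    scores = {w: sum(a * b for a, b in zip(ctx, e))
              for w, e in EMBEDDINGS.items()}
    z = sum(math.exp(s) for s in scores.values())
    return {w: math.exp(s) / z for w, s in scores.items()}

prompt = ["the", "cat", "sat"]
dist = next_token_distribution(prompt)
best = max(dist, key=dist.get)
print(f"prompt: {' '.join(prompt)!r} -> most likely next token: {best!r}")
```

Note that this sketch always takes the single most probable token; real chatbots usually sample from the distribution instead, which is why the same prompt can produce different answers on different runs.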