Embeddings, Attention, FNN and Everything You Wanted to Know but Were Afraid to Ask
Introduction

With this article I'm starting a series dedicated to LLMs (Large Language Models), neural networks, and everything related to AI. My goals in writing it are, admittedly, self-serving: I began diving into these topics only relatively recently and ran into a wealth of information, articles, and documents written in fine print, full of complex diagrams and formulas, where by the time you finish one paragraph you've already forgotten what the previous one said.

So here I will try to describe the essence of the subject at a conceptual level, and I promise to avoid, as far as possible, the mathematical formulas and intricate graphs that inevitably make readers want to close the browser tab and head for the nearest liquor store. In short: no formulas (except the simplest ones), no needless complexity, no pretending to be smarter than I am. Your grandmother should be able to understand these articles, and if she can't, it means I've failed at my task. ...