
Hands-On Large Language Models

Language Understanding and Generation

By: Jay Alammar, Maarten Grootendorst
Narrated by: Derek Shoales
Buy for $22.66

Buy for $22.66

AI has acquired startling new language capabilities in just the past few years. Driven by rapid advances in deep learning, language AI systems are able to write and understand text better than ever before. This trend is enabling new features, products, and entire industries. With this book, listeners will learn practical tools and concepts they need to use these capabilities today.

You'll understand how to use pretrained large language models for use cases like copywriting and summarization; create semantic search systems that go beyond keyword matching; and use existing libraries and pretrained models for text classification, search, and clustering.

This book also helps you understand the architecture of Transformer language models that excel at text generation and representation; build advanced LLM pipelines to cluster text documents and explore the topics they cover; build semantic search engines that go beyond keyword search, using methods like dense retrieval and rerankers; explore how generative models can be used, from prompt engineering all the way to retrieval-augmented generation; and gain a deeper understanding of how to train LLMs and optimize them for specific applications using generative model fine-tuning, contrastive fine-tuning, and in-context learning.
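The dense-retrieval idea mentioned above can be sketched briefly. In a minimal sketch (using hypothetical toy vectors in place of a real sentence encoder's output), documents are ranked by embedding similarity rather than keyword overlap:

```python
import numpy as np

# Toy embeddings standing in for a real encoder's output
# (hypothetical vectors, not from any actual model).
doc_embeddings = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.9, 0.2]),
    "getting your money back": np.array([0.8, 0.2, 0.1]),
}

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def dense_retrieve(query_vec, docs, top_k=2):
    # Rank documents by embedding similarity, not keyword overlap.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:top_k]

# A query about refunds retrieves "getting your money back" even though
# the two share no keywords -- the point of going beyond keyword search.
query = np.array([0.85, 0.15, 0.05])
print(dense_retrieve(query, doc_embeddings))
```

In a real pipeline the vectors would come from an embedding model, and a reranker could then rescore the top candidates; this sketch only illustrates the retrieval step.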

©2024 Jay Alammar and Maarten Pieter Grootendorst (P)2024 Ascent Audio
Computer Science, Data Science, Machine Learning, Programming
The author’s style makes this hard to listen to, and the experience is dry.

This book has a lot of interesting details. Unfortunately, the narration spells out every input, token, and training example as literal text: “open parenthetical”, “word one”, “word two”, “comma”, “comma”, “closed parenthetical”. As a result, this is one of those texts that is likely better as a physical book.

open parenthetical, close parenthetical, comma, comma, space


I don't need to hear each quotation mark and space when code fragments are being explained.

This takes up way too much of the book.

The reading of code is done badly.
