Under the hood of every AI application are algorithms that churn through data in their own language, one based on a vocabulary of tokens.
Tokens are tiny units of data that come from breaking down bigger chunks of information. AI models process tokens to learn the relationships between them and unlock capabilities including prediction, generation and reasoning. The faster tokens can be processed, the faster models can learn and respond.
AI factories — a new class of data centers designed to accelerate AI workloads — efficiently crunch through tokens, converting them from the language of AI to the currency of AI, which is intelligence.
With AI factories, enterprises can take advantage of the latest full-stack computing solutions to process more tokens at lower computational cost, creating additional value for customers. In one case, integrating software optimizations and adopting the latest-generation NVIDIA GPUs reduced cost per token by 20x compared with unoptimized processes on previous-generation GPUs — delivering 25x more revenue in just four weeks.
By efficiently processing tokens, AI factories are manufacturing intelligence — the most valuable asset in the new industrial revolution powered by AI.
What Is Tokenization?
Whether a transformer AI model is processing text, images, audio clips, videos or another modality, it will translate the data into tokens. This process is known as tokenization.
Efficient tokenization helps reduce the amount of computing power required for training and inference. There are numerous tokenization methods — and tokenizers tailored for specific data types and use cases can require a smaller vocabulary, meaning there are fewer tokens to process.
For large language models (LLMs), short words may be represented with a single token, while longer words may be split into two or more tokens.
The word darkness, for example, would be split into two tokens, “dark” and “ness,” with each token bearing a numerical representation, such as 217 and 655. The opposite word, brightness, would similarly be split into “bright” and “ness,” with corresponding numerical representations of 491 and 655.
In this example, the shared numerical value associated with “ness” can help the AI model understand that the words may have something in common. In other situations, a tokenizer may assign different numerical representations for the same word depending on its meaning in context.
For example, the word “lie” could refer to a resting position or to saying something untruthful. During training, the model would learn the distinction between these two meanings and assign them different token numbers.
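To make this concrete, here is a minimal Python sketch of greedy subword matching, using the toy vocabulary and token IDs from the darkness/brightness example above. Real tokenizers, such as those based on byte-pair encoding, learn their vocabularies from large corpora rather than using a hand-written table like this:

```python
# Toy subword tokenizer. The vocabulary and IDs are the illustrative values
# from the example above, not a real model's vocabulary.
VOCAB = {"dark": 217, "ness": 655, "bright": 491}

def tokenize(word: str) -> list[tuple[str, int]]:
    """Greedily match the longest known subword at each position."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            piece = word[i:j]
            if piece in VOCAB:
                tokens.append((piece, VOCAB[piece]))
                i = j
                break
        else:
            raise ValueError(f"no subword covers position {i} of {word!r}")
    return tokens

print(tokenize("darkness"))    # [('dark', 217), ('ness', 655)]
print(tokenize("brightness"))  # [('bright', 491), ('ness', 655)]
```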
For visual AI models that process images, video or sensor data, a tokenizer can help map visual inputs like pixels or voxels into a sequence of discrete tokens.
Models that process audio may turn short clips into spectrograms — visual depictions of sound waves over time that can then be processed as images. Other audio applications may instead focus on capturing the meaning of a sound clip containing speech, and use another kind of tokenizer that captures semantic tokens, which represent language or context data instead of simply acoustic information.
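As a rough sketch of the spectrogram approach, the Python snippet below frames a clip of audio, computes a magnitude spectrogram and groups it into fixed-size patches, each of which would become one token. The frame sizes and patch scheme here are illustrative choices, not any specific model's recipe:

```python
import numpy as np

sample_rate = 16_000
audio = np.random.randn(sample_rate)           # 1 second of stand-in audio

# Slice the waveform into overlapping windows: 25 ms frames, 10 ms hop.
frame, hop = 400, 160
frames = np.stack([audio[i:i + frame]
                   for i in range(0, len(audio) - frame, hop)])

# Magnitude spectrogram: one FFT per window, giving a time-frequency image.
spectrogram = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))

# Split the time axis into fixed-size patches; each patch is one token.
patch = 10                                     # frames per token
n_tokens = spectrogram.shape[0] // patch
print(f"{n_tokens} spectrogram tokens from 1 second of audio")
```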
How Are Tokens Used During AI Training?
Training an AI model begins with the tokenization of the training dataset.
Based on the size of the training data, the number of tokens can number in the billions or trillions — and, per the pretraining scaling law, the more tokens used for training, the better the quality of the AI model.
As an AI model is pretrained, it’s tested by being shown a sample set of tokens and asked to predict the next token. Based on whether or not its prediction is correct, the model updates itself to improve its next guess. This process is repeated until the model learns from its mistakes and reaches a target level of accuracy, known as model convergence.
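The loop below is a toy Python sketch of that predict-check-update cycle, using a simple bigram counter in place of a neural network. Real pretraining updates model weights via gradient descent over vast token streams, but the shape of the loop — guess the next token, score the guess, update — is the same:

```python
from collections import defaultdict
import random

# Stand-in token stream (toy IDs reused from the earlier example).
tokens = [217, 655, 491, 655] * 200

# "Model": counts of which token tends to follow which.
counts = defaultdict(lambda: defaultdict(int))

correct = 0
for prev, nxt in zip(tokens, tokens[1:]):
    options = counts[prev]
    # Predict the next token from what has been seen so far.
    guess = max(options, key=options.get) if options else random.choice(tokens)
    if guess == nxt:
        correct += 1
    options[nxt] += 1  # update the model based on the observed next token

print(f"next-token accuracy over the stream: {correct / (len(tokens) - 1):.2%}")
```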
After pretraining, models are further improved with post-training, where they continue to learn on a subset of tokens relevant to the use case where they’ll be deployed. These could be tokens with domain-specific information for an application in law, medicine or business — or tokens that help tailor the model to a specific task, like reasoning, chat or translation. The goal is a model that generates the right tokens to deliver a correct response based on a user’s query — a skill better known as inference.
How Are Tokens Used During AI Inference and Reasoning?
During inference, an AI receives a prompt — which, depending on the model, may be text, image, audio clip, video, sensor data or even gene sequence — that it translates into a sequence of tokens. The model processes these input tokens, generates its response as tokens and then translates it to the user’s expected format.
Input and output languages can be different, such as in a model that translates English to Japanese, or one that converts text prompts into images.
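In code, that inference flow looks roughly like the sketch below. Here `model.next_token`, `tokenizer.encode`/`decode` and `tokenizer.eos_id` are hypothetical placeholders standing in for a real model and tokenizer, not any particular library's API:

```python
def generate(model, tokenizer, prompt: str, max_new_tokens: int = 64) -> str:
    """Schematic autoregressive inference: prompt -> tokens -> tokens -> text."""
    tokens = tokenizer.encode(prompt)       # translate the prompt into tokens
    for _ in range(max_new_tokens):
        nxt = model.next_token(tokens)      # predict one output token
        if nxt == tokenizer.eos_id:         # model signals the response is done
            break
        tokens.append(nxt)                  # feed it back in for the next step
    return tokenizer.decode(tokens)         # translate tokens back to text
```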
To understand a complete prompt, AI models must be able to process multiple tokens at once. Many models have a specified limit, referred to as a context window — and different use cases require different context window sizes.
A model that can process a few thousand tokens at once might be able to process a single high-resolution image or a few pages of text. With a context length of tens of thousands of tokens, another model might be able to summarize a whole novel or an hourlong podcast episode. Some models even provide context lengths of a million or more tokens, allowing users to input massive data sources for the AI to analyze.
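Because input tokens and generated output tokens share the same window, applications often check prompt length before sending a request. A minimal sketch, assuming an illustrative 128,000-token limit:

```python
CONTEXT_WINDOW = 128_000  # illustrative limit; real limits vary by model

def fits_in_context(prompt_tokens: list[int],
                    reserved_for_output: int = 1_000) -> bool:
    """Input and output share the window, so reserve room for the response."""
    return len(prompt_tokens) + reserved_for_output <= CONTEXT_WINDOW
```

When the check fails, applications typically truncate, chunk or summarize the input before retrying.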
Reasoning AI models, the latest advancement in LLMs, can tackle more complex queries by treating tokens differently than before. Here, in addition to input and output tokens, the model generates additional reasoning tokens over minutes or hours as it thinks about how to solve a given problem.
These reasoning tokens allow for better responses to complex questions, just as a person can formulate a better answer given time to work through a problem. The corresponding increase in tokens per prompt can require over 100x more compute compared with a single inference pass on a traditional LLM — an example of test-time scaling, aka long thinking.
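A quick back-of-the-envelope sketch shows where a multiplier like that can come from; the token counts below are illustrative placeholders, not measurements:

```python
# If each generated token costs roughly the same compute, emitting a long
# reasoning trace multiplies the work per prompt accordingly.
standard_output_tokens = 500        # hypothetical single-pass response
reasoning_tokens = 60_000           # hypothetical long-thinking trace

scale = (reasoning_tokens + standard_output_tokens) / standard_output_tokens
print(f"~{scale:.0f}x more tokens generated per prompt")  # ~121x
```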
How Do Tokens Drive AI Economics?
During pretraining and post-training, tokens equate to investment into intelligence, and during inference, they drive cost and revenue. So as AI applications proliferate, new principles of AI economics are emerging.
AI factories are built to sustain high-volume inference, manufacturing intelligence for users by turning tokens into monetizable insights. That’s why a growing number of AI services are measuring the value of their products based on the number of tokens consumed and generated, offering pricing plans based on a model’s rates of token input and output.
Some token pricing plans offer users a set number of tokens shared between input and output. Based on these token limits, a customer could use a short text prompt that takes just a few tokens for the input to generate a lengthy, AI-generated response that took thousands of tokens as the output. Or a user could spend the majority of their tokens on input, providing an AI model with a set of documents to summarize into a few bullet points.
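A simple sketch of how such metering adds up, with made-up per-million-token prices (real services set their own rates, often charging more for output tokens than input tokens):

```python
PRICE_PER_M_INPUT = 0.50    # $ per million input tokens (hypothetical)
PRICE_PER_M_OUTPUT = 1.50   # $ per million output tokens (hypothetical)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request under separate input/output token rates."""
    return ((input_tokens / 1e6) * PRICE_PER_M_INPUT
            + (output_tokens / 1e6) * PRICE_PER_M_OUTPUT)

# Short prompt, long answer vs. long documents, short summary:
print(f"${request_cost(50, 4_000):.4f}")     # chat-style usage
print(f"${request_cost(90_000, 300):.4f}")   # summarization-style usage
```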
To serve a high volume of concurrent users, some AI services also set token limits, the maximum number of tokens per minute generated for an individual user.
Tokens also define the user experience for AI services. Time to first token, the latency between a user submitting a prompt and the AI model starting to respond, and inter-token or token-to-token latency, the rate at which subsequent output tokens are generated, determine how an end user experiences the output of an AI application.
There are tradeoffs involved for each metric, and the right balance is dictated by use case.
For LLM-based chatbots, shortening the time to first token can help improve user engagement by maintaining a conversational pace without unnatural pauses. Optimizing inter-token latency can enable text generation models to match the reading speed of an average person, or video generation models to achieve a desired frame rate. For AI models engaging in long thinking and research, more emphasis is placed on generating high-quality tokens, even if it adds latency.
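Both metrics can be measured directly from a streaming response. A minimal sketch, where `stream_tokens` is a hypothetical generator that yields tokens as the model produces them:

```python
import time

def measure_latency(stream_tokens):
    """Return (time to first token, average inter-token latency) in seconds."""
    start = time.perf_counter()
    # Record a timestamp as each streamed token arrives.
    timestamps = [time.perf_counter() for _ in stream_tokens]
    ttft = timestamps[0] - start
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    itl = sum(gaps) / len(gaps) if gaps else 0.0
    return ttft, itl
```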
Developers have to strike a balance between these metrics to deliver high-quality user experiences with optimal throughput, the number of tokens an AI factory can generate.
To address these challenges, the NVIDIA AI platform offers a vast collection of software, microservices and blueprints alongside powerful accelerated computing infrastructure — a flexible, full-stack solution that enables enterprises to evolve, optimize and scale AI factories to generate the next wave of intelligence across industries.
Understanding how to optimize token usage across different tasks can help developers, enterprises and even end users reap the most value from their AI applications.
Learn more in this ebook and get started at build.nvidia.com.