Artificial intelligence that creates new content, including text, images, code, audio, and video, based on patterns learned from training data rather than retrieving or rearranging existing content.
Generative AI is the capability layer that most of the current AI conversation rests on. It produces new content (text, images, code, audio, video) based on patterns learned during training. You give it a prompt; it generates an output. That output did not exist before: it was not retrieved from a database or assembled from templates.
How it works
Generative models learn statistical relationships between patterns in their training data. A language model learns which words and phrases tend to follow other words and phrases across billions of documents. An image model learns relationships between visual elements and text descriptions. When you prompt the model, it generates output by predicting what should come next based on those learned patterns. The output is original in the sense that it was not copied from a specific source. It is derivative in the sense that it reflects the patterns in its training data.
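The prediction loop above can be sketched with a deliberately tiny model. This is a toy bigram table, not a neural network, and the corpus is invented for illustration, but it shows the same core move: learn which tokens tend to follow which, then generate by repeatedly predicting the next one.

```python
from collections import Counter, defaultdict

# Toy corpus; real models learn from billions of documents, not one line.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Learn which word tends to follow which: a bigram frequency table.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=6):
    """Generate by repeatedly picking the most frequent next word."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no learned continuation for this word
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

Even this toy shows both properties described above: the generated sentence was never in the corpus (original), yet every word pair in it was (derivative). Real models replace the frequency table with a neural network and sample probabilistically, but the generate-by-prediction loop is the same shape.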
What it does well and what it does not
Generative AI excels at drafting, brainstorming, summarizing, translating, reformatting, and producing first-pass content across formats. It struggles with factual accuracy (it can generate plausible but wrong information), consistency across long outputs, and anything that requires real-world knowledge beyond its training data. These limits are not bugs that will be fixed in the next version. They are structural characteristics of how the technology works.
Where it fits in the stack
Generative AI is the engine, not the product. It powers chatbots, content tools, coding assistants, personalization systems, and the reasoning layer of AI agents. Understanding what generative AI can and cannot do is the foundation for evaluating every tool and platform that builds on top of it.