
Vibe Documentation of Requirements. Can AI Permanently Change the Way You Create Documentation?



Introduction

Generative and other forms of artificial intelligence are steadily integrating into our lives and professional routines, becoming part of everyday workflows.

Business analysis is no exception. In particular, my approach to documenting requirements has changed radically over the past 6–9 months. If a year ago working with AI to write specifications was an experiment for me (interesting, but not systematic), today it is simply everyday routine. A significant portion of requirements I no longer write in the classical sense: I dictate them. Generative AI transforms this stream of thoughts and information into a structured, readable, and actionable specification.


This article is about a practice I call Vibe Documentation of Requirements, part of a broader approach, Vibe Business Analysis: business analysis in the reality of generative AI.


And why not? Andrej Karpathy introduced the concept of “vibe coding”: a process where you describe, in natural language or by voice, the “vibe” of what you want to receive, and the LLM generates the required code for you. In this article, you will see how this principle extends to a task as labor-intensive and important as creating requirements documentation. So yes, we now have vibe documentation and vibe analysis.


What does it look like? I open a chat and simply talk through the requirements, describing details and new information, while the AI service turns my thoughts into a ready specification with structure, acceptance criteria, and the necessary level of detail.



(Image: vibe requirements documentation)

Why Voice + AI Is Convenient

One of the key advantages of modern chat tools from large language model providers is the ability to simply speak. No typing, no formatting, no thinking about sentence structure. You just explain what the system should do, and the service converts it into structured text according to your instructions.


For me, the primary tool here is ChatGPT, and the main reason is simple: it supports speech-to-text in both English and my native Ukrainian. Gemini and Claude do not yet support Ukrainian voice input, though communication with them in English works smoothly (still slightly worse than OpenAI’s).


So, sitting at my computer, I press the microphone button, say what I have in mind, and receive the recognized text, which is then processed by the model.


The key principle: you speak, AI documents. This removes a huge barrier, because a significant part of documentation fatigue is not the analysis; it is simply sitting down and writing.

However, voice is only the input. The quality of the result depends on two things: the right prompt and a comprehensive project context.

Let’s break it down.


Preparation: A Prompt That Produces Requirements that Work

What AI services generate without additional instructions is dry, detached from reality, and lacking detail and structure. They do not know your quality standards until you teach them.


To ensure your digital assistant generates high-quality specifications, and not just something that “looks like requirements,” preparation is necessary. The first thing you need is a prompt that describes the final structure of the requirement. Essentially, it is your template, your quality standard.


Here is an approximate structure I use:

  1. General Introduction: a few sentences explaining what the feature is about. Context for someone reading the specification for the first time.

  2. User Story: written in the classic Who / What / Why format. Who the user is, what they want to do, and why. This keeps the focus on value.

  3. Acceptance Criteria: the most complex part, deserving special attention.
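As a minimal sketch of how this structure can become a reusable prompt (the wording and function names below are my own illustration, not a canonical template):

```python
# An illustrative specification-template prompt; the exact wording is an
# assumption, not an official standard.
SPEC_TEMPLATE_PROMPT = """\
You are a business analyst's documentation assistant.
Turn my dictated notes into a requirements specification with exactly
this structure:

1. General Introduction: 2-3 sentences of context for a first-time reader.
2. User Story: classic "As a <who>, I want <what>, so that <why>" format.
3. Acceptance Criteria: concrete, testable expectations covering field
   names, behavioral logic, navigation, data, access rights, edge cases,
   and an explicit Out of Scope list.

Do not invent details I did not mention; mark open questions as TODO.
"""

def build_messages(dictated_notes: str) -> list[dict]:
    """Pair the template prompt with a dictated transcript."""
    return [
        {"role": "system", "content": SPEC_TEMPLATE_PROMPT},
        {"role": "user", "content": dictated_notes},
    ]
```

In a chat tool the same text would simply live in Custom Instructions rather than code, so it is applied to every conversation automatically.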


What Acceptance Criteria Should Contain

For me, acceptance criteria are clear expectations of the system, product, or feature we are building. Not abstract statements like “the system should work correctly,” but concrete descriptions of expected outcomes.


They include:

  • Domain details: field names, specific values, data formats

  • Behavioral logic: what happens under certain actions, conditions, branches

  • Navigation: how users enter and exit the interface

  • Data: what data is displayed, where it comes from, and how it updates

  • Access rights: who has access and under what conditions

  • Edge cases: errors, empty states, atypical behavior

  • Out of Scope: what is explicitly not covered


This raises a natural question: how do you systematically consider all these details without missing anything?


For this, I developed a requirements specification framework: a mental model described in a series of posts on my Ukrainian Telegram channel (and one day I might bring it to LinkedIn). It is not a document template, but a thinking checklist for detailing requirements. The framework consists of five elements representing the requirements of a typical web application. Let’s take a look:

  1. Lists: tables and selectors. Consider data source, sorting, grouping, pagination, export.

  2. Fields: the basic building block of interfaces. Consider data type, default values, input restrictions, validation, and error messages.

  3. Groups: containers such as forms, sections, pages, popups. Consider navigation path, display conditions, states, and empty state.

  4. Filters and Search: mechanisms for narrowing data. Consider filter type, default values, dependencies, and combination logic.

  5. Functional Logic: how everything interacts: navigation, triggers (manual and automatic), processing logic (CRUD, status changes, business rules), import/export.


This framework is not a checklist for ticking boxes, but a way to think about requirements systematically, without missing important things.
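To make the five elements tangible, here is one possible way to encode them as a per-story thinking checklist. This is my own sketch of the idea, not the author's actual framework artifact; the question wording paraphrases the list above:

```python
# The five framework elements as a thinking checklist; the questions are
# illustrative paraphrases of the article, not an official artifact.
FRAMEWORK = {
    "Lists": ["data source?", "sorting?", "grouping?", "pagination?", "export?"],
    "Fields": ["data type?", "default values?", "input restrictions?",
               "validation?", "error messages?"],
    "Groups": ["navigation path?", "display conditions?", "states?",
               "empty state?"],
    "Filters and Search": ["filter type?", "default values?",
                           "dependencies?", "combination logic?"],
    "Functional Logic": ["navigation?", "triggers?", "processing logic?",
                         "import/export?"],
}

def open_questions(covered: set[str]) -> dict[str, list[str]]:
    """Return, per element, the questions a story has not yet answered."""
    return {
        element: [q for q in questions if q not in covered]
        for element, questions in FRAMEWORK.items()
    }
```

Walking such a structure while dictating is exactly the “thinking checklist” use: it prompts the next question rather than producing a document by itself.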


And all of this can and should be built into the prompt. The more precisely you describe the structure of the expected result, the closer the AI-generated output will be to its final form. A significant part of these instructions I keep in the Custom Instructions of the corresponding GPT (or Gem). This way, I not only avoid repeating instructions (which, to put it mildly, would be inefficient to paste or speak each time), but also get answers in the expected format right away, every time I “communicate” with the AI.


Context: The Key to Quality

A prompt with structure is only half the success. The second half is context.

When we talk about a Custom GPT, a Gem, or automation through an API, it is critically important that the tool is filled with project context. Without it, AI will guess, invent, and “hallucinate,” and you will spend more time correcting than you save on generation.


What is context in this case?

This is not a couple of sentences like “We are making a CRM for small businesses.” It is a full description of the project, which can include:

  • BRD, Solution Vision, or Business Case: the overall vision of the solution, its boundaries, key modules, constraints, etc.

  • Initial list of requirements: top- and mid-level structure, decomposition into epics and features

  • Dependencies, components, and integrations: systems with which the product interacts and exchanges data.


The tasks of initial decomposition, defining scope, and creating general documents sit at a higher level and must be solved earlier, before the project starts. Here we are talking about the active development phase, when the overall scope is already defined and your task is to detail specific user stories into full specifications. That said, once, during a presale, I spent two hours dictating what was happening across 700 screens of the customer’s design (everything he had on hand) and got a very high-quality decomposition, so vibe documentation works there too.


In general, the better the context you provide to AI, the fewer iterations you will need to get a result that can be taken into work right away.
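When the same idea is automated through an API rather than a Custom GPT, the context travels inside the request itself. A minimal sketch (the document names and the labeling scheme are my own assumptions) of assembling context documents plus a dictated transcript into chat messages:

```python
def build_messages_with_context(
    context_docs: dict[str, str], dictated_notes: str
) -> list[dict]:
    """Assemble project context plus a dictated transcript into chat messages.

    Keys are document names (e.g. Markdown files exported from Confluence);
    the [NAME] labels follow the markup tip used for navigation in context.
    """
    context = "\n\n".join(
        f"[{name.upper()}]\n{text}" for name, text in context_docs.items()
    )
    return [
        {"role": "system",
         "content": "You write requirements specifications.\n\n"
                    "Project context:\n" + context},
        {"role": "user", "content": dictated_notes},
    ]

# With the openai Python client, the call would then look roughly like:
#   client.chat.completions.create(model="gpt-4o", messages=messages)
```

The structure mirrors what a Custom GPT does behind the scenes: system-level context once, dictated input per conversation.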


(Image: context for requirements generation)

Tips on working with context

It is not enough to simply upload documents into a Custom GPT; it is important that AI can work with them effectively.


The content side:

  • Structure documents clearly: use headings, numbering, and separation into sections. AI navigates a document with a logical hierarchy much better than a continuous stream of text.

  • Name entities consistently: if the source document calls a module “user management,” do not call it the “users module” in conversations while working with requirements. Consistent terminology reduces “hallucinations” and “misunderstandings” during generation.

  • Keep context up to date: outdated information is often worse than none. If requirements changed, update the context; otherwise AI will generate specifications based on stale information.

  • Do not overload: if you upload everything at once (150 stories, minutes of 20 meetings, correspondence), AI can “get lost” in priorities. It is better to give the main core of context (vision + scope + requirements structure) and add details in specific conversations.

  • Add a glossary: even a short list of key project terms with explanations significantly improves generation quality, especially if the project has domain specificity.


The technical side:

  • Format matters: simple text formats (.md, .txt) work better than complex .docx or .pdf with tables and attachments. If your documentation lives in Confluence or Google Docs, export it to Markdown before uploading.

  • Control size: each service limits the amount of context it accepts (the so-called “context window”). If a document is too large, AI may simply ignore its end. It is better to split one big document into several logical parts than to upload a monolithic 100-page file.

  • Highlight priority information: AI usually “remembers” what stands at the beginning of a document or conversation better. Place the most important things (the glossary, key business rules, constraints) closer to the beginning.

  • Use markup: headings, lists, separators, and Markdown formatting help AI distinguish sections and find relevant information faster. You can even add labels like [SCOPE], [GLOSSARY], [RULES] for navigation.

  • Optimize tokens: the context window is measured in tokens, and their number is limited. Use token counters (for example, from OpenAI) to understand how much “space” your context takes. Clean the text of unnecessary words, shorten where possible (for example, “&” instead of “and”), and remove excessive formatting. This is like refactoring code: you remove noise and leave the essence. But keep balance: the context must remain readable for you, because it still needs to be maintained and updated.

  • Test what AI “sees”: after uploading context, ask a few verification questions: “What modules are in the project?”, “What does the term X mean?” If the answers are inaccurate, the context needs improvement.
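The two sizing tips, splitting documents and watching token budgets, can be combined in a small helper: break an exported Markdown document into sections by its headings and roughly estimate each part's cost. The 4-characters-per-token ratio is a crude rule of thumb for English prose, not an exact count; for exact numbers use a real tokenizer such as OpenAI's tiktoken:

```python
def split_by_headings(markdown: str) -> dict[str, str]:
    """Split a Markdown document into sections keyed by '## ' headings."""
    sections: dict[str, str] = {}
    current = "PREAMBLE"  # text before the first heading
    for line in markdown.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = ""
        else:
            sections[current] = sections.get(current, "") + line + "\n"
    return sections

def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

doc = "Intro text.\n## Scope\nModules A and B.\n## Glossary\nTerm X: user role."
for title, body in split_by_headings(doc).items():
    print(f"{title}: ~{estimate_tokens(body)} tokens")
```

Sections that blow past the budget are candidates for trimming or for uploading as separate files.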


Vibes in Elicitation

It is worth mentioning briefly how to process the results of discovery.


Integrating discovery meetings: at the moment of writing, we have not yet reached full automation of feeding meetings into the requirements generation process. But whether you transcribe meetings or write minutes manually (I must admit that modern services have already learned to do this, and do it well), for now a simple ctrl+c/ctrl+v of this information is enough to update one requirements specification or another.

Our BA meetings often cover information touching several requirements, stories, specifications, or even epics at once. Here I am a supporter of semi-manual input: I generate a list of key agreements and changes, then pick out only what is relevant to a specific requirement and ask the AI to integrate it.


In the same way, you can integrate chats with those same stakeholders.

And if we talk about “vibe documentation,” my scenario is this: I reread my own notes, tell my AI chat “listen, there was a meeting with X and Z, they said that …, integrate this information into the requirements,” and it changes what needs to be changed.


If you need to integrate some existing documentation into requirements, it is still a bit difficult here, because documents are often large and contain a lot that your specification does not need. If you cannot simply dictate the relevant parts, I recommend working with such documents separately. NotebookLM is ideal for reviewing large documents and answering questions strictly within the uploaded files.



Confidentiality and privacy

Working with AI services requires a conscious approach to the data you transmit. Depending on the situation, three simple rules come in handy:

  1. Use corporate AI services: practically all popular LLM providers offer companies tailored plans that guarantee privacy and data security, including protection against models being trained on that data.

  2. Avoid placing in AI chats any data that is genuinely unique (products or features that barely exist on the market yet), constitutes a trade secret (prices, lists of counterparties, etc.), contains personal data (clients, users), or is simply critical for the security of your information systems (API keys, passwords). An important caveat: even if AI companies guarantee you protection, no one guarantees they will not be hacked tomorrow and the data exposed anyway.

  3. Anonymize data. As before, this is especially relevant in private accounts of AI services. Do not use real names of companies, products, people, etc. Replace them with placeholders like “Product A” or “product sponsor John Smith.”
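Rule 3 can be partly automated with a replacement table. A minimal sketch (the names and the mapping below are made up for illustration; a real pipeline needs a human-reviewed, project-specific table, not a hard-coded dict):

```python
import re

# Illustrative mapping of real names to neutral placeholders.
# In practice this table is project-specific and reviewed by a human.
ALIASES = {
    "Acme Corp": "Company A",
    "SuperCRM": "Product A",
    "Jane Doe": "product sponsor John Smith",
}

def anonymize(text: str) -> str:
    """Replace known sensitive names with placeholders before sending to AI."""
    for real, alias in ALIASES.items():
        text = re.sub(re.escape(real), alias, text, flags=re.IGNORECASE)
    return text

note = "Jane Doe from Acme Corp asked to change SuperCRM pricing."
print(anonymize(note))
# -> product sponsor John Smith from Company A asked to change Product A pricing.
```

Simple substitution will miss misspellings and indirect identifiers, so treat it as a first pass, not a guarantee.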


Gray reality

Of course, hallucinations and unnecessary content in answers have not disappeared. Often, after receiving requirements, I dictate changes. Here it is worth explaining exactly what needs to change and where in the already-formed requirements.


Gemini, for example, takes my instructions too literally, rewriting all the requirements instead of a single piece (and overwriting nicely formulated text in the process). ChatGPT makes changes pointwise according to my voice commands, which earns a like from me.


But sometimes understanding never comes, and it is easier to write the thing yourself. Sometimes the dialogue goes the wrong way: the AI “drifts,” loses context and its grasp of concepts, and generates strange things. Fortunately, this happens less and less as models develop.

AI still makes mistakes. For example, I asked ChatGPT to delete a block of acceptance criteria after we moved its content into another section. GPT deleted the block but failed to move several points. I noticed this only because of one of them, and then had to check all twenty. So stay vigilant with these rogues.


And progress does not stand still; many more problems will be solved, for example by instruction files and “skills” like claude.md.


(Image: business analyst and artificial intelligence)

Conclusion

So, can AI permanently change the way documentation is created? Yes, but not instead of you: together with you. Vibe Documentation is not about delegating thinking to the machine; it is about accelerating the formalization of thought and changing how you interact with the computer. You are still responsible for the frame: the structure, completeness, common sense, and business value.


But instead of spending energy on mechanical writing, you concentrate on analysis, clarifications, and decisions. AI does not remove the business analyst’s responsibility; it highlights where that responsibility really lies. And if you learn to work with it systematically, with a prompt, context, and discipline, documentation stops being an exhausting routine and becomes a fast, almost conversational process of thinking.


And how do you document requirements now? Have you tried the vibe approach?




