As artificial intelligence becomes increasingly capable of generating appropriate text, content designers are confronted with an unavoidable question: if AI can produce the very medium we work with, what remains distinctly human about our role?
This question became especially urgent during our recent work designing public-sector digital services. Unlike private-sector work, public service design operates under unique constraints, one of them being that iteration windows are exceptionally small. Products carry the name and credibility of a ministry or government agency, and there is little tolerance for rough drafts or “we’ll fix it later.” Often, the first draft must closely resemble the final version, and we have only one chance to get the language right.
These pressures create a specific vulnerability for content: when time is tight and requirements are ambiguous, copy is often the first element to be compromised. It is not uncommon for prototypes to surface with placeholder text no writer has reviewed, or for production-ready copy to appear seemingly from nowhere: text that neither reflects the intended tone nor aligns with established guidelines. Even in close writer–designer collaboration, copy frequently slips through cracks in the process.
Public-facing language carries institutional weight. Hence, poorly executed copy does more than confuse; it diminishes trust, signals unreliability, and risks misrepresenting the values of the institution it represents. In a world where bad writing can influence perceptions as quickly as bad visuals, the stakes of linguistic precision become high.
These challenges motivated us to explore whether AI could help us ensure copy quality earlier, not by automating writing, but by making our principles present at the very first iteration.
Our experiment began with Vercel’s v0.dev, an AI-assisted prototyping tool capable of generating interface layouts that align with a provided design system. This raised an intriguing question: if a design system can meaningfully influence AI outputs, could a copy guideline do the same? After all, a guideline is a guideline… right?
Well.
We quickly learned it wasn’t that simple.
Like those of most content teams, our copy guidelines were descriptive slide decks: human-readable, full of examples and rationale, but ultimately too implicit for machines. They rarely lived in the actual design process; they sat in folders, referenced occasionally and disconnected from real-time decisions.
Feeding the existing guideline into v0.dev resulted in outputs that were technically coherent yet still inconsistent: English terms appeared unexpectedly, capitalization was unpredictable, terminology drifted, and tone varied across screens.
In contrast, the design system produced far more consistent AI-generated components. This discrepancy made one thing clear: design systems are naturally machine-readable because they are built on explicit patterns, rules, and constraints.
Copy guidelines are not.
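To make that contrast concrete, consider what a design system actually hands to a tool like v0.dev. The snippet below is a generic, invented illustration of design tokens, not our actual design system; the point is simply that every value is explicit and leaves nothing to interpret.

```typescript
// Generic illustration of design tokens (invented values, not our design system).
// Every entry is an explicit, named constraint a generator can apply verbatim.
export const tokens = {
  color: {
    primary: "#0B5FFF",
    textOnPrimary: "#FFFFFF",
  },
  spacing: {
    sm: "8px",
    md: "16px",
  },
  typography: {
    body: { fontSize: "16px", lineHeight: 1.5 },
  },
} as const;
```

A slide that says something like “keep the tone warm but professional” offers a machine, or a hurried teammate, no comparable handle to hold on to.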
This insight led us to collaborate closely with designers and engineers to reimagine our guideline as a structured, logical system. We began identifying linguistic patterns, tightening vague principles, and rewriting rules so they resembled conditional statements:
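A rule in this style might read, “if the text appears on a button, use sentence case and end without a period.” The sketch below shows how such rules might look once expressed in a machine-readable form; it is a hypothetical illustration in TypeScript, and the rule IDs, terms, and examples are invented, English-language stand-ins rather than quotations from our actual guideline.

```typescript
// Hypothetical sketch: copy rules rewritten as explicit conditions.
// Rule IDs, terms, and examples are invented for illustration.
type CopyRule = {
  id: string;
  when: string;   // the condition under which the rule applies
  then: string;   // the instruction the text must follow
  avoid?: string; // an explicit counter-example, where useful
};

export const copyRules: CopyRule[] = [
  {
    id: "terminology-sign-in",
    when: "the interface refers to authentication",
    then: 'use "sign in", never "log in" or "login"',
    avoid: '"Login to your account"',
  },
  {
    id: "capitalization-buttons",
    when: "the text appears on a button or call to action",
    then: "use sentence case, not title case",
    avoid: '"Submit Your Application"',
  },
  {
    id: "tone-system-errors",
    when: "the message reports an error caused by the system",
    then: "state what happened, take responsibility, and give the next step",
    avoid: '"Oops! Something went wrong :)"',
  },
];
```

Expressed this way, a rule states its condition, its instruction, and a counter-example; there is far less room for a model, or a new team member, to guess.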
In a sense, teaching AI to understand our guidelines became a way of testing whether those guidelines truly worked. In short, we transformed a descriptive document into a semi-technical specification: something a machine could interpret without guessing.
With this newly structured guideline, the outputs improved dramatically. The AI began producing text that adhered to preferred terminology, capitalization rules, and tonal boundaries. It was not perfect, but it was aligned enough that the first draft finally resembled the direction we intended.
Yet the most meaningful lessons from this experiment had little to do with the outputs themselves.
As we incorporated the technical guideline into real workflows, we discovered inconsistencies and gaps we had never noticed before. Certain terms lacked hierarchy. Some tone rules contradicted others. Several patterns we assumed were “obvious” had never been formalized.
The AI did not malfunction; it merely surfaced the ambiguity that had always been present.
This is where treating the guideline as a living document became essential. It's not about getting it perfect upfront. It's about building something that can be continuously fine-tuned based on real outputs and real challenges.
In the public sector, where clarity must exist from the first draft, this transparency is invaluable. The experiment revealed that AI cannot define quality; it can only follow the quality we articulate. Content designers remain the ones who determine what clarity, helpfulness, and appropriateness mean in practice.
Our judgment decides the threshold for “good.”
One unexpected outcome of AI-assisted prototyping was the visibility it gave to product issues. Early AI drafts often exposed vague requirements, unclear user goals, or misaligned flows: contradictions that normally surface much later.
This shifted our role. We were no longer waiting for near-final screens to insert language; we were integrated into early discussions, shaping how features were understood and framed alongside designers and the product team. We shaped the voice before the interface existed.
Instead of answering, “What should this button say?” we began asking, “What is the user actually trying to accomplish here?”
This is the heart of strategic content design.
AI can produce text, but it cannot interpret policy ambiguity, question product assumptions, or identify inconsistencies in user journeys. Content designers, positioned early and informed, can.
Even with structured guidelines, AI would sometimes make confident assertions that were completely wrong and misinterpret context in creative ways.
So we started debating.
These mistakes were not simply errors to correct; they were opportunities to probe the model’s reasoning. Instead of just correcting outputs, we started having conversations with it.
Treating AI as a conversational partner, rather than a passive generator, sharpened our own reasoning. Each exchange revealed the strengths and weaknesses of our guideline and clarified the boundaries of our system.
This process reinforced a core truth: AI can generate options, but humans have the sensibility to decide which option is appropriate, especially in a public-sector context.
And so we return to the original question: if AI can generate text, what remains our role?
Perhaps the more relevant question is:
What is the role evolving into, and how do we actively shape it as AI continues to grow?
There is no fixed answer, not yet. What feels certain today may become irrelevant tomorrow. The landscape evolves too quickly for static definitions. And perhaps it’s not about having all the answers, but about how we keep asking the question.
AI will generate text. That’s not going to change.
What remains open, and what we have the privilege to influence, is how we integrate that text into meaningful, trustworthy communication. How we build systems around it. How we collaborate with it. And how we preserve the human qualities that cannot be automated.
This work is not something any one of us can do alone. It emerges through conversations, experiments, and shared reflection on what remains distinctly human in our craft.
What we know now is this: When we create space for content designers to influence products earlier and more collaboratively, we establish the groundwork for coexistence. And the more we engage with AI, questioning it, refining it, using it to sharpen our own principles and how we bring them to practice, the clearer our role becomes.
AI is not the author. AI is an amplifier.
And coexistence begins when we decide what we want amplified.