"Using ChatGPT feels like hiring a ghostwriter who never sleeps, never complains, and always gets the tone right." This sentiment, shared by an anonymous user, encapsulates the profound impact of AI model style on user experience, a core focus of Laurentia Romaniuk's presentation at OpenAI DevDay. Romaniuk, who leads model behavior at OpenAI, offered a compelling look behind the curtain, detailing the intricate science and philosophical considerations that shape how large language models like ChatGPT communicate. Her unique background as a librarian by trade, coupled with extensive experience at Google, Instacart, and Apple, provides a human-centric lens on the technical and ethical challenges of AI persona.
Romaniuk’s discussion centered on defining "style" in AI through three distinct components: values, traits, and flair. Values represent the non-negotiable principles models must adhere to, such as upholding the law or preventing harm, acting as immutable guardrails. Traits, conversely, are the personality characteristics models exhibit—curiosity, conciseness, warmth, or even sarcasm—which can be actively steered. Finally, flair encompasses the micro-elements like emojis or em dashes that add subtle yet significant nuances to model responses. Together, these elements form a model's "demeanor," influencing how it adapts across specific contexts and ultimately shaping the user's perception.
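One way to picture this taxonomy is as a small configuration object, where values are immutable and traits and flair are steerable. This is a hypothetical sketch for intuition only, not OpenAI's actual representation; all field names here are invented.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Values:
    """Non-negotiable guardrails; frozen so they cannot be changed after creation."""
    uphold_law: bool = True
    prevent_harm: bool = True

@dataclass
class Demeanor:
    """A model's demeanor: fixed values plus steerable traits and flair."""
    values: Values = field(default_factory=Values)
    traits: dict = field(default_factory=lambda: {"warmth": 0.7, "conciseness": 0.5})
    flair: list = field(default_factory=lambda: ["emojis", "em dashes"])

    def steer(self, trait: str, level: float) -> None:
        # Traits can be actively dialed up or down; values cannot.
        self.traits[trait] = max(0.0, min(1.0, level))
```

The frozen/mutable split mirrors the distinction Romaniuk draws: attempting to overwrite a value raises an error, while `steer` freely adjusts a trait within bounds.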
The evolution of AI style, Romaniuk explained, directly correlates with how users interact with these models. Early AI models were often described as "cautious and flat," delivering facts but feeling "aloof." As models became more dynamic and adaptable in tone, user behavior shifted from mere information retrieval to collaborative engagement. Users now employ ChatGPT as a tutor, a coding partner, or even a creative ghostwriter, a testament to the increasing sophistication and relatability of AI output. This shift underscores a critical insight: AI’s communication style is not merely an aesthetic choice; it fundamentally alters how humans perceive and utilize the technology.
The process of instilling style into an AI model unfolds in three stages. It begins with pretraining, where a vast corpus of data imbues the model with a baseline voice, idioms, and breadth of knowledge—essentially "filling the library." This foundational phase is followed by fine-tuning, where human feedback helps refine tone, helpfulness, and safety guardrails. Romaniuk, whose work heavily involves this stage, highlighted the iterative nature of measuring and improving model adherence to these guidelines. The final layer is user-driven: context and prompting. User inputs, system instructions, and personalized settings (like memory features) continuously refine the model's style, allowing for a tailored experience.
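The user-driven layer is the one developers touch directly: style is steered by what goes into the context window. A minimal sketch of how trait settings and remembered facts might be composed into a system message follows; the trait names, phrasing, and helper function are all invented for illustration, not an OpenAI API.

```python
def build_system_prompt(traits: dict, flair: list, memory: list) -> str:
    """Compose a system message that steers style at the prompting layer.

    Illustrative only: real products assemble context from many sources
    (system instructions, custom instructions, memory features).
    """
    lines = ["You are a helpful assistant."]
    for trait, level in traits.items():
        lines.append(f"Be {level} in {trait}.")  # e.g. "Be high in warmth."
    if flair:
        lines.append("You may use " + " and ".join(flair) + " where appropriate.")
    for fact in memory:
        lines.append(f"Remember: {fact}")
    return "\n".join(lines)

# The composed string would be sent as the "system" role message,
# ahead of the user's actual prompt.
prompt = build_system_prompt(
    traits={"warmth": "high", "conciseness": "moderate"},
    flair=["emojis"],
    memory=["The user prefers metric units."],
)
```

Because each layer is additive, the same base model can read as a patient tutor in one session and a terse coding partner in another, purely through this kind of context.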
However, achieving consistent and desirable AI style is fraught with complexities. Romaniuk stressed that humans have a natural tendency to anthropomorphize, reading intention into everything from pets to GPS systems. This inherent human trait means that AI’s demeanor, if not carefully managed, can lead users to attribute unwarranted judgment, expertise, or even agency to the model. The challenge is balancing the desire for a helpful and approachable AI with the need to prevent misinterpretation and maintain clear boundaries.
This balancing act is particularly difficult because, as Romaniuk succinctly put it, "Models don't execute rules. They predict words." Unlike traditional code that follows explicit instructions, large language models approximate patterns learned from their training data. This statistical nature makes achieving perfectly consistent, steerable, and reliable behavior across billions of contexts an "open research challenge." Even with a detailed "Model Spec" document, which outlines OpenAI's principles for model behavior and is shaped by a diverse team of researchers, safety experts, and policy makers, alignment remains an ongoing quest.
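The "predict words" point can be made concrete with a toy next-token sampler. A language model produces a probability distribution over possible next tokens and samples from it; a temperature parameter sharpens or flattens that distribution, which is one reason tone can drift rather than follow rules deterministically. This is a pedagogical sketch, not how any production model is implemented.

```python
import math
import random

def sample_next_token(logits: dict, temperature: float = 1.0, seed: int = 0) -> str:
    """Toy next-token sampler: the model chooses words by probability, not by rule.

    Low temperature sharpens the distribution (more "cautious and flat" output);
    high temperature flattens it (more varied, riskier phrasing).
    """
    rng = random.Random(seed)
    # Scale logits by temperature, then apply a numerically stable softmax.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Sample one token from the resulting distribution.
    r = rng.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point rounding
```

No rule forbids the unlikely token; it is merely improbable. That statistical character is why perfectly consistent behavior across billions of contexts remains, in Romaniuk's words, an open research challenge.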
OpenAI's approach to these complexities is guided by three core principles: maximizing helpfulness and freedom, minimizing harm, and choosing sensible defaults. They aim to maximize user autonomy, allowing customization while ensuring safety guardrails are never compromised. The future of AI style, as envisioned by Romaniuk, hinges on enhanced steerability, greater contextual awareness, and improved AI literacy and accessibility. The goal is to empower users with fine-grained control over AI’s traits and flair, enabling models to appropriately adapt their tone whether drafting medical guidance or a bedtime story. Ultimately, style management should feel as intuitive as customizing a phone wallpaper, while also educating users on how to harness the powerful capabilities of these systems effectively.
How models communicate is central to how humans experience AI. Some aspects of style are fixed for safety, but generally, the aim is flexibility. OpenAI believes style should always be anchored in freedom, expanding users' ability to explore ideas rather than restricting them.

