"Datawork is deeply human," asserts Adriana Alvarado, Staff Research Scientist at IBM, a statement that cuts directly to the often-overlooked core of artificial intelligence development. Her presentation, "LLM + Data: Building AI with Real & Synthetic Data," illuminated the intricate relationship between Large Language Models (LLMs) and the data that underpins them. Alvarado underscored that while AI's capabilities continue to evolve at a breathtaking pace, the quality and characteristics of its foundational data remain paramount, demanding a human-centered approach often obscured by technical jargon.
Every AI model, from the simplest algorithm to the most sophisticated LLM, begins and ends with data. This fundamental truth means that the choices made during data collection, curation, and preparation are not merely technical steps but critical decisions that profoundly influence an AI system's ultimate performance, fairness, and utility. The rapid ascent of LLMs has only amplified this dependency, positioning data as the undisputed engine behind chatbots, generative AI, and countless other emerging technologies shaping our digital future.
Alvarado argues that the entire lifecycle of data—from its initial collection and annotation to its eventual deployment and iterative refinement—is a process she terms "datawork." This concept highlights the continuous, often painstaking, day-to-day effort involved in producing, managing, and utilizing data, an effort that is "deeply human." For founders and VCs investing in AI, understanding datawork means recognizing that the raw material of AI is not a static resource but a dynamic, human-shaped construct. Despite its undeniable value, datawork is frequently overlooked, undervalued, and sometimes even considered invisible within the broader AI development landscape. Yet, it is precisely these human-driven decisions, often involving complex social, ethical, and technical considerations, that ultimately dictate how AI systems function and perform in the real world. "Every single decision that they make has a downstream effect on model performance," Alvarado stresses, underscoring the direct and profound link between human choices in data handling and the operational integrity of AI products and services.
The human element in datawork also introduces significant challenges, particularly concerning inherent biases. When practitioners make choices about categorizing or labeling data for a dataset, they are implicitly deciding "who gets to be represented and who doesn't." This selection process, often influenced by the perspectives and biases of the data curators themselves, is rarely neutral. Consequently, Alvarado points out that "most of the datasets used to train AI systems currently do not represent the world equally." They tend to over-represent certain regions, languages, or cultural perspectives while under-representing others, creating substantial gaps in how models respond to diverse questions or interact with varied user groups. For startup leaders and tech insiders, this means that even the most technically advanced LLM, if trained on skewed data, risks perpetuating existing societal inequalities, leading to AI systems that are less equitable, less effective, and potentially harmful when deployed in sensitive applications or global markets. Addressing this requires not just technical fixes, but a conscious, human-centered approach to data curation that prioritizes fairness and inclusivity from the outset.
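One way to make such representational skew visible is a simple audit of how examples are distributed across groups. The sketch below is illustrative only: the dataset, its `language` and `region` fields, and the records themselves are hypothetical assumptions, not artifacts from Alvarado's talk.

```python
from collections import Counter

# Hypothetical records: each training example tagged with the language and
# region it represents. Field names and data are illustrative, not from
# Alvarado's presentation.
dataset = [
    {"text": "How do I reset my password?", "language": "en", "region": "North America"},
    {"text": "Wie ist das Wetter heute?", "language": "de", "region": "Europe"},
    {"text": "How do I cancel my order?", "language": "en", "region": "North America"},
    {"text": "Habari ya asubuhi", "language": "sw", "region": "East Africa"},
]

def representation_report(records, field):
    """Print each group's share of the dataset so skew is visible at a glance."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    for value, n in counts.most_common():
        print(f"{field}={value}: {n} examples ({n / total:.1%})")

representation_report(dataset, "language")  # "en" dominates even this toy sample
representation_report(dataset, "region")
```

Even a crude report like this forces the "who gets to be represented" question into the open before a model is ever trained.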
For Large Language Models, the stakes surrounding data are considerably higher due to their immense scale and emergent capabilities. These models demand not just large volumes of data, but highly specialized datasets tailored to specific tasks and domains, required across their entire development lifecycle—from initial broad pre-training to subsequent narrow fine-tuning. Building these comprehensive and nuanced datasets is an arduous task, presenting significant logistical and ethical hurdles. "Securing these datasets is far from easy," Alvarado explains, noting that "practitioners face ongoing challenges securing massive, diverse, and representative datasets while also addressing bias and gaps." The pursuit of both unparalleled quantity and unimpeachable quality in data, coupled with the imperative to actively mitigate inherent biases, represents a monumental and ongoing undertaking for AI developers and the organizations funding them. This complexity underscores why data acquisition and curation are increasingly becoming strategic differentiators in the competitive AI landscape.
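To make the idea of specialized, task-tailored datasets concrete, here is a minimal sketch of what a single supervised fine-tuning record might look like, stored in the JSON Lines format commonly used for such corpora. The schema, field names, and domain are assumptions for illustration, not a format prescribed in the presentation.

```python
import json

# One illustrative fine-tuning record for a narrow domain task (here,
# clinical note summarization). The schema is an assumption: teams define
# their own fields, but instruction/response pairs plus provenance
# metadata are a common pattern.
record = {
    "instruction": "Summarize the patient note in two sentences.",
    "input": "Patient presents with persistent cough for 3 weeks, no fever.",
    "output": "A 3-week persistent cough with no fever. Chest X-ray ordered.",
    "metadata": {
        "domain": "clinical",
        "source": "de-identified hospital notes",  # provenance matters downstream
        "annotator_id": "ann-042",                 # datawork is done by people
    },
}

# Fine-tuning corpora are commonly stored as JSON Lines: one record per line.
with open("finetune.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```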
In response to these persistent challenges, particularly the difficulty of acquiring sufficient, unbiased real-world data that also respects privacy, many practitioners are now exploring synthetic data. Generated by LLMs themselves, synthetic data offers a promising avenue for filling representational gaps, augmenting scarce real-world data, and reducing privacy exposure by mimicking real attributes without revealing actual individuals. However, Alvarado cautions that synthetic data is not a panacea; it introduces a new set of critical responsibilities. Crucially, "every dataset built in this way requires detailed documentation, which includes seed data, prompts to generate the data, and parameter settings." Without meticulous records of these three elements, tracing the origins and transformations of synthetic data becomes exceedingly difficult, and that lack of provenance hinders transparency, accountability, and the ability to diagnose and rectify biases or inaccuracies inadvertently propagated from the original models or prompts. For defense and AI analysts, this underscores the need for robust data governance frameworks to ensure the trustworthiness and explainability of AI systems relying on synthetic inputs.
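As a concrete illustration, the sketch below logs exactly the three elements Alvarado names, the seed data, the generation prompt, and the parameter settings, alongside basic run context. The function, field names, and model identifier are hypothetical; a real governance framework would define its own schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(seed_path, prompt, params, generator_model):
    """Capture seed data, prompt, and parameter settings for one synthetic
    generation run. All field names are illustrative, not a formal standard."""
    with open(seed_path, "rb") as f:              # assumes the seed file exists locally
        seed_digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "generator_model": generator_model,        # which LLM produced the data
        "seed_data": {"path": seed_path, "sha256": seed_digest},
        "prompt": prompt,                          # exact text used to guide generation
        "parameters": params,                      # e.g. temperature, top_p, max_tokens
    }

log = provenance_record(
    seed_path="seed_examples.jsonl",
    prompt="Paraphrase each seed question in formal customer-support English.",
    params={"temperature": 0.7, "top_p": 0.9, "max_tokens": 256},
    generator_model="example-llm-v1",  # hypothetical model name
)
with open("synthetic_provenance.json", "w", encoding="utf-8") as f:
    json.dump(log, f, indent=2)
```

Hashing the seed file and recording the exact prompt and parameters is what makes a synthetic dataset auditable after the fact, rather than a black box.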
As LLMs continue their rapid evolution, so too must the methodologies for building and maintaining their data foundations. Alvarado emphasizes three critical considerations for the future of data-driven AI: specialized datasets are paramount, scale does not guarantee diversity or quality, and dataset categories must reflect user needs and application conditions. The future of robust, ethical, and performant AI hinges not just on algorithmic advancements, but on a profound commitment to understanding and meticulously shaping the data that breathes life into these intelligent systems.
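The second of those considerations, that scale does not guarantee diversity, can be checked with even a crude metric. The toy sketch below computes the fraction of distinct bigrams in a corpus; the corpora are invented for illustration, showing how a larger but repetitive dataset can score far lower than a small varied one.

```python
def distinct_ngram_ratio(texts, n=2):
    """Fraction of n-grams that are unique across a corpus: a crude
    diversity signal. Higher is more varied; duplication drives it down."""
    seen, total = set(), 0
    for text in texts:
        tokens = text.lower().split()
        for i in range(len(tokens) - n + 1):
            seen.add(tuple(tokens[i : i + n]))
            total += 1
    return len(seen) / total if total else 0.0

# A small varied corpus vs. a larger but repetitive one (toy data):
small_varied = ["the cat sat", "a dog barked loudly", "rain fell overnight"]
large_repetitive = ["the cat sat"] * 50 + ["the cat sat down"] * 50

print(distinct_ngram_ratio(small_varied))      # 1.0: every bigram unique
print(distinct_ngram_ratio(large_repetitive))  # ~0.01: scale without diversity
```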

