Last week the billionaire and owner of X, Elon Musk, claimed the pool of human-generated data that's used to train artificial intelligence (AI) models such as ChatGPT has run out.
Musk didn't cite evidence to support this. But other leading tech industry figures have made similar claims in recent months. And earlier research indicated human-generated data would run out within two to eight years.
This is largely because humans cannot create new data such as text, video and images fast enough to keep up with the enormous and fast-growing demands of AI models. When genuine data does run out, it will present a major problem for both developers and users of AI.
It will force tech companies to depend more heavily on data generated by AI, known as “synthetic data”. And this, in turn, could make the AI systems currently used by hundreds of millions of people less accurate, less reliable and, therefore, less useful.
But this isn't an inevitable outcome. In fact, if used and managed carefully, synthetic data could improve AI models.
Tech companies depend on data – real or synthetic – to build, train and refine generative AI models such as ChatGPT. The quality of this data is crucial. Poor data leads to poor outputs, in the same way using low-quality ingredients in cooking can produce low-quality meals.
Real data refers to text, video and images created by humans. Companies collect it through methods such as surveys, experiments, observations or mining of websites and social media.
Real data is generally considered valuable because it includes true events and captures a wide range of scenarios and contexts. However, it isn't perfect.
For example, it can contain spelling errors and inconsistent or irrelevant content. It can also be heavily biased, which can, for example, lead to generative AI models creating images that show only men or white people in certain jobs.
This kind of data also requires a lot of time and effort to prepare. People first collect datasets and label them to make them meaningful for an AI model. They then review and clean the data to resolve any inconsistencies, before computers filter, organise and validate it.
This process can take up to 80% of the total time investment in the development of an AI system.
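To make that preparation work concrete, here is a minimal sketch in Python of the kind of cleaning, filtering and validation pass described above. The field names, labels and rules are hypothetical examples for illustration, not taken from any particular AI pipeline.

```python
# A toy cleaning-and-validation pass over labelled text records.
# Field names, labels and rules below are hypothetical, for illustration only.

ALLOWED_LABELS = {"animal", "finance"}

raw_records = [
    {"text": "The cat sat on the mat.", "label": "animal"},
    {"text": "   ", "label": "animal"},                      # blank text
    {"text": "Stocks rose sharply today.", "label": None},   # missing label
    {"text": "The cat sat on the mat.", "label": "animal"},  # duplicate
]

def clean(records):
    seen_texts = set()
    cleaned = []
    for record in records:
        text = (record.get("text") or "").strip()
        label = record.get("label")
        # Filter out blank text, unknown or missing labels, and duplicates.
        if not text or label not in ALLOWED_LABELS or text in seen_texts:
            continue
        seen_texts.add(text)
        cleaned.append({"text": text, "label": label})
    return cleaned

print(clean(raw_records))
# -> [{'text': 'The cat sat on the mat.', 'label': 'animal'}]
```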
But as stated above, real data is also in increasingly short supply because humans can't produce it quickly enough to feed burgeoning AI demand.
The rise of synthetic data
Synthetic data is data created by algorithms, such as text generated by ChatGPT or an image generated by DALL-E.
In theory, synthetic data offers a cost-effective and faster solution for training AI models.
It also addresses privacy concerns and ethical issues, particularly with sensitive personal information like health data.
Importantly, unlike real data it isn't in short supply. In fact, it's unlimited.
The challenges of synthetic data
For these reasons, tech companies are increasingly turning to synthetic data to train their AI systems. Research firm Gartner estimates that by 2030, synthetic data will become the main form of data used in AI.
But although synthetic data offers promising solutions, it is not without its challenges.
A primary concern is that AI models can “collapse” when they rely too heavily on synthetic data. This means they start generating so many “hallucinations” – responses that contain false information – and decline so much in quality and performance that they become unusable.
For example, AI models already struggle with spelling some words correctly. If this mistake-riddled data is used to train other models, then they too are bound to replicate the errors.
Synthetic data also carries a risk of being overly simplistic. It may be devoid of the nuanced details and diversity found in real datasets, which could result in the output of AI models trained on it also being overly simplistic and less useful.
Creating robust systems to keep AI accurate and trustworthy
To address these issues, it's essential that international bodies and organisations such as the International Organisation for Standardisation or the United Nations' International Telecommunication Union introduce robust systems for tracking and validating AI training data, and ensure the systems can be implemented globally.
AI systems can be equipped to track metadata, allowing users or systems to trace the origins and quality of any synthetic data they have been trained on. This would complement a globally standard tracking and validation system.
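As a rough illustration, provenance tracking of this kind could look like the following sketch, which attaches origin metadata to each training record. The schema and field names are assumptions made for this example, not an existing standard.

```python
# A toy provenance schema: each training record carries metadata saying
# whether it is real or synthetic and, if synthetic, which model produced it.
# The schema is hypothetical, for illustration only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class TrainingRecord:
    text: str
    source_type: str                   # "real" or "synthetic"
    generator: Optional[str] = None    # generating model, if synthetic
    collected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

records = [
    TrainingRecord("A human-written product review.", source_type="real"),
    TrainingRecord("A model-written product review.",
                   source_type="synthetic", generator="example-llm-v1"),
]

# An auditor or training pipeline can later trace, filter or weight records by origin.
synthetic_share = sum(r.source_type == "synthetic" for r in records) / len(records)
print(f"Synthetic share of this dataset: {synthetic_share:.0%}")
```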
Humans must also maintain oversight of synthetic data throughout the training process of an AI model to ensure it is of a high quality. This oversight should include defining objectives, validating data quality, ensuring compliance with ethical standards and monitoring AI model performance.
Somewhat ironically, AI algorithms can also play a role in auditing and verifying data, ensuring the accuracy of AI-generated outputs from other models. For example, these algorithms can compare synthetic data against real data to identify errors or discrepancies and ensure the data is consistent and accurate. In this way, synthetic data could lead to better AI models.
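One very simple form of such a check is a statistical comparison between real and synthetic samples. The sketch below compares a single, basic text statistic; both the statistic chosen and the drift threshold are illustrative assumptions, not an established auditing method.

```python
# A toy comparison of real and synthetic text on one simple statistic
# (average word count per sample). The 30% drift threshold is arbitrary.
from statistics import mean

real_texts = [
    "The committee postponed its decision until further evidence arrives.",
    "Local councils reported a sharp rise in applications this spring.",
]
synthetic_texts = [
    "Decision postponed.",
    "Sharp rise reported by councils.",
]

def avg_word_count(texts):
    return mean(len(t.split()) for t in texts)

real_avg = avg_word_count(real_texts)
synthetic_avg = avg_word_count(synthetic_texts)

# Flag the synthetic set for human review if it drifts too far from the real set.
if abs(real_avg - synthetic_avg) / real_avg > 0.3:
    print(f"Flag for review: synthetic avg {synthetic_avg:.1f} words vs real {real_avg:.1f}")
else:
    print("Synthetic data roughly matches the real data on this statistic.")
```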
The future of AI depends on high-quality data. Synthetic data will play an increasingly important role in overcoming data shortages.
However, its use must be carefully managed to maintain transparency, reduce errors and preserve privacy – ensuring synthetic data serves as a reliable supplement to real data, keeping AI systems accurate and trustworthy.
(James Jin Kang, Senior Lecturer in Computer Science, RMIT University Vietnam)
(This article is republished from The Conversation under a Creative Commons license. Read the original article)