You are probably familiar with the scenario: a website that gives plainly wrong information. The error is replicated virally across the Web. It’s almost impossible to correct. Almost everyone accepts it as solid fact. But you know it’s nonsense, because you’ve actually studied the original source.
In years to come, researchers are going to find access to primary sources increasingly restricted. Archivists whose task is to conserve fragile documents will be reluctant to let most people near them, especially once the content has been digitised.
Photographic digitisation is often the best compromise. But it consumes a lot of storage, and a poor photograph is a poor substitute for examining the original. In any case, if the original is in an unfamiliar language or script, it may be impenetrable to the non-expert.
The alternative – simple transcription of the content – is only as good as the transcriber. Some are experts in their field, painstakingly accurate, often doing the job for the love of it. Others are casual workers with little relevant knowledge, and little incentive to do the job well, although their work may quickly go global.
As access to primary sources diminishes, researchers need the assurance of high-quality substitutes: transcriptions and transliterations by experts, good quality photographs, and so on.
But if these are at risk of being overwritten or otherwise undermined by careless data practices, the genealogy world as a whole will rest on increasingly flimsy foundations.
The Genealogy Quality Code is designed to give good quality databases a fighting chance of survival in the internet jungle.