High data quality is crucial for producing trustworthy, relevant and meaningful information. After all, a complete dataset with correctly entered answers is far more useful than one full of errors.
MRDM actively enhances data quality in a number of ways:
We aim to make maximal use of data that derives directly from primary sources. This saves administration time and limits errors. We offer services to automatically deliver data from source systems to the MRDM platform.
Let us do the calculations. This saves administration time and limits errors.
Duration of stay, for example, is an important parameter in many studies and audits. Rather than asking how long the patient was hospitalised, we ask for the admission and discharge dates, from which we calculate the duration of stay ourselves. This approach works for many variables.
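The idea of deriving a variable from its source dates can be sketched as follows. This is a minimal illustration, not MRDM's actual implementation; the function and field names are assumptions.

```python
from datetime import date

def duration_of_stay(admission: date, discharge: date) -> int:
    """Derive the duration of stay (in days) from the two source dates,
    instead of asking registrars to calculate it themselves."""
    if discharge < admission:
        # An impossible combination is flagged rather than silently accepted.
        raise ValueError("discharge date precedes admission date")
    return (discharge - admission).days

# Example: admitted 1 March, discharged 5 March -> 4 days
print(duration_of_stay(date(2024, 3, 1), date(2024, 3, 5)))  # 4
```

Because the calculation happens centrally, every project uses the same definition of the variable, and a typo in a date surfaces as a validation error rather than a wrong derived value.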
We immediately report on potential errors, missing values or unlikely answers by means of alerts or error reports. This facilitates error correction and increases data quality.
The engine is based not only on technical validations (e.g. a person reported to be over 200 years old) but also on medical validations. The validation rules for every project are discussed and confirmed with the help of statistical and medical specialists.
The validations also take into account the relations between questions, including retrospectively. For example: a woman is reported to be 32 and pregnant. If her age is later corrected to 75, the unlikeliness of her pregnancy will also be flagged. This facilitates corrections.
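The cross-field validation described above can be sketched as re-running the full rule set on a record whenever any field changes, so a later correction re-triggers related checks. This is an illustrative sketch only; the rule names, thresholds and record fields are assumptions, not MRDM's actual validation engine.

```python
def validate_record(record: dict) -> list[str]:
    """Run technical and cross-field checks on a complete record.

    Because the whole rule set runs on every (re)submission, correcting
    one field can surface a new alert on a related field."""
    alerts = []
    age = record.get("age")

    # Technical validation: a single field has an impossible value.
    if age is None:
        alerts.append("missing value: age")
    elif not 0 <= age <= 120:
        alerts.append(f"unlikely value: age {age}")

    # Cross-field validation: two individually valid answers that are
    # implausible in combination (illustrative age range).
    if record.get("pregnant") and age is not None and not (12 <= age <= 55):
        alerts.append(f"unlikely combination: pregnant at age {age}")

    return alerts

print(validate_record({"age": 32, "pregnant": True}))  # [] -> no alerts
print(validate_record({"age": 75, "pregnant": True}))  # flags the combination
```

The design point is that validations operate on the record as a whole, not on single answers in isolation: the age correction alone is valid, but re-evaluating it together with the pregnancy answer produces the alert.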
Our documentation and systems include explanations of variables, clarifying what each question means. This limits misunderstanding and ambiguity.
We perform independent verification, comparing source data with the data submitted to the system to identify discrepancies. We do this through site visits or by re-using medical-administrative data.
Regardless of the method of data delivery (manual, batch or connectivity service), all our validation engines, dataset rules and updates are fully synchronised and centrally managed. This ensures uniformity.