Despite the fact that more and more data sources are at our disposal, that their economic impact is projected to reach record levels in the coming years, and that data are more affordable than ever, their potential for reuse remains rather limited. One explanation for this phenomenon is that potential users of such data often face a host of barriers that impede access and use.
The facets in which quality problems can hinder the reuse of data are manifold: metadata that are scarcely descriptive or standardized, the choice of licence, the choice of format, the improper use of formats, or deficiencies in the data themselves. Accordingly, there are many initiatives that aim to measure the quality of datasets on the basis of their metadata (publication date and update frequency, licence, formats used, etc.), such as the metadata quality assessment of the European Data Portal or the quality dimension of the Open Data Maturity Index.
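To make this kind of metadata-based assessment concrete, here is a minimal sketch of a quality scorer. The specific checks, field names (`licence`, `format`, `description`, `last_update`) and weights are illustrative assumptions, not the actual algorithm used by the European Data Portal or the Open Data Maturity Index:

```python
from datetime import date

# Hypothetical checks over a dataset's metadata record; each returns
# True when the corresponding quality facet is satisfied.
CHECKS = {
    "has_licence": lambda m: bool(m.get("licence")),
    "machine_readable_format": lambda m: m.get("format", "").lower()
    in {"csv", "json", "xml", "rdf"},
    "has_description": lambda m: len(m.get("description", "")) >= 20,
    "recently_updated": lambda m: m.get("last_update") is not None
    and (date.today() - m["last_update"]).days <= 365,
}


def metadata_quality(meta: dict) -> float:
    """Return the fraction of quality checks the metadata passes (0.0 to 1.0)."""
    passed = sum(1 for check in CHECKS.values() if check(meta))
    return passed / len(CHECKS)


# Example (hypothetical) dataset record:
record = {
    "licence": "CC-BY-4.0",
    "format": "CSV",
    "description": "Monthly air-quality readings for the city of Madrid.",
    "last_update": date.today(),
}
print(metadata_quality(record))  # → 1.0
```

Real portals use far richer vocabularies (e.g. DCAT metadata profiles), but the principle is the same: score what can be checked automatically from the catalogue record alone.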
But such analyses are insufficient, since in most cases quality shortcomings can only be identified once the reuse process has started. The vetting and preparation work this entails is becoming a major burden, in many cases an unaffordable one for the open data user. This creates frustration and loss of interest among the reuser community in the data provided by public agencies, damaging the credibility of the publishing institutions and significantly lowering expectations of return and value generation from the reuse of open data.
These potential problems can be tackled because, in large measure, it has been observed that they stem from the publisher not knowing how to express the data correctly in the chosen format.
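A sketch of what "not expressed correctly in the chosen format" can mean in practice: a structural check that a reuser (or, better, the publisher before release) might run on a CSV file. The function name and the checks themselves are illustrative assumptions, not a reference to any particular validation tool:

```python
import csv
import io


def csv_problems(text: str) -> list[str]:
    """Return human-readable structural problems found in a CSV payload."""
    problems = []
    rows = list(csv.reader(io.StringIO(text)))
    if not rows or not any(rows[0]):
        problems.append("missing or empty header row")
        return problems
    width = len(rows[0])
    for i, row in enumerate(rows[1:], start=2):
        if len(row) != width:
            problems.append(f"line {i}: expected {width} columns, found {len(row)}")
    return problems


good = "city,aqi\nMadrid,42\nSeville,37\n"
bad = "city,aqi\nMadrid,42,extra\n"
print(csv_problems(good))  # → []
print(csv_problems(bad))   # → ['line 2: expected 2 columns, found 3']
```

Checks like this are cheap to run at publication time; shifting them from the reuser back to the publisher is exactly the kind of fix the paragraph above suggests.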