SDMX Users Forum


Author Topic: Relative Standard Errors (RSEs) etc  (Read 6872 times)

Al Hamilton - ABS

Relative Standard Errors (RSEs) etc
« on: February 04, 2010, 01:15:42 AM »

For survey outputs, where practical, the ABS prefers to make available measures of the "quality" of estimates, such as the RSE associated with an estimate, along with the estimate itself.  For some forms of visual presentation, rather than showing a jumble of estimates and corresponding RSEs, we just highlight (e.g. with an annotation) those estimates where the RSE is particularly high.  A commonly used scheme is:

    an asterisk (*) where "estimate has a relative standard error of 25% to 50% and should be used with caution", and
    a double asterisk (**) where "estimate has a relative standard error greater than 50% and is considered too unreliable for general use"

Apparently some other agencies suppress data from publication where the RSE is considered too high.

For maximum flexibility it would appear sensible to carry the RSEs themselves within the SDMX data (e.g. as an uncoded, numeric attribute at the observation level). Because these attributes are identifiable as RSEs (e.g. via the specific concept underpinning them), processing/transform logic could then convert them to annotations, or do something else, at the time of rendering.
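As an illustration of that render-time logic, here is a minimal sketch in Python. It assumes RSEs arrive as numeric observation-level attribute values expressed as proportions (0.25 for 25%); the function and field names are hypothetical, not part of any SDMX library.

```python
# Hypothetical sketch: map an observation-level RSE attribute
# (a proportion, e.g. 0.25 for 25%) to the ABS-style reliability flag.

def rse_annotation(rse):
    """Return the annotation flag for an RSE given as a proportion."""
    if rse is None:
        return ""    # no RSE supplied, so no flag
    if rse > 0.50:
        return "**"  # too unreliable for general use
    if rse >= 0.25:
        return "*"   # should be used with caution
    return ""        # acceptable reliability, no flag

# Illustrative observations with the RSE carried alongside the estimate.
observations = [
    {"value": 1520.0, "RSE": 0.10},
    {"value": 87.0,   "RSE": 0.32},
    {"value": 12.0,   "RSE": 0.61},
]
for obs in observations:
    print(f'{obs["value"]}{rse_annotation(obs["RSE"])}')
    # prints: 1520.0, then 87.0*, then 12.0**
```

Because the RSE itself travels with the data, the same attribute could instead drive suppression, a tooltip, or a different flagging scheme, without republishing the estimates.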

There is also sometimes an argument about whether an RSE of, e.g., 25% should be represented as "25" or "0.25".  Part of defining the concept would be to make a clear and consistent choice in this regard; otherwise, for example, processing logic could not be applied consistently.  ("0.25" rather than "25" is currently favoured.)
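The fragility of leaving this choice open can be shown with a toy normaliser. The heuristic below (treat values above 1 as percentages) is an assumption for illustration only, and it fails exactly for RSEs above 100% expressed as proportions, which is why fixing one convention in the concept definition matters.

```python
# Illustrative only: guess whether a raw RSE value is a percentage
# ("25") or a proportion ("0.25") and normalise to the proportion
# convention favoured in the post. A real implementation would rely
# on the concept's defined representation rather than this heuristic.

def normalise_rse(raw):
    value = float(raw)
    # Assume values above 1 were recorded as percentages.
    return value / 100.0 if value > 1.0 else value

print(normalise_rse("25"))    # 0.25
print(normalise_rse("0.25"))  # 0.25
# An RSE of 150% written as the proportion 1.5 would be misread
# as 1.5% by this heuristic, illustrating the ambiguity.
```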

The way we conceptualise the relationship between an estimate and its RSE, and the logic we want to apply, leads us away from other options such as adding a dimension that differentiates between estimates, RSEs, SEs etc.  We looked at the SAMPLING_ERR concept in the SDMX Content Oriented Guidelines.  While related, it is a more general concept.  The ESMS (European Statistical Metadata Structure) usage of SAMPLING_ERR would typically position it well above the level of an individual observation, and the fact that its default representation is text is a further distinction between the primary focus of SAMPLING_ERR and the RSE concept.

Have any other implementers considered (and maybe chosen) an approach to RSEs and related observation level "quality" measures?       

Don McIntosh - STR

Re: Relative Standard Errors (RSEs) etc
« Reply #1 on: August 05, 2010, 09:11:14 PM »

Al, I agree this makes a lot of sense. As you're probably aware (but for the benefit of others), in our SuperSTAR software we treat RSEs as generic SDMX annotations, but, as you say, it would be very useful to have a more specific observation-level quality measure. As a vendor, we sometimes find it difficult to arrive at the right, generic use of SDMX, because we have to make our own decisions about how fairly fundamental metadata should be recorded in this format.

This is an old post - anyone got any more recent info on this they would care to share?