
Quality indicators #45

Closed
merangelik opened this issue May 15, 2024 · 12 comments

Comments

@merangelik

Hi,

Can I access the results used for the quality report without running the cruncher?

I am particularly interested in the "JD+ global quality assessment" indicator. Most of the remaining indicators I can derive from the output myself.

Btw, I am removing this issue from
rjdverse/rjd3workspace#21 (comment)
because I think it doesn't fit there anymore.

@merangelik
Author

merangelik commented May 28, 2024

Figure: "then a miracle occurs" (image attachment; source: http://training.burghhouse.com/managingsuccessfulprojects.htm)

@TanguyBarthelemy
Contributor

I don't think it's possible.
But I don't want to say anything wrong, so I will let @annasmyk confirm this.

@merangelik
Author

Thanks Tanguy, do you know of a way to compute this indicator myself?

@AQLT
Contributor

AQLT commented May 29, 2024

I also don't think it is currently exportable, nor that it would be easy to modify the corresponding Java code. A solution (and maybe a faster one) could be to reimplement it in R, defining your own thresholds and indicators. You can find the thresholds in JDemetra+ (under Tools > Options > SA > X13, for example, then double-click on a diagnostic). The reference manual 2.2 indicates the following:

The rule for the calculation of the Summary indicator as well as other aggregated indicators, which
combine n qualitative indicators, is given in Table 5.8. To calculate the average of the (defined)
diagnostics, 0 is assigned to Bad, 2 is assigned to Uncertain and 3 is assigned to Good.

And then :

  • Undefined: all of the n qualitative indicators are Undefined.
  • Error: the value of at least one of the n qualitative indicators is Error.
  • Severe: none of the n qualitative indicators is Error, and at least one of them is Severe.
  • Bad: none of the n qualitative indicators is Error or Severe, and the average of the (defined) diagnostics is less than 1.5.
  • Uncertain: none of the n qualitative indicators is Error or Severe, and the average of the (defined) diagnostics is in the range [1.5, 2.5[.
  • Good: none of the n qualitative indicators is Error or Severe, and the average of the (defined) diagnostics is at least 2.5.
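The aggregation rule quoted above is simple enough to sketch in code. Below is a minimal Python version (the thread discusses reimplementing it in R, but the logic translates directly). The function name and the string representation of the diagnostics are my own assumptions; only the scoring (Bad = 0, Uncertain = 2, Good = 3) and the thresholds come from the reference manual.

```python
def summary_indicator(diagnostics):
    """Aggregate n qualitative indicators into a summary grade.

    `diagnostics` is a list of strings, each one of:
    "Undefined", "Error", "Severe", "Bad", "Uncertain", "Good".
    """
    # Undefined indicators are excluded from the average.
    defined = [d for d in diagnostics if d != "Undefined"]
    if not defined:
        return "Undefined"
    # Error and Severe short-circuit the averaging rule.
    if "Error" in defined:
        return "Error"
    if "Severe" in defined:
        return "Severe"
    # Scores per the reference manual: Bad = 0, Uncertain = 2, Good = 3.
    scores = {"Bad": 0, "Uncertain": 2, "Good": 3}
    avg = sum(scores[d] for d in defined) / len(defined)
    if avg < 1.5:
        return "Bad"
    if avg < 2.5:          # [1.5, 2.5[
        return "Uncertain"
    return "Good"

# e.g. summary_indicator(["Good", "Good", "Uncertain"]) -> "Good"
```

Which n indicators enter the list is exactly the open question in this thread; the sketch only covers the aggregation step once you have them.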

@merangelik
Author

Thanks Alain,

I know this description from the reference manual, but it didn't really help me replicate the JD+ quality indicator last time because it is very vague.
If I remember correctly, I couldn't find out which n underlying qualitative indicators were actually used. But I'll have a closer look at the GUI if you think that might help in tracking down the number n and what it refers to.

@AQLT
Contributor

AQLT commented May 29, 2024

I think they are the tests shown in the Main results panel.

@annasmyk
Member

Hi,
we've been thinking about the opportunity to export these indicators.
They make sense only with regard to the thresholds implemented in YOUR GUI... that's a point to keep in mind.
anna

@AQLT
Contributor

AQLT commented Jun 27, 2024

In theory I would agree with you @annasmyk, but in practice few people actually know the thresholds, and even fewer change them. Also, this indicator was exportable in V2 (so this is a regression). I think a lot of people look at it, since it is highlighted in the interface, so it would be odd not to be able to export it. Changing the thresholds seems to have an effect on the "summary" statistic of the main results, but not on the "quality" diagnostic shown when you select an SA processing. It doesn't seem to be exportable in v3 or in the cruncher of v3 (to be verified), so maybe @merangelik you should open an issue in https://github.com/jdemetra/jdplus-main?

@annasmyk
Member

So the conclusion from my previous message is: it doesn't necessarily make sense to make these indicators available in R, where your GUI parameters are unknown (unless they are always the defaults).

...and I put a more detailed document about the computation here:
https://github.com/stace-tsa-shop/JDemetra-Auxiliary-documentation

No worries, I invited you to the stace shop organization.
I hereby close the issue.

@AQLT
Contributor

AQLT commented Jun 27, 2024

I don't agree with you on this. For example, the description of rjd3toolkit is "Utility package for R access to JDemetra+ version 3.x algorithms", so the packages are related to what is present in the GUI, and this statistic is calculated in the "JDemetra+ version 3.x algorithms". Also, the "quality" diagnostic is independent of the GUI parameters, and it was exportable in v2, so it was considered important: its absence may be an obstacle for those who want to upgrade to v3.
@annasmyk, since this issue is public and the document you mentioned is in a private repository, my suggestion would be to add your document to this issue.

@TanguyBarthelemy
Contributor

Or simply make the repo public, since it is a documentation repo. Does that make sense?

@palatej
Contributor

palatej commented Jun 28, 2024

I would like to add a few clarifications on this rather complex topic.

The GUI and R are just tools to access the core libraries. That means they give only partial views of the core algorithms. Those views are not necessarily identical; the packages are usually more general than the GUI (for instance, you can compute the canonical decomposition of models with any periodicity). The current limitations of the packages in comparison with the GUI are just a question of time or of opportunity.
As mentioned by Anna, the SA diagnostics (in the diagnostics panel of the GUI) depend on parameters that can be modified through the Tools > Options > SA panel. Moreover, the "summary" corresponds exactly to the "Quality" column in the multi-processing window.
The GUI currently contains a small inconsistency on that point: for performance reasons, changing some parameters in the settings of the diagnostics will not impact the multi-processing windows; you have to recompute them (for instance: reset + run). However, the details of the diagnostics of a specific series will be recomputed directly, because they are dynamic.

Now, concerning the "summary", I just added the code to export it (in the cruncher and in R). That will be pushed next week.
Direct access to the settings of the diagnostics from R will also be provided in the future. Such settings will be independent of what you have in the GUI (and not persistent). More information on the different diagnostics will also be available; I just need to find the best approach (performance, flexibility...).

Finally, I agree with Alain: you have all the tools to derive your quality report yourself. That's the solution I would prefer, by far.
