The Two-Tier Medicare Rebate Divide – Are the Two Tiers Necessary, and Where Is the Value?
Compiled by: Dr Clive Jones Dipt, DipCouns, DipLC, BEd, MEd, GradDipPsych, PhD(psych), MAPS, FCCOUNP, FCSEP and Peter Pacey BSc(Hons) Assoc MAPS.
When the Australian Government introduced the Medicare Better Access to Mental Health Care initiative in November 2006, an unfortunate misconception was also introduced that continues to be perpetuated within the Australian national mental health care system. Specifically, this misconception implies that clinical psychologists hold a greater level of expertise and deliver a higher quality of service in the assessment and treatment of mental health concerns than all other psychologists practising in the same field.
The ramifications of this misconception about the difference in quality of service between two distinct psychology groups are profound, and include:
- A large discrepancy in the rebate for consumers obtaining the same psychological service that produces the same treatment outcome,
- Unfounded systemic bias in remuneration for psychologists,
- Inappropriate restrictions on the scope of practice of most psychologists in Australia today, which ultimately limit community access to psychological services, particularly in rural and remote areas,
- Major shifts in both public and professional perceptions of the psychological expertise of some psychologists over others,
- A widespread discrediting of Australian psychologists and the training they have undertaken.
This brief report aims to draw attention to key research that has specifically addressed these misconceptions, by offering a summary of its findings.
Research commissioned by the Australian Government1 and additional post-hoc analysis of this research2 demonstrates clearly that there is no difference in treatment outcomes when comparing clinical psychologists treating under tier one of Medicare Better Access with the treatment outcomes of all other registered psychologists treating under tier two of Medicare Better Access.
The research commissioned by the Australian Government1 provides a comprehensive overview of:
- Pre/post measures and mean group differences derived from the K-10 and the three (3) subscales of the DASS (i.e., Depression, Anxiety & Stress)
- Comparisons across clinical, generalist and GP treatment groups, and
- Comparisons between mild, moderate and severe pre- and post-treatment client groups.
Ultimately the findings of this research show:
- All groups (i.e., clinical psychologists, generalist psychologists and GPs) produced symptom reduction (as measured by the K-10 and DASS) post treatment
- The psychologist group combined (i.e., clinical and generalist) produced greater symptom reduction post treatment compared to the GP group
- There was no difference in post treatment measures between the clinical psychology group and the generalist psychology group
- Pre- and post-treatment measures and mean group differences derived from the K-10 and the three subscales of the DASS (i.e., Depression, Anxiety and Stress) showed equivalent, statistically significant post-treatment change between the top-tier and lower-tier psychologist groups, across mild, moderate and severe symptom levels.
Post Hoc Analysis
Post hoc analysis (i.e., additional analysis conducted after a study is complete) of the data from the 2011 Government-commissioned study was conducted by Prof Mark Anderson2 in 2016 to provide further clarity on outcome differences between the clinical psychology group and the generalist psychology group treating under Medicare Better Access.
The post hoc analysis applied by Prof Anderson utilised Cohen's d, an effect size statistic that directly compares pre- and post-treatment scores.
Cohen's d values are conventionally interpreted as follows:
- Small effect size between pre/post treatment: d=0.20
- Medium effect size between pre/post treatment: d=0.50
- Large effect size between pre/post treatment: d=0.80
- Very large effect size between pre/post treatment: d=1.20
Professor Mark Anderson explains (email correspondence: 15th December 2016):
“the main article did not go far enough in its analysis of differences (or no differences) between groups, but with the stats supplied in the article I was able to calculate the repeated-measures effect sizes (Cohen’s d) for the pre- to post-measures for the reduction in scores for K-10 and the three DASS subscales for clinical and general psychologists. That calculation is done by taking the mean difference values (column in the Table 5 on page 734 labelled “Mean difference (SD)”) and dividing them by the SD of the difference scores (also supplied in that column). All the Cohen’s d values are, by the conventional standards of psychological research, in the large (d=.80) to very large (d=1.20) “range”.
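The calculation Prof Anderson describes can be sketched as follows. The figures below are illustrative placeholders only, not the actual values from Table 5 of the evaluation; the formula, however, is the standard repeated-measures Cohen's d (mean of the pre/post difference scores divided by the standard deviation of those difference scores), interpreted against the conventional benchmarks listed above.

```python
# Repeated-measures Cohen's d: the mean pre/post difference divided by
# the standard deviation of the difference scores, as described above.
def cohens_d_repeated(mean_diff: float, sd_diff: float) -> float:
    return mean_diff / sd_diff

def interpret(d: float) -> str:
    # Conventional benchmarks listed earlier in this report.
    if d >= 1.20:
        return "very large"
    if d >= 0.80:
        return "large"
    if d >= 0.50:
        return "medium"
    if d >= 0.20:
        return "small"
    return "negligible"

# Illustrative values only (NOT the published Table 5 figures):
d = cohens_d_repeated(mean_diff=8.2, sd_diff=9.3)
print(round(d, 2), interpret(d))
```

This makes the logic of the analysis transparent: any reader with the published mean differences and their standard deviations can reproduce the effect sizes directly.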
Professor Mark Anderson offers a conclusion to the post hoc analysis of the findings (email correspondence: 15th December 2016):
“Effect sizes are much more meaningful and interpretable than p values and tests of statistical significance. The Ns (sample population) here are quite large, so one can be fairly confident in the reliability of the resulting effect sizes.” … “What one can see quite clearly is that the changes in scores on these four measures do not differ in any meaningful way between clinical and general psychologists.”
“We don’t need any further statistical test to come to the conclusion that there are no meaningful differences in outcomes on these measures between clinical and general psychologists.”
In summary, the post hoc analysis found that effect sizes for both the clinical and general registered psychology groups were large to very large, with Cohen's d values ranging from 0.82 to 1.20, demonstrating the power of the results and the significance of the findings. Ultimately, these data confirm that both psychology groups produced the same large to very large effect sizes across mild, moderate and severe caseloads.
There is no difference in outcomes as measured by the K-10 and DASS pre/post treatment when comparing the tier one clinical psychology group and tier two other registered psychology group treating mild, moderate and severe cases under Medicare Better Access.
Ultimately, we must accept and openly promote the fact that all psychologists across both tiers of Medicare are equally effective in treating mental health concerns.
1 Pirkis, J., Ftanou, M., Williamson, M., Machlin, A., Spittal, M. J., Bassilios, B., & Harris, M. (2011). Australia’s better access initiative: An evaluation. Australian and New Zealand Journal of Psychiatry, 45, 726-739.
2 Anderson, M. (2016). Post hoc analysis of research on Australia's Better Access initiative. Halmstad University, Sweden (personal communication, 15 December 2016).
Responding to the Naysayers
The Better Access evaluation research5 has not been without its critics1&2. In particular, strong criticism has come from Hickie et al1. The main arguments against the evaluation methodology are that it was not a randomised controlled trial (RCT), that the consumers used in the evaluation were recruited by providers, and that the sample was not sufficiently large or representative. The critics go so far as to suggest that funding for ALL psychologists (clinical and others) under Medicare Better Access is a waste. Such a strong voice may suggest another ‘funding’ agenda, especially given the rebuttals below3&4.
Jorm3 states, “Better Access provides funding for the services of both clinical psychologists and general psychologists without specialized clinical training. The decision to include general psychologists in the scheme has been controversial, with arguments that they are not adequately trained for the task, and has led to a rift between some leading clinical psychologists and the Australian Psychological Society, which represents all psychologists. Because general psychologists greatly outnumber clinical psychologists, their inclusion in the scheme has been seen as one of the reasons for the cost blow-out. Indeed, in 2009, general psychologists provided around double the number of Better Access services as clinical psychologists. However, do they produce different patient outcomes? The evaluation by Pirkis and colleagues provides data on symptom scores pre- and post-treatment for clinical psychologists, general psychologists and GPs. From these data it is possible to calculate uncontrolled (pre- versus post-therapy) effect sizes. The standardized mean change score was 1.31 for clinical psychologists, 1.46 for general psychologists and 0.97 for GPs. The effect sizes for the two groups of psychologists are similar and are comparable to the mean uncontrolled effect size of 1.29 reported in a meta-analysis of psychological therapies in routine clinical settings. On the data available, it appears that general psychologists produce equivalent outcomes to clinical psychologists and perhaps better average outcomes than GPs.”
In the above statement Jorm (2011) cites an international comparison treatment effect size of 1.29 from a UK-based meta-analysis. This demonstrates that Australian psychologists treating under both tier one and tier two of Medicare Better Access achieve treatment effect sizes of equivalent international standing to those found in that UK meta-analysis.
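Jorm's comparison can be reproduced arithmetically. The three standardized mean change scores and the UK benchmark below are taken directly from the quoted passage; the script simply lays out each group's difference from that benchmark.

```python
# Standardized mean change scores quoted by Jorm (2011) and the UK
# routine-care benchmark from the meta-analysis he cites.
effect_sizes = {
    "clinical psychologists": 1.31,
    "general psychologists": 1.46,
    "GPs": 0.97,
}
uk_benchmark = 1.29

for group, d in effect_sizes.items():
    diff = d - uk_benchmark
    print(f"{group}: d = {d:.2f} ({diff:+.2f} vs UK benchmark of {uk_benchmark})")
```

Laid out this way, the gap between the two psychologist groups and the UK benchmark is a few hundredths of a standard deviation in either direction, while the GP group sits noticeably below it, which is exactly the pattern Jorm describes.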
Jorm3, in his published critique of the original research, conducted his own post hoc effect size comparisons. He not only compared the clinical psychologist group with the general psychologist group; he also compared each tiered group with international effect size data. This is precisely what post hoc analysis using Cohen's d is for: comparing groups that were not originally set up to be compared, including groups drawn from entirely different studies.
Jorm in his critique confirms that there is NO evidence to suggest psychologists on the lower tier are below par in their capacity to treat. On the contrary, he suggests the evidence shows that psychologists on both the higher and lower tiers have equivalent outcomes that are of an international standard.
Pirkis et al4 have also rebutted the issues raised by Hickie et al1. In their conclusion they question the motivations of Hickie and others:
“We recognize that no matter how vociferously we defend our methodology, we will not convince Better Access naysayers, who think the money could be better spent elsewhere, that it achieves positive outcomes. In arguing for alternative programmes, these critics often adopt inconsistent ‘levels of evidence’ to justify the alternative expenditure. For example, Hickie et al. advocate for ATAPS and headspace but appear to apply different thresholds for evidence of effectiveness of these programmes. We have been evaluating ATAPS since 2003, using a very similar methodology to that of our Better Access evaluation but collecting data in a more routine, less controlled way. We have demonstrated that ATAPS achieves positive outcomes, and Hickie et al. describe it as ‘outcomes-enhancing’ and ‘evidence-based’. Presumably they consider our ATAPS evaluation sufficiently robust to say this, despite calling our Better Access evaluation ‘sub-standard’. An independent evaluation of headspace was conducted early in the piece, but it provided limited assessment of outcomes. Hickie et al. cite their own studies [17,18] as evidence of the ‘early successes of alternative primary care pathways … delivered through headspace’, but again these studies do not provide systematic evidence of outcomes. ATAPS and headspace may well be worthy contenders for a bigger share of the finite mental health dollar, but the evidence on which they are based should be subject to the same critical analysis as the evidence from our evaluation of Better Access.” (p.913).
Finally here are two crucially important quotes to note from the original evaluation done by Pirkis et al5:
- “Patients who received care from clinical psychologists and registered psychologists showed shifts from moderate or severe levels of depression, anxiety and stress to having normal or mild levels of these conditions (as assessed by the DASS-21). These outcomes are of a similar level of magnitude to those experienced by patients who receive care from psychologists through the Access to Allied Psychological Services component of the Better Outcomes in Mental Health Care programme, and to those experienced by patients who receive care through the virtual clinic operated by the Clinical Research Unit for Anxiety and Depression (CRUfAD) [Andrews G: personal communication]. They also correspond with the sorts of effects seen by major primary mental health care programmes overseas, such as the Improving Access to Psychological Therapies (IAPT) initiative in the UK.” (p.737)
The above statement highlights that psychologists treating under BOTH tiers of Medicare Better Access obtain outcomes that are equivalent not only to each other but also to national and international benchmarks, as derived from comparisons with other treatment initiatives across Australia and overseas.
- “These patients’ mental health status improves markedly during the course of their care; their symptoms reduce and their psychological distress diminishes. These achievements should not be under-estimated.” (p.738).
To return to our opening question, “Are the two tiers necessary, where is the value?”
We have been unable to find any empirical evidence supporting a two-tier system on the basis that clinical psychologists produce better outcomes for their clients than all other psychologists. If we use treatment outcomes as our criterion for determining value, we must conclude that the two tiers are not necessary and that the top tier does not add value.
1 Hickie, I. B., Rosenberg, S., & Davenport, T. A. (2011). Australia’s Better Access Initiative: still awaiting serious evaluation? Australian and New Zealand Journal of Psychiatry, 45, 814-823.
2 Allen, N. B., & Jackson, H. J. (2011). What kind of evidence do we need for evidence-based mental health policy? The case of the Better Access Initiative. Australian and New Zealand Journal of Psychiatry, 45, 696-699.
3 Jorm, A. F. (2011). Australia’s Better Access initiative: Do the evaluation data support the critics? Australian and New Zealand Journal of Psychiatry, 45, 700-704.
4 Pirkis, J., Harris, M., Ftanou, M., & Williamson, M. (2011). Not letting the ideal be the enemy of the good: the case of the Better Access evaluation. Australian and New Zealand Journal of Psychiatry, 45, 911-914.
5 Pirkis, J., Ftanou, M., Williamson, M., Machlin, A., Spittal, M. J., Bassilios, B., & Harris, M. (2011). Australia’s better access initiative: An evaluation. Australian and New Zealand Journal of Psychiatry, 45, 726-739.