At the beginning of March, Transparency published its third evaluation of district court judges based on statistical reports. Several judges felt offended by the analysis and argued that the data alone could not provide a full picture of their work. Some have even insisted that the public has no right to perform such an analysis and have called for its withdrawal. We have dealt with the judges’ objections in detail and present this text, which explains why we think the evaluation makes sense, what the limits of the data are, and what the public can learn from it about the work of judges.
The evaluation of judges that we issued a couple of weeks ago has provoked a lot of reactions. There has been positive feedback, but also resolute rejections from some judges. The judges of Čadca District Court went the furthest, stating that it is illegal for the public to analyse the work of judges and demanding the analysis be immediately withdrawn.
We have been slightly surprised by these reactions, as this is already the third evaluation of this kind since 2014 (see the 2014 and 2018 editions). This time, based on previous discussions, we improved the assessment methodology by adding a new dimension – productivity – to also take into account the weight of individual agendas. Moreover, we have abandoned the practice of ranking judges, as it was pointed out in the past that the underlying public data does not adequately reflect various nuances. The current evaluation covers 713 judges from 54 district courts with sufficient existing data, dividing them into five performance categories based on a weighted comparison against median values for each indicator.
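To make the median-comparison mechanics concrete, the categorisation described above can be sketched in code. This is a minimal illustration under our own simplifying assumptions: the indicator names, sample values, and category cut-offs below are hypothetical, and the published methodology defines the actual weights and thresholds.

```python
from statistics import median

# Hypothetical indicator values for three judges (illustrative only).
judges = {
    "Judge A": {"quality": 0.81, "disposition_time": 140, "clearance": 1.12},
    "Judge B": {"quality": 0.55, "disposition_time": 210, "clearance": 0.95},
    "Judge C": {"quality": 0.69, "disposition_time": 180, "clearance": 1.03},
}

# For disposition time, lower is better; for the other indicators, higher is better.
LOWER_IS_BETTER = {"disposition_time"}

def categorise(judges):
    """Score each judge against the median of every indicator and map the
    summed score onto five performance labels (hypothetical cut-offs)."""
    indicators = next(iter(judges.values())).keys()
    medians = {i: median(j[i] for j in judges.values()) for i in indicators}
    labels = ["poor", "below average", "average", "above average", "excellent"]
    results = {}
    for name, vals in judges.items():
        score = 0
        for i in indicators:
            diff = vals[i] - medians[i]
            if i in LOWER_IS_BETTER:
                diff = -diff  # flip the sign where a lower value is better
            score += (diff > 0) - (diff < 0)  # +1 above median, -1 below, 0 at it
        # score lies in [-3, 3]; clip it into the five buckets
        results[name] = labels[min(4, max(0, score + 2))]
    return results
```

Running `categorise(judges)` on the sample above labels Judge A "excellent", Judge B "poor", and Judge C "average" – the same kind of five-way outcome the evaluation reports, though the real methodology weights the indicators rather than counting them equally.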
We have, however, taken a close look at reservations raised by the judges and, although we disagree with most of them, for the sake of constructive debate we will try to explain our approach and conclusions more clearly and comprehensibly for both the judges and the public.
How to understand the data?
To make the judges’ evaluation more understandable, the already published methodology is now supplemented by a document describing the calculation procedure in even greater detail, and each judge’s profile now includes a verbal description and a visualisation of the results, including a comparison with the rest of the evaluated judges. The profiles thus show how individual judges are doing in terms of the assessed indicators, and whether they are performing better or worse than other evaluated judges in Slovakia. An example of such a “judge profile” is shown in the figures below (verbal + visual comparison).
Quality
The “Quality” component consists of one indicator – the Number of decisions upheld by a higher court. In simple terms, this indicator tells us how often the court of appeal agrees with the decision of a particular judge, and thus whether we can expect the judge to rule correctly. Understandably, no judge has every appealed decision upheld by a higher court. However, judges differ considerably in their shares of upheld decisions.
Judge Marián Kurinec from Bratislava III District Court has the lowest rate (26%) of upheld decisions among all evaluated judges. This judge has even been rated as “unsatisfactory” in an official evaluation held under the Judges and Lay Judges Act. The highest rate of upheld decisions has been achieved by Tomáš Minárik from the court in Veľký Krtíš – up to 97% of his decisions in which there was an appeal have been upheld by the relevant higher court.
Judges have objected, among other things, that our analysis does not take into account cases in which there has been no appeal. We agree that it would be valuable to have this type of information, but at this point we are unable to obtain it. The appeal decisions published in the annual statistical reports we work with do not state which decisions of a particular judge they relate to, or the year those decisions were adopted. At the same time, we would miss cases where an appeal has been lodged but not yet decided by a higher court. This would make the data inaccurate, and we would not be able to understand its limits either.
Efficiency
In the “Efficiency” component, two indicators are assessed – Disposition time and Clearance rate. The first tells us approximately how long it takes the judge to adjudicate cases; the second tells us whether the judge is managing to shorten the duration of cases or not.
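Both indicators follow the standard definitions used in European court statistics (the CEPEJ methodology): clearance rate compares resolved cases to incoming ones, and disposition time estimates how long the current caseload would take to clear at the current pace. A minimal sketch, with illustrative figures of our own choosing:

```python
def clearance_rate(resolved: int, incoming: int) -> float:
    """Resolved cases as a share of incoming cases (CEPEJ-style, in %).
    Above 100% means the backlog is shrinking."""
    return 100 * resolved / incoming

def disposition_time(pending: int, resolved: int) -> float:
    """Estimated number of days needed to clear the currently pending
    caseload at the current yearly pace of resolution (CEPEJ-style)."""
    return 365 * pending / resolved

# A hypothetical judge resolving 540 cases a year, with 500 incoming
# and 300 pending:
#   clearance_rate(540, 500)   -> 108.0  (backlog shrinking)
#   disposition_time(300, 540) -> about 203 days
```

Note how the two indicators complement each other: a judge with no backlog cannot push the clearance rate far above 100%, but the same judge's disposition time will be very short.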
As for the reservations voiced by judges, it should be again stressed with these indicators that the performance of judges largely depends on the environment in which the judge works.
If a court has long-standing problems with a large backlog of cases, the judge will simply need more time to resolve them compared to a court that does not face such problems. Nonetheless, in our view, this is an important piece of information for party litigants. Whatever the reasons for the “slowness” (e.g. a lack of appropriate working arrangements at a particular court or a large number of cases assigned to the judge after a departing colleague), the fact is that the parties to a dispute simply wait longer for a decision from that particular judge.
The situation is similar with the Clearance rate, i.e. the ratio of adjudicated cases to cases assigned to the judge during the evaluated period. If a court simply receives more cases than other courts, it can skew the judges’ evaluation. On the other hand, the number of cases brought to Slovak courts has been stable in recent years and is rather decreasing in the longer term. There is no reason to assume that some judges are systematically disadvantaged in this indicator. And if some are – e.g. because they manage to resolve cases as they come in – this should be compensated for within the “Efficiency” component, because a low Clearance rate of this kind goes hand in hand with a very good Disposition time.
Productivity
In the “Productivity” component, we look at two indicators – the Judge’s weighted product and the Proportion of cases pending more than one year. The first indicator shows how many cases a judge resolves, taking into account their relative complexity. The second indicator reflects the judge’s ability to manage his or her cases, which is also related to how many outstanding cases await the judge’s decision in the long term.
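The idea behind a weighted product is simple: instead of counting every resolved case as one, each case is weighted by the typical complexity of the court register it belongs to. A minimal sketch – the register codes and weight values below are purely illustrative, not the actual approximation used in the methodology:

```python
# Hypothetical complexity weights per court register (illustrative values only).
REGISTER_WEIGHTS = {"C": 1.0, "Cb": 1.4, "P": 0.8, "D": 0.3}

def weighted_product(resolved_by_register: dict) -> float:
    """Sum of resolved cases, each weighted by its register's complexity."""
    return sum(REGISTER_WEIGHTS[r] * n for r, n in resolved_by_register.items())

# A judge resolving 120 civil (C), 30 commercial (Cb), 50 family (P) and
# 200 succession (D) cases would get a weighted product of 262.0:
#   120*1.0 + 30*1.4 + 50*0.8 + 200*0.3 = 262.0
```

Under such weights, a judge closing many routine succession cases is not automatically rated as more productive than a colleague closing fewer but more demanding commercial disputes.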
Judges have criticised us for not considering the difficulty of individual cases. We do indeed consider it, albeit with certain limits. At the same time, the judges themselves are not able to take the complexity of individual cases into account either. We work with the best available approximation of case complexity across the various court registers, which is also used by the Ministry of Justice.
Judges say that we need to wait for the case-weighting analysis that the Ministry is preparing, but even that will not be able to take into account the complexity of each individual case; it will instead bundle cases into understandable packages. The usual rule of thumb is that, for example, regional courts on average deal with more difficult cases than small district courts. But there is no data to support this assertion, and the judges do not have such data either.
How can judges be compared?
For better clarity, the profiles of judges also include pie charts that make it easier to understand how each judge fares in comparison to his or her colleagues across Slovakia. The examples of charts below illustrate that Judge A performs better than Judge B. This is particularly true for “Quality” (blue section) and “Productivity” (orange sections), but in terms of “Efficiency” (grey sections) Judge A is slightly behind Judge B.
Judge B is the same judge whose verbal assessment can be found in the figure at the beginning of this text. This verbal description again shows that the judge has achieved average results in the components “Productivity” and “Efficiency” and is below average in “Quality”. The overall rating for such a judge is thus “below average”. Judge A has scored above average in two components and average in one component – thus reaching the overall rating “excellent”.
Three pieces of good news about the Slovak judiciary
Some judges have also expressed their views that our evaluation is jeopardising the credibility of the judiciary. To tell the truth, such claims have probably surprised us the most. Our evaluation of judges is one of the few activities that also names the good points in the Slovak judiciary and identifies examples that are worth drawing inspiration from. We perceive that judges are oversensitive about making any information about their work public. Perhaps because the media is far more likely to point to the failures of the courts than to highlight what is well done or what works well. If it is true that even activities like ours erode the credibility of the judiciary, then is anyone allowed to write anything about the judiciary at all?
But we can indeed agree at this point that we could have put more emphasis on some of our findings that point to positive aspects. So let us give it a second try.
In our evaluation, 109 judges came out as “excellent”, which is 15% of the total number of evaluated judges. A “poor” rating was assigned to just under 9% of judges. On the whole, as many as 44% of judges scored above average overall, and nearly 72% of judges scored at least average. Both the fact that, for every indicator, each judge was compared to a central value (the median) and the eventual results suggest that a large proportion of judges are performing very well across the assessed components. Fewer than a tenth of judges show poor results when their performance is viewed in quantitative terms.
It is also worth highlighting that the median clearance rate in Slovak courts is almost 108%. This suggests that the Slovak judiciary, as a whole, is speeding up. If the courts can adjudicate almost 10% more cases than they receive each year, that is great news. There are surely several reasons why this is so. The fact that the number of incoming cases, as already mentioned above, has been stable for a long time and is rather declining certainly plays a big role. It also suggests that the majority of the judiciary is well set up to speed up proceedings, meaning there are enough judges, and they are appropriately deployed within the existing court structure.
Last but not least, the rate of upheld decisions is also moving in the right direction across Slovakia. When we published the first evaluation of judges in 2014, the rate of decisions that are upheld by higher courts in Slovakia was less than 60%. Today, the average among judges is 69%. This means that situations when a decision by a district court does not meet the standards set by regional courts are far less frequent. And it also means that party litigants get final decisions more quickly than in the past, because decisions are less frequently brought back to the district court following the ruling of an appellate body.
Recommendations for better results
However, the analysis also points to systemic problems. These do not only relate to the lack of enabling arrangements at some courts, but also, for example, to the recording of data. After the evaluation had been published, several judges contacted us, complaining about inaccuracy of data recorded in statistical reports. The problem is not only about individual errors, but also about diverging methods of record keeping, for example, for succession cases, which are reported in different ways by different courts.
Thus, to allow for more beneficial use of the data, all courts should receive clear guidance to ensure that there is no variation in record-keeping practices between districts. It would also be beneficial if more specific data were available (e.g. on the real length of proceedings).
The judges’ reservations have repeatedly argued that judges cannot be evaluated using quantitative indicators alone. We have never insisted that judges should be evaluated only on the basis of quantitative data. However, we remain convinced that the Slovak judiciary can also benefit from the practice of evaluating judges using quantitative data.
The current law provides for qualitative evaluation of judges. A cursory glance at the published results of the evaluations shows that this system is not working ideally. These evaluations include 1,839 “excellent”, 39 “good” and only 2 “unsatisfactory” ratings.
The ratio of judges who have been labelled “excellent” and “unsatisfactory” by the official evaluation committee is currently 1,839 to 2. Source – Ministry of Justice
We have all experienced school classes or work teams, and everybody knows that no team is uniformly flawless. That is just not how the world works. The results rather suggest an unwillingness to point out, at least publicly, the observed deficiencies, or they point to extremely low standards applied to judges in the evaluation process.
It is like the participation awards we see in marathons. However, those awards go to people who run in their free time; it is not their job, let alone a job paid from public funds. Evaluation, when done in a meaningful way, is a space for reflection and self-reflection for all involved. Meaningful assessment is not a tool to punish or bully someone, but a tool to help us improve. That is why it is done in the private sector and in academia, for example, and there is no reason for the judiciary to be an exception.
In an ideal world, the evaluation of judges would consist of at least three parts:
⦁ One would be a qualitative assessment, looking at the files, reading and assessing the judges’ decisions, as currently provided for by the law.
⦁ Another part would be a quantitative assessment of the judges’ work, which allows us to see judges in a comparative perspective and identify those who differ significantly from their colleagues.
⦁ And the third part would be an assessment of the judges’ performance from the point of view of the parties to proceedings or lawyers and prosecutors, who can assess the judges’ performance and perceived persuasiveness.
Because no matter how good a judge may be, if litigants distrust his or her decisions or leave the courtroom somehow offended by the judge’s conduct, such a judge will not improve the public’s perception of the judiciary much.
We are convinced that while Slovakia lacks the data and conditions for such a comprehensive and objective assessment, our quantitative analysis does provide some valuable feedback to the judiciary, despite all the aforementioned limitations. Moreover, it is a legitimate role of watchdog organisations like Transparency to work with data and to provide meaningful information to the public based on that data.
And that is exactly another key point of our evaluation. To give the public an idea of how long they can reasonably expect their proceedings to take and “how likely” it is to have the ruling upheld by a higher court if their case ends up on the desk of a particular judge. What matters to people is how quickly and how well their case is decided. And as the Constitutional Court has repeatedly pointed out, for people it is irrelevant if potential delays or errors at courts are partly due to circumstances such as insufficient technical equipment, sick leaves or too much work.
Samuel Spáč, author of the methodology and Chairman of the Transparency Board of Directors
Support the monitoring of Slovak courts and the evaluation of judges by donating 2% of your tax: https://transparency.sk/2percenta.
Thank you!