Appraise

Site: Learn: free, high quality, CPD for the veterinary profession
Course: EBVM Learning
Book: Appraise

1. Introduction


Appraising is the next step in the EBVM cycle, where you evaluate the quality of the study you are reading and its relevance to the question you have asked (or want to answer).

By the end of this section you will be able to:

  • describe the most important factors that should be appraised when you read a paper
  • explain how to appraise literature
  • use tools that support the appraisal process.

2. Why appraise?

Scientific literature is extremely important, but not always entirely valid.

You may have heard the common phrase ‘Buyer, beware!’, but do we think this way about veterinary information? We should, particularly when it comes to the literature used to make evidence-based decisions about our patients.

Don’t believe everything you read!

Some projects assessing the quality of published literature in different fields of veterinary medicine have revealed substantial deficits in reported studies, even those in reputable peer-reviewed journals (Cockcroft, 2007; Kastelic, 2006; Simoneit et al., 2011). 

You should keep this in mind when reading a paper, because you may find that conclusions formulated by authors are based on scientifically weak, if not invalid, data. Other papers may report information generated using inappropriate study designs (see Determine the level of evidence later in this section) which therefore result in questionable conclusions.

 Questions to ponder: 

What is the actual quality of the paper I am reading? Is it good enough to be able to incorporate the information into my clinical work?

Papers differ considerably in both the relevance of information to real, practical scenarios and the validity of presented data or results (Glasziou et al., 1998; Dean, 2013). Even studies published in prestigious journals may have elements of bias, or be unreliable because of flaws in the design or conduct of the study. Study limitations are often described as part of the discussion section of a paper to aid interpretation of results, but this is not always the case. These same limitations apply to other information obtained, for example, via expert presentations, drug company leaflets, internet sources, etc. When appraising other information sources, it is important to be equally critical. Consider the origin of the information: who wrote it, and why?

Every practitioner aims to provide the best patient care, aware of the importance of using diagnostic procedures and therapeutic interventions that are the most effective and that have an optimal risk:benefit ratio. In addition, as a practitioner, you of course want to be able to provide an owner with accurate information regarding the prognosis for their animal, and to take into consideration established risk factors for certain conditions in your diagnostic work-up.

In order to help you do these things in the best way possible using EBVM, this section will highlight the skills needed to appraise the quality of information available.

3. How to appraise

As a starting point, here are some tips about reading a paper. These provide a useful ‘checklist’ to review your current approach, or to help you get started, e.g. when establishing or joining a practice journal club.

3.1 How to read a paper

The abstract and the title of the paper should provide you with an indication of what the paper is about. However, they do not always accurately reflect the content of the paper.

Most papers which you will be reading in the veterinary literature are based on the IMRaD method of reporting: Introduction, Methods, Results and Discussion.

Introduction

The introduction provides a brief review of the existing literature and explains why the author thinks their research is important.

The research question, or the aim of the research, should be clearly stated within the last paragraph.

How does this help me appraise?

You can assess whether the study answers the question which the author set out to answer, or whether the author answers something else entirely!

Methods

The methods section describes the study design and how the study was carried out, providing sufficient detail that the study could be repeated. This is the most important section to focus on during your appraisal. Ensure that the outcomes being measured are clinically useful to you in your practice; if they are not, do not be afraid to discard the paper.

How does this help me appraise?

You can decide if the study design is appropriate to the research question.

You can work through an appraisal toolkit to identify aspects of the study design. For example, was the study cohort representative of a defined population?

Results

The results section is a clearly presented and concise description of the key results found in the study. There should not be any author opinions or interpretation; it should be completely unbiased.

How does this help me appraise?

You can work through an appraisal toolkit to identify how the study was carried out. For example, you should find a description of what happened to any animals removed from the study and why.

Discussion

The author(s) review the study findings, considering the existing literature and write an account of what they think the results mean. Limitations of the study design should be included here.

How does this help me appraise?

Consider the authors’ views but remember you should form your own opinion on the study outcomes based on the introduction, methods and results.

For more information about reading scientific papers, see Greenhalgh (2019) and the BMJ’s ‘How to read a paper’ resource series, both listed in the References.

3.2 Which papers will answer your clinical question?

Why am I reading this paper? Is it relevant to the clinical question I am interested in?

Think back to Ask, where you structured your clinical question as a PICO, or (S)PICO. Using a (S)PICO helps you to decide whether a paper is relevant to your clinical question.

Here is an example using (S)PICO i.e. including species:

In [cats with naturally occurring chronic kidney disease] does [a renal prescription diet compared to normal diet] increase the [survival time] of affected cats?


The literature that you need to read in order to answer your clinical question should relate to cats, kidney disease and diets.

The research question being addressed by the paper is usually found in the last paragraph of the introduction.
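A (S)PICO can also be thought of as a small, structured record. As a rough illustration (the field names and keyword list below are our own, not part of any standard tool), the example above can be written down as named parts and used as a crude first-pass relevance screen on a title or abstract:

```python
# Illustrative sketch only: the field names and keywords are invented
# for this example, not part of any standard EBVM tool.
spico = {
    "species":      "cats",
    "population":   "naturally occurring chronic kidney disease",
    "intervention": "renal prescription diet",
    "comparator":   "normal diet",
    "outcome":      "survival time",
}

def looks_relevant(title_or_abstract, keywords=("cats", "kidney", "diet")):
    """Crude first-pass screen: does the text mention every key concept?"""
    text = title_or_abstract.lower()
    return all(k in text for k in keywords)

looks_relevant("Effect of a renal prescription diet on survival time "
               "in cats with chronic kidney disease")                      # relevant
looks_relevant("Glucosamine supplementation in dogs with osteoarthritis")  # not relevant
```

A keyword match like this only tells you a paper is worth a closer look; the appraisal steps that follow decide whether it actually answers your question.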

3.3 The three steps of appraisal

Once you have decided that a paper will potentially answer your clinical question, there are three steps used to evaluate whether it will provide useful evidence.

Step 1: Determine the level of evidence within the paper

Step 2: Appraise the quality of the study

Step 3: Your conclusion – is the paper of sufficient quality?

With a little practice this shouldn’t take long and will ensure that the information you gain from reading and appraising scientific literature is of sufficient quality to Apply in your clinical practice.


TIP: Read through the sections below and then try working through the steps with a sample study and a blank criteria checklist (see Critical appraisal and appraisal toolkits later in this section). Challenge a colleague to do the same and compare your findings. 

And remember – practice makes perfect!

4. Step 1: Determine the level of evidence

There are two aspects that you need to consider in order to determine the level of evidence of your paper.

  • What is the study type (or design)?
  • Is the study design appropriate to answer my clinical question?

We will go through each of these aspects in turn.

4.1 What is the study type (or design)?

When reading a paper, it is important to determine what type of study was conducted so that you can establish whether the study type is appropriate to help answer your question.

This is important because different study types are more (or less) appropriate to answer different question types. This will be covered in more detail later on, in Is the study design appropriate to answer your question?

In order to decide on the study type, you will need to look at the methods section of the paper. The author may state which study type was used, but careful reading sometimes reveals otherwise.

A brief description of the common study types is outlined under 'Study types and descriptions' below (adapted from Dean, 2013). There is also information on identifying study types in the RCVS Knowledge EBVM Toolkit 4 – What type of study is it?

Study types and descriptions

Adapted from Dean (2013)

Evidence syntheses: Studies that summarise evidence

A systematic review is a defined and rigorous method of appraising, collating and summarising the information from published papers addressing a specific question. The methods used to search the literature, assess the quality, and make conclusions are explicitly stated in the methods section.

A meta-analysis is a quantitative statistical analysis (generally) conducted as part of a systematic review. The results of different clinical trials relating to a specific question are statistically analysed and summarised. By combining the data, a meta-analysis provides more robust evidence than each individual study is able to on its own.
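To see why combining data gives more robust evidence, here is a minimal sketch (with made-up study results) of the inverse-variance fixed-effect pooling that underlies many meta-analyses. The pooled estimate's standard error comes out smaller than that of any individual study:

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of study estimates
    (e.g. log odds ratios). Returns the pooled estimate and its SE."""
    weights = [1.0 / se ** 2 for se in std_errors]       # precise studies count more
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical trials reporting log odds ratios with their SEs:
est, se = fixed_effect_pool([0.40, 0.25, 0.55], [0.20, 0.30, 0.25])
# se is smaller than any single study's SE, which is why a meta-analysis
# provides more robust evidence than each study on its own.
```

Real meta-analyses also check how consistent the studies are with each other (heterogeneity) before pooling; this sketch shows only the combining step.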

An evidence summary (also referred to as a Knowledge Summary, critically appraised topic (CAT), research synthesis or BestBET) is a standardised summary of research evidence based on a clinical question generated from a specific patient situation or problem, producing a clinical conclusion, or summary.


Intervention (experimental) studies: The researcher designs an intervention (e.g. treatment, drug therapy, surgical method, etc.)

A randomised controlled trial is an intervention study used to assess treatments or other interventions. Study subjects are randomly allocated to either the intervention group or a control group (which receives either no treatment, a placebo, the current best treatment or a comparator). As allocation of subjects is performed randomly, all other characteristics of the population should be equally distributed across the groups, thus decreasing bias. Therefore, evidence of a cause–effect relationship is more credible in these types of studies. Ideally, the study should be ‘blinded’, so that anyone involved with assessing study outcomes does not know which treatment each animal received, in order to limit conscious or unconscious bias.
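The random-allocation step can be sketched in a few lines (illustrative only; real trials use pre-specified randomisation protocols, often with stratification):

```python
import random

def randomise(animal_ids, seed=42):
    """Randomly allocate subjects to an intervention or a control group.
    Because allocation is random, other characteristics of the population
    should end up evenly distributed across the two groups.
    The fixed seed is only so this example is reproducible."""
    ids = list(animal_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"intervention": ids[:half], "control": ids[half:]}

groups = randomise(range(1, 21))   # 20 animals -> two groups of 10
```

Blinding is a separate safeguard layered on top of this: the people assessing outcomes should not be able to see which group each animal landed in.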


Observational studies: The researcher has no influence on which animals get the intervention; they only 'observe'

A cohort study is an observational study where exposed and unexposed groups (cohorts) are followed over a defined period of time and occurrence of the outcome of interest (e.g. disease) is measured. Cohort studies can identify risk factors associated with the outcome and estimate incidence.

A case-control study is a retrospective study (occasionally prospective) comparing animals with the disease (cases) and without the disease (controls) of interest. The animals’ histories are examined to identify risk factors for the disease.

A cross-sectional study looks at a sample of the population at a single point in time, most commonly to determine the prevalence of a certain disease.

A before-and-after study compares a group of animals before and after an event, or intervention. The effect of the event, or intervention, can then be identified by comparing the data sets.

A diagnostic test validation study is used to establish the usefulness of new diagnostic tests. Animals are tested using the new diagnostic test and the current gold standard to establish the sensitivity, specificity and likelihood ratios for the new diagnostic test.
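As a worked illustration (the counts are invented), these metrics come straight from the 2x2 table of new-test results against gold-standard results:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table
    comparing a new diagnostic test against the gold standard."""
    sens = tp / (tp + fn)        # proportion of diseased animals that test positive
    spec = tn / (tn + fp)        # proportion of disease-free animals that test negative
    lr_pos = sens / (1 - spec)   # how much a positive result raises the odds of disease
    lr_neg = (1 - sens) / spec   # how much a negative result lowers them
    return sens, spec, lr_pos, lr_neg

# Hypothetical validation study: 90 true positives, 10 false negatives,
# 5 false positives, 95 true negatives
sens, spec, lrp, lrn = diagnostic_metrics(tp=90, fp=5, fn=10, tn=95)
# sens = 0.90, spec = 0.95, LR+ = 18.0, LR- ≈ 0.11
```

A likelihood ratio well above 1 for a positive result (here 18) and well below 1 for a negative result (here about 0.11) is what makes a test clinically useful.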


Descriptive (non-comparative) studies: Description of what is happening – case findings, report a rare occurrence, etc.

Descriptive studies cannot be used to measure risk, causation, treatment effects or prevalence of a disease.

A case series is a description of the presentation, diagnosis, treatment and/or outcome of a group of animals with the same disease. There are no disease-free animals for comparison, and any differences in management are not randomly allocated (for example, they may be due to the owners’ preferences or different protocols between centres).

A case report is a description of a single case (or small number of cases).

Expert opinion can be one individual’s opinion or part of an elicitation process based on a panel of experts used to answer a question of interest. Expert opinion may provide some evidence where no information is available (e.g. new treatment efficacy or application to a new population). However these will almost always include some form of bias, for example, the selection of the evidence included.

4.2 Is the study design appropriate to answer your question?

As we learnt in Ask, there are several question types we can pose. These questions can, in turn, be answered by a number of different study types.

The table below shows which study types provide the most robust evidence for different question types.

Table 9: Study types and levels of evidence

This table has been adapted and simplified from the Oxford Centre for Evidence-based Medicine Levels of Evidence table (2009). Find out more about the Oxford Centre for Evidence-based Medicine's Levels of Evidence tables.

Remember: For all question types, meta-analysis and systematic reviews are usually more robust than individual studies.

In your clinical decision-making, you should rely on the most robust evidence available; you need to determine the level of evidence a paper provides in answering your clinical question. You may also need to accept that the 'best available' evidence may be lower down in this table than you might prefer; there may only be a few individual case reports rather than a systematic review. But take heart – some evidence is better than none!

Remember, determining the level of evidence is only the first step in appraising your paper. Within each ‘level of evidence’, further appraisal of the study methods and reporting may reveal that the evidence is not as robust as you first thought… Conversely, a paper which sits lower on the ‘level of evidence’ table may provide more robust evidence.

Table 10: Examples of the most robust study types for different types of clinical questions

Treatment

Example question: In [dogs with osteoarthritis], does [supplementation with glucosamine and chondroitin] compared to [no supplementation] [reduce lameness]?

Best study type*: Randomised controlled trial, cohort study

Prognosis and Incidence

Example question: In [flat-coated retrievers with cutaneous lymphoma], does [being a male] compared with [being a female] affect [average life expectancy]?

Best study type*: Cohort study

Aetiology and Risk

Example question: In [ferrets], is [general anaesthesia by triple injectable agent] compared with [general anaesthesia by induction and inhalational agent] associated with [an increased risk of death]?

Best study type*: Cohort study, case-control study, cross-sectional study

Diagnosis

Example question: In [lactating dairy cattle] does [milk ELISA] compared with [serum ELISA] have [a better sensitivity and specificity for diagnosing fascioliasis]?

Best study type*: Diagnostic test validation study

Prevalence

Example question: In [adult racehorses] what is the [prevalence of laryngeal neuropathy] in winter?

Best study type*: Cross-sectional study

*For all question types, meta-analysis and systematic reviews are more robust than individual studies.

5. Step 2: Appraise the quality of the study

The level of evidence (Step 1) is a good indicator of how bias-prone the study design is likely to be. 

In most instances, however, there is overlap between the quality of papers in the different levels of evidence. For example, when assessing a treatment, a well-designed cohort study may provide better evidence than a poorly designed randomised controlled trial.

From a practical point of view, when reading papers, you should focus on the major issues that determine the quality of information and decide whether you agree or disagree with aspects of the study e.g. design, information content, objectivity, overall validity and the conclusions. But first, let’s think about the statistics that might be in the paper.

5.1 What about statistics?

You do not need to check, or be familiar with, every statistical procedure. 

However, being aware of the key issues relevant to each specific study design is helpful.

Statistical significance doesn’t necessarily equal biological significance.
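A hypothetical example makes the point: with enough animals, even a clinically trivial difference becomes ‘statistically significant’. The sketch below uses the normal approximation for comparing two proportions (invented numbers; not a substitute for proper statistical software):

```python
import math

def two_prop_p_value(x1, n1, x2, n2):
    """Two-sided p-value for a difference between two proportions
    (normal approximation; an illustrative sketch only)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# 5.0% vs 5.1% response: a biologically trivial difference, but with a
# million animals per group it is 'statistically significant' (p < 0.05)...
two_prop_p_value(50_000, 1_000_000, 51_000, 1_000_000)
# ...while the same proportions in groups of 1,000 are not (p > 0.05).
two_prop_p_value(50, 1_000, 51, 1_000)
```

This is why you should always ask whether the size of an effect matters clinically, not just whether the p-value crosses 0.05.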

Even if some issues around statistics are unclear, you will get a good impression of the overall quality of the paper after assessment of the other quality criteria. Most research in this field has shown that the major flaws are usually related to study design and reporting, rather than statistics.

You do not need to be a research scientist or a statistician to appraise the literature!



Statistics top tips

  • Are details of statistical methods included?
  • Look for a sample size calculation in the methods section. This should state how many animals will need to be studied in order to observe a statistical difference. More animals will be required if the difference between groups being studied is expected to be small.
  • The significance threshold can be set at any level, but standard practice is to use 0.05. The probability value (p-value) indicates how likely it is that a result at least as extreme as the one observed would occur by chance alone; a p<0.05 suggests the finding is unlikely to be due to chance.
  • In the results section, does the author use the correct number of animals and the same p-value for significance that were outlined in the methods section?
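The sample size point above can be illustrated with a standard approximate formula for comparing two proportions. This is a sketch using the conventional values z = 1.96 (two-sided alpha = 0.05) and z = 0.84 (80% power); a statistician or dedicated software should be used when planning a real study:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_power=0.84):
    """Approximate number of animals needed per group to detect a
    difference between two expected proportions p1 and p2
    (two-sided alpha = 0.05, 80% power). Illustrative sketch only."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

n_per_group(0.50, 0.80)   # large expected difference: about 36 per group
n_per_group(0.50, 0.55)   # small expected difference: over 1,500 per group
```

This shows concretely why a study expecting only a small difference between groups needs far more animals to be adequately powered.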

5.2 Critical appraisal and appraisal toolkits

The appraisal needs to address aspects of the study design such as sample size, enrolment and exclusion criteria, case definition, allocation, blinding, statistical methods and objectivity in the discussion of the results.

Appraising papers takes practice; the best way to do this is to find a paper that is relevant to your question, determine its level of evidence and then work through a critical appraisal toolkit for that study type.

Below are links to various critical appraisal toolkits; these provide a checklist to work through when reading a paper and appraising a specific study type. Toolkits are typically organised by study design, though some are organised by question type. Try a few and see which ones work for you.

  • RCVS Knowledge toolkits
    • Controlled trial, cross-sectional study, case-control, cohort, systematic review, qualitative study

5.3 Other sources of bias

What else could be a source of bias?

In addition to the design and quality of the study, there are a few other considerations when deciding on the reliability of a paper.

Reporting issues

As part of assessing the quality of a paper, the reader needs to evaluate how the author has reported the methods and results of their study. Careful appraisal may leave you uncertain whether basic concepts of the study design were overlooked when planning and conducting the study, or were duly considered but simply poorly reported.

If you look through the literature, you will find articles that are biased: by poor reporting of crucial information e.g. age and medical history of the enrolled animals; by inappropriate definitions or diagnoses of diseases; or by a lack of (or inappropriate) control groups (Dean, 2013).

Poor reporting

Poor reporting reduces the transparency of research and limits the reader’s ability to critically appraise information because information that has not been included cannot possibly be appraised! The descriptions included in the paper should allow the reader to repeat the study in order to attempt to obtain an independent result. Examples of important deficits that may be found in veterinary literature are: missing information on the type of animals used in the study and how they were allocated; unclear description of diagnostic methods; and inappropriate documentation of treatments and outcome measurements.


TIP: The use of critical appraisal toolkits (covered earlier in 5.2) can assist the reader to appraise the study’s reporting methods.


Ultimately, if certain information is not given in a paper, you should regard this as not having been considered in the study design or study implementation. It is better to be safe than to be sorry when appraising literature that could inform important decisions you make about your patients!

Reporting guidelines

Reporting guidelines (for example STROBE-VET ) exist to guide authors and publishers of journals to ensure that papers are written with sufficient transparency and clarity. However, not all veterinary journals refer to reporting guidelines (Grindlay et al., 2014 reported the figure to be as low as 35%). It is important to note that reporting guidelines are different from critical appraisal toolkits, which assist readers to determine whether the evidence presented within a published paper is of good quality.

Peer review

Peer review has been the quality control process for scientific publishing for many years, purportedly ensuring that information is checked and verified by subject experts before it is formally published. This saves the reader time; the onus is not solely on the reader to conduct a fundamental analysis of the quality, accuracy and validity of the content. Peer-reviewed publications from the scientific and veterinary communities are key sources of information for EBVM practitioners.

However, some limitations and possible biases of peer review have been identified (Benos et al., 2007). For example, it has been demonstrated that gender and affiliation of the authors had an impact on the review outcomes. It is important to remember that peer review is not perfect, and published peer-reviewed studies vary in quality. However, studies show that manuscripts improve considerably after the peer-review process (Goodman et al., 1994; Benos et al., 2007).

Publication bias

Traditionally, peer-reviewed scientific journals and the bibliographic databases that index them have been considered the best sources of evidence, but research into publication bias (Glanville et al., 2015) suggests that there is a need to go beyond these sources, because a significant proportion of research will not be published in peer-reviewed journals.

Publication bias occurs when researchers, or journal editors, decide to publish studies with ‘positive’ or statistically significant results (for example, showing that a treatment has a beneficial effect) but do not publish those with no ‘significant’ results (for example, when a treatment had no beneficial effect), despite it being a well-designed study. If this happens, analysis of the published results will not provide an accurate representation of current evidence.

This publication bias is perhaps particularly relevant in the field of clinical veterinary medicine, where practitioners may not be publishing their work as peer-reviewed articles, and much of the scientific data may be hidden in the so-called ‘grey’ literature (e.g. conference papers), or in practice records and case reports. For more information about finding this ‘grey’ literature, see Acquire: What sources of evidence are there?

Sponsorship bias

Finally, check who funded the study. If it is, for example, a pharmaceutical company, the study may suffer from sponsorship bias which may lead to poor reporting e.g. not all results may be presented (Wareham et al., 2017). In general, not every sponsored project provides biased data, but you should carefully consider the quality criteria if you think that the study sponsor may have a vested interest in what and how the results are reported.

Predatory journals

As mentioned in Acquire: Internet search tools, disreputable online publishers, sometimes referred to as 'predatory', have emerged in recent years. They exploit the open access model of publishing, where the author pays a fee (an Article Publishing Charge or 'APC'). The disreputable publisher takes the money but fails to follow through with the peer-review and editing process that is the standard expected from a reputable scientific journal. This has led to a proliferation of freely available poor-quality research; although these journals would not be indexed by databases such as MEDLINE, their articles will be found by Google searches.

6. Step 3: Your conclusion: Is the paper of sufficient quality?

The critical appraisal conducted in Step 2 should help you decide whether the conclusions drawn from the study are valid. 

You may agree with the conclusions stated by the authors, or you may disagree with all or part of their conclusions, and may have drawn your own valid conclusions.

A poor overall evaluation does not inevitably mean that the information is completely wrong or useless, but it indicates that the risk of bias is quite high. Therefore, you should be cautious when considering implementing the findings from papers in clinical practice, especially those of questionable quality.

Quality evidence is only useful if it is relevant to your clinical question. If you are unsure whether the evidence you have found is relevant, read 'How relevant is the evidence?' in the next section Apply.

If you feel the paper is not of sufficient quality, or relevance to support your clinical decision-making, do not be afraid to discard it!

If you feel the paper does provide some valid and relevant evidence, you can move on to the next step and determine whether and how you can Apply this evidence to your decision-making process. This is a great outcome of using EBVM!

7. Quiz

8. Summary

Learning outcomes

You should now be more familiar with how to:

  • describe the most important factors that should be appraised when you read a paper
  • explain how to appraise literature (and other information)
  • use tools that support the appraisal process.

9. References

Benos, D. J. et al. (2007) The ups and downs of peer review. Advances in Physiology Education, 31 (2), pp. 145-152

Cockcroft, P. D. (2007) Clinical reasoning and decision analysis. Veterinary Clinics of North America: Small Animal Practice, 37 (3), pp. 499-520

Dean, R. (2013) How to read a paper and appraise the evidence. In Practice, 35 (5), pp. 282-285

EBVM Toolkit 4 - what type of study is it? [RCVS Knowledge] [online] Available from: https://knowledge.rcvs.org.uk/document-library/ebvm-toolkit-4-what-type-of-study-is-it/  [Accessed 19 November 2020]

Glanville, J. et al. (2015) A review of the systematic review process and its applicability for use in evaluating evidence for health claims on probiotic foods in the European Union. Nutrition Journal, 14:16

Glasziou, P. et al. (1998) Applying the results of trials and systematic reviews to individual patients. ACP Journal Club, 129 (3), pp. A156

Goodman, S. N. et al. (1994) Manuscript quality before and after peer review and editing at Annals of Internal Medicine. Annals of Internal Medicine, 121 (1), pp. 11-21

Greenhalgh, T. (2019) How to read a paper: the basics of evidence-based medicine and healthcare. 6th Ed. Hoboken, NJ: John Wiley & Sons

Grindlay, D. J. et al. (2014) A survey of the awareness, knowledge, policies and views of veterinary journal Editors-in-Chief on reporting guidelines for publication of research. BMC Veterinary Research, 10:10. Available from: http://bmcvetres.biomedcentral.com/articles/10.1186/1746-6148-10-10

Guidance on scientific writing [EQUATOR Network] [online] Available from: https://www.equator-network.org/library/guidance-on-scientific-writing/ [Accessed 19 November 2020]

Holmes, M. A. and Ramey, D. W. (2007) An introduction to evidence-based veterinary medicine. Veterinary Clinics of North America: Equine Practice, 23 (2), pp. 191-200

Holmes, M. A. (2007) Evaluation of the evidence. Veterinary Clinics of North America: Small Animal Practice, 37 (3), pp. 447-462

How to read a paper: a list of resources explaining how to read and interpret different kinds of papers [BMJ] [online] Available from: https://www.bmj.com/about-bmj/resources-readers/publications/how-read-paper/  [Accessed 19 November 2020]

Kastelic, J. P. (2006) Critical evaluation of scientific articles and other sources of information: an introduction to evidence-based veterinary medicine. Theriogenology, 66 (3), pp. 534-542

Oxford Centre for Evidence-Based Medicine: levels of evidence (March 2009) [CEBM] [online] Available from: https://www.cebm.ox.ac.uk/resources/levels-of-evidence/oxford-centre-for-evidence-based-medicine-levels-of-evidence-march-2009  [Accessed 19 November 2020]

Sargeant, J. M. et al. (2010) The REFLECT Statement: Reporting guidelines for randomized controlled trials in livestock and food safety: explanation and elaboration. Journal of Food Protection, 73 (3), pp. 579-603

Simoneit, C., Heuwieser, W. and Arlt, S. (2011) Evidence-based medicine in bovine, equine and canine reproduction: quality of current literature. Theriogenology, 76 (6), pp. 1042-1050

Straus, S. E. and Sackett, D. L. (1998) Using research findings in clinical practice. BMJ, 317:339

Wareham, K.J., et al. (2017) Sponsorship bias and quality of randomised controlled trials in veterinary medicine. BMC Veterinary Research, 13, 234. Available from: https://bmcvetres.biomedcentral.com/articles/10.1186/s12917-017-1146-9