Part 2: Evidence Evaluation and Guidelines Development

2025 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care

Abstract

The 2025 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care are based on the extensive evidence evaluation performed in conjunction with the International Liaison Committee on Resuscitation. The Adult Basic and Advanced Life Support, Pediatric Basic and Advanced Life Support, Neonatal Resuscitation, Resuscitation Education Science, Special Circumstances, Post–Cardiac Arrest Care, Ethics, and Systems of Care Writing Groups drafted, reviewed, and approved recommendations, assigning to each recommendation a Class of Recommendation (ie, strength) and Level of Evidence (ie, quality).

The 2025 Guidelines are organized in knowledge chunks that are grouped into discrete modules of information on specific topics or management issues. Each chapter of the 2025 Guidelines underwent blinded peer review by subject matter experts and was also reviewed and approved for publication by the American Heart Association Science Advisory and Coordinating Committee and the American Heart Association Executive Committee. Chapters with pediatric content (Neonatal Resuscitation, Pediatric Basic and Advanced Life Support) were co-led by the American Academy of Pediatrics, and their content was therefore also reviewed and approved by the American Academy of Pediatrics Board of Directors. The American Heart Association has rigorous conflict of interest policies and procedures to minimize the risk of bias or improper influence during development of the guidelines. Anyone involved in any part of the guideline development process disclosed all commercial relationships and other potential conflicts of interest.

Introduction

This section describes the process of creating the 2025 American Heart Association (AHA) Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care (ECC), including the process of evidence evaluation and the format of the guideline document; the formation of the AHA and AHA/American Academy of Pediatrics (AAP) Writing Groups; the guideline development, review, and approval process; and the management of potential conflicts of interest.

Methodology and Evidence Review

The 2025 Guidelines are designed to present a comprehensive compilation of guidance for cardiopulmonary resuscitation and ECC. These guidelines, including those for adult basic and advanced life support, pediatric basic and advanced life support, neonatal resuscitation, resuscitation education science, special circumstances, post–cardiac arrest care, ethics, and systems of care, are based on the evidence evaluation process conducted by the International Liaison Committee on Resuscitation (ILCOR) and published as the 2025 ILCOR Consensus on Science With Treatment Recommendations (CoSTR),1 as well as on an independent evidence review process conducted by the 2025 Guidelines writing groups.

The ILCOR Scientific Advisory Committee, consisting of experts in evidence review, created a methodological governance process for the evidence evaluation used to develop the 2025 CoSTR.1 This process produced 3 types of evidence reviews: systematic reviews, scoping reviews, and evidence updates. New for the 2025 Guidelines, the AHA also developed an internal evidence review process to address specific content areas that were not covered by the ILCOR evidence review process but were deemed important to users of the AHA guidelines.

ILCOR Systematic Review

Systematic reviews were conducted by ILCOR task force members with expertise in systematic reviews and in the content area, following standardized procedures.2 The methodology was based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses3 and the approach proposed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group.4 The overall process was coordinated by the Scientific Advisory Committee of ILCOR and guided by the task force Scientific Advisory Committee representatives.

In brief, each ILCOR task force identified and prioritized questions to be addressed by using the PICOST (population, intervention, comparator, outcome, study design, time frame) format.5 A detailed search for relevant publications was performed in the MEDLINE, Embase, and Cochrane Library databases, with identified publications screened for further evaluation. Two systematic reviewers conducted a risk-of-bias assessment for each relevant study by using Cochrane and GRADE criteria for randomized controlled trials,6 Quality Assessment of Diagnostic Accuracy Studies, version 2, for studies of diagnostic accuracy,7 and GRADE criteria for observational and interventional studies informing therapy or prognosis questions.4 In addition to assessing scientific bias, the Cochrane risk-of-bias tool considers both the source of funding and the potential conflicts of interest of the study authors. The reviewers created evidence profile tables containing information on all study outcomes. Using the GRADE approach,8 the certainty of evidence (ie, confidence in the estimate of the effect) was categorized as high, moderate, low, or very low on the basis of the study methodologies. The certainty of evidence was upgraded for a dose-response gradient or a large effect and downgraded for bias, inconsistency, indirectness, imprecision, or publication bias (Tables 1 and 2).9 Any disparity between reviewer assessments was resolved through discussion and consensus with the task force representative of the Scientific Advisory Committee and, if disagreement remained, by the larger task force.

The ILCOR task forces reviewed, discussed, and debated the studies and systematic review analyses, drafting a consensus on science statement and a written summary of identified evidence and evidence certainty for each outcome. When there was consensus, the task force developed consensus treatment recommendations, labeled as strong or weak and either for or against a therapy, prognostic tool, or diagnostic test, noting the certainty of the evidence. In addition, each topic summary included the PICOST question and a justification and evidence-to-decision framework section, capturing the values and preferences considered by the task force as well as a list of knowledge gaps. Public input was sought for the draft CoSTR statements.10 The task forces considered all public comments when finalizing the CoSTR statements. A task force insights approach was used rather than a formal CoSTR when there were insufficient data to support a recommendation. All CoSTR statements from 2021 to 2025 underwent peer review by at least 5 subject matter experts and were endorsed by the ILCOR board before publication.

ILCOR Scoping Review

The purpose of a scoping review is to provide an overview of the extent, range, and nature of research activity; to clarify key concepts; to identify gaps; or to identify topics for future systematic reviews.11 One difference between scoping reviews and systematic reviews is that scoping reviews provide an overview of a broad topic with a large and diverse body of literature and produce a summary of the studies, but not an estimate of the magnitude of effect, whereas systematic reviews address a narrow, clearly focused research question. Scoping reviews are helpful in examining emerging areas of research, clarifying key concepts, and identifying gaps in knowledge. Additionally, scoping reviews do not include a formal evaluation of the certainty of evidence or risk of bias and are not assessed according to GRADE recommendations. Because of these differences, and unlike systematic reviews, scoping reviews cannot result in a CoSTR treatment recommendation without an additional systematic review. Scoping reviews can, however, result in the development of good practice statements. Although ILCOR does not create or alter treatment recommendations without a systematic review, good practice statements are often made when the topic of a scoping review is thought to be of particular interest to the resuscitation community. Good practice statements represent expert opinion informed by the very limited evidence available.12

The methodology for conducting a scoping review was based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews.3,13,14 Each task force identified questions to be reviewed, presented in either the PICOST format or the population, concept, context format. The MEDLINE, Embase, and Cochrane databases were searched to identify relevant publications, with additional databases searched where appropriate. Those performing the scoping reviews extracted data to create summary tables. The task force then reviewed the studies and the evidence tables, developing a consensus narrative summary of the evidence and an overview of the task force insights, including knowledge gaps and research priorities. Each topic narrative summary and overview of task force insights (including why the topic was reviewed, a narrative evidence summary, and a narrative report of the task force discussions), as well as the complete scoping review, were posted on the ILCOR website for public review and input,13 with final versions included in the appendix and summarized in the body of the relevant task force CoSTR publication. Because no treatment recommendations can be made without a full systematic review under ILCOR policy, task forces that believed there was a need for a clinical recommendation after performing a scoping review or evidence update were permitted to make expert consensus-based good practice statements. These statements were designed to reflect what experts in the resuscitation science community believe are the current best practices when sufficient data are not yet available to support a formal recommendation. In such cases, ILCOR anticipates that these statements will be replaced with formal treatment recommendations through the CoSTR process as more research becomes available.

ILCOR Evidence Update

ILCOR evidence updates involve a systematic way of searching the literature and updating an overview of the evidence available on a particular PICOST topic of interest to ILCOR.15 An evidence update may be performed to evaluate whether new research has been published on an already commissioned PICOST or as preliminary work on a newly developed PICOST topic. The evidence update reviewers used PubMed to search English-language publications indexed in the MEDLINE database. When the search strategies from previous reviews were available, these were repeated. Searching beyond the MEDLINE database was optional and at the discretion of the reviewer. Reviewers identified relevant new studies, guidelines, and systematic reviews and completed an evidence update worksheet, which included the research question, the search strategy, and a table summarizing any new evidence.13 After review by the ILCOR Scientific Advisory Committee Chair, the evidence update worksheet was included in the appendix of the relevant 2025 CoSTR task force publication and cited within the body of the manuscript. As noted above, a good practice statement could be added by an ILCOR task force after an evidence update was performed.

AHA Evidence Evaluation

An internal AHA evidence evaluation process was used to support updates to existing recommendations or the development of new recommendations when applicable. These reviews were conducted specifically when a current ILCOR evidence synthesis output (eg, systematic review, scoping review, or evidence update) was not available. Evidence reviewers followed a standard operating procedure for evidence review developed by the AHA ECC Committee, based on systematic review methodology and supported by designated information specialists with expertise in systematic literature review methodology.4,15 The process was designed to provide a comprehensive synthesis of all available evidence on a topic to support the development of specific guidelines. It was also designed to ensure rigor and feasibility, providing guidance and support for writing group members to conduct the searches and evidence synthesis for submission to the writing group chair and section authors.

Where possible, evidence reviews were conducted by 2 independent reviewers assigned by the writing group chair. Reviewers defined the elements of a PICOST research question if the clinical area under review allowed for this. The determination of these elements was set a priori to guide the development of the search strategy in collaboration with the AHA librarians, to assist in the development of inclusion and exclusion criteria for article selection, and to guide data abstraction when relevant articles were identified. If the PICOST approach was not relevant for a particular area, reviewers defined the clinical area of interest, the specific terms used to define that area, and as many of the elements of the PICOST as possible. Reviewers codeveloped search strategies with the AHA librarians. Depending on the topic area and availability of published data, literature searches were limited to randomized clinical trials or expanded to include nonrandomized trials, observational studies, or case studies. Literature searching included the following online databases: MEDLINE, PubMed, Embase, and the Cochrane Library. For guideline updates, searches were limited to the time period following the last search date reported in the prior version of the guideline. In the case of a new guideline or full revision, no time limits on searches were imposed unless the writing group determined that a different time frame was appropriate. The 2 reviewers then independently reviewed citations for inclusion and evaluated included studies for risk of bias using the ROBINS-I tool for nonrandomized studies16 and the Cochrane risk-of-bias tool for randomized controlled trial designs.6 Disagreements in study selection, risk-of-bias evaluation, and data abstraction were resolved through consensus building between the 2 reviewers. When consensus could not be reached, a third reviewer was used to resolve the disagreement.

The output from this process, including data abstracted from each study relevant to the PICOST question, was documented in a standardized evidence review form and then returned to the writing group chair for distribution to the members. Reviewers used their review of the literature, with risk-of-bias assessments considered, to propose a summary of the evidence and to suggest either no change to prior recommendations, modifications to existing recommendations, or entirely new recommendations. The results of these reviews by the writing groups were under embargo and therefore were not posted for public comment.

Adolopment of External Guidelines

One opportunity to enhance and streamline the 2025 guideline process has been the ability to integrate external systematic reviews and guidelines into the AHA guideline process. This can take the form of adopting existing recommendations or developing recommendations based on an available evidence synthesis, a process termed adolopment. An adolopment evaluation process was used when a writing group identified a need for guidance on a particular topic but existing guidelines or systematic reviews from external organizations, not directly affiliated with ECC, already provided valuable, high-quality information.17 Instead of creating new guidelines, adolopment allowed writing groups to leverage this information, provided the external guideline met specific criteria. When a writing group identified the need for adolopment of external publications, an initial comprehensive search of existing AHA guidelines (eg, AHA/American College of Cardiology, AHA/American Stroke Association) was first performed to ensure that no conflicts existed between the recommendations being considered for adolopment and current AHA recommendations. The writing group then presented their justification for adolopment, along with their AHA guideline search results, to the Science Subcommittee leadership [Chair (MD), Vice Chair (JB), and Immediate Past Chair (ARP)] and the AHA Vice President for Science and Innovation (CS) for approval. Once approved, the writing group conducted a thorough assessment of the external guideline’s methodology to ensure rigor. Key elements such as the search strategy, inclusion/exclusion criteria, data extraction, bias assessments, GRADE evidence profile tables, and meta-analyses were evaluated. Writing groups were permitted to incorporate guidelines from other organizations, even if their development process was less rigorous (eg, did not use multiple databases), provided their evidence review process was transparent. These requests were evaluated on a case-by-case basis, with input from Science Subcommittee leadership and AHA staff. Alternatively, writing groups could choose to conduct an evidence review that included these external guidelines and their recommendations, using them as part of the broader body of evidence to inform recommendations. With this approach, rather than being adoloped, external guidelines contributed to the overall evidence base for guideline development by the writing groups.

Guideline Format

Similar to the 2020 Guidelines, the 2025 Guidelines are organized in knowledge chunks, grouped into discrete modules of information on specific topics or management issues.18 Each modular knowledge chunk includes a table of recommendations, a synopsis, recommendation-specific supportive text, and, when appropriate, figures, flow diagrams of algorithms, and additional tables. Hyperlinked references are provided to facilitate quick access and review.

With the addition of “Part 3: Ethics” in the 2025 Guidelines, a narrative review format was also used to allow for detailed discussion of complicated ethical issues. The science review for ethics content followed a more traditional literature review process rather than the standard knowledge chunk format, which elsewhere in the 2025 Guidelines results in recommendations, synopses, and recommendation-specific supportive text. The literature was first reviewed by 2 or more writing group members, and the resulting content was then reviewed and refined by all writing group members. Unlike other chapters, no recommendations with a Class of Recommendation and Level of Evidence were generated.

Formation of the Guideline Writing Groups

The AHA strives to ensure that each guideline writing group includes the requisite expertise and diversity representative of the broader medical community by selecting experts from a wide array of scientific backgrounds, geographic regions of North America, sexes, races, ethnicities, intellectual perspectives, experience levels, and scopes of clinical practice. Volunteers with an interest and recognized expertise in resuscitation are nominated by the writing group chair, selected by the AHA ECC Committee, and approved by the AHA Manuscript Oversight Committee. All volunteer selection is conducted by volunteers through the AHA ECC Committee, without AHA staff influence.

The Adult Basic and Advanced Life Support Writing Group included experts in emergency medicine, critical care, cardiology, toxicology, neurology, emergency medical services, education, research, and public health. The Pediatric Basic and Advanced Life Support Writing Group consisted of pediatric clinicians, including intensivists, cardiac intensivists, cardiologists, emergency physicians, and emergency medicine nurses. The Neonatal Resuscitation Writing Group included neonatal physicians and nurses with backgrounds in clinical medicine, education, research, and public health, as well as an obstetrician-gynecologist. The Pediatric Advanced Life Support, Pediatric Basic Life Support, and Neonatal Resuscitation Writing Groups were jointly appointed by the AHA and AAP, with a co-chair and half of the writing group members appointed by each organization. The Resuscitation Education Science Writing Group consisted of experts in resuscitation education, clinical medicine (ie, pediatrics, intensive care, emergency medicine), nursing, prehospital care, and health services and education research. The Systems of Care Writing Group included experts in clinical medicine, education, research, and public health. The Ethics Writing Group included experts in emergency medicine, critical care medicine, neurocritical care, cardiology, and emergency medical services, including experts in neonatal, pediatric, and adult medicine, as well as experts in medical ethics in each domain. Before appointment, writing group members completed a disclosure of relevant relationships with industry. Writing group members also adhered to all AHA requirements for management of any potential intellectual and financial conflicts of interest. The AAP appointed a writing group member to the Education Science, Ethics, Evidence Evaluation, Special Circumstances, and Systems of Care Writing Groups. For each writing group, the full membership, affiliations, and conflicts of interest are noted at the end of the respective guideline chapter.

Guidelines Development, Review, and Approval

Each writing group reviewed all relevant and current AHA guidelines for cardiopulmonary resuscitation and ECC,19-25 pertinent CoSTR evidence and recommendations from 2020 through 2025,1,9,10,26-33 all relevant ILCOR evidence updates, and all AHA evidence evaluation worksheets to determine whether current guidelines should be reaffirmed, revised, or retired, or whether new recommendations were needed. Recommendations were eligible for retirement if the topic had become accepted best practice such that alternative treatments were not considered in practice, no additional studies had been published or were underway since the last published guideline, and future research was unlikely. All retired recommendations were listed in a table at the end of the guideline chapter with the related PICOST.

Following evaluation of all relevant evidence, the writing groups drafted, reviewed, and approved recommendations, assigning to each recommendation a Class of Recommendation (ie, strength) and Level of Evidence (ie, quality; Table 3). The process for recommendation development followed that previously described and set forth by the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines.34 Consistent with these standards, recommendations were drafted following the knowledge chunk format applied in the 2020 AHA Guidelines.18 Once drafted, each knowledge chunk was reviewed for science and content by the AHA ECC Science Subcommittee leadership [Chair (MD), Vice Chair (JB), and Immediate Past Chair (ARP)]. The knowledge chunks were also reviewed by the AHA Vice President for Science and Innovation (CS) and Senior Science Editors (ID and SS) for consistency of language and format. Feedback was then returned to the writing group for consideration. All final guideline decisions were made by vote of the writing group members and were solely the purview of the writing group.

Class of Recommendation and Level of Evidence

Table 3. Applying American College of Cardiology/American Heart Association Class of Recommendation and Level of Evidence to Clinical Strategies, Interventions, Treatments, or Diagnostic Testing in Patient Care (Updated December 2024)*

CLASS (STRENGTH) OF RECOMMENDATION

CLASS 1 (STRONG) Benefit >>> Risk

Suggested phrases for writing recommendations:

  • Is recommended
  • Is indicated/useful/effective/beneficial
  • Should be performed/administered/other
  • Comparative-Effectiveness Phrases†:
    • Treatment/strategy A is recommended/indicated in preference to treatment B
    • Treatment A should be chosen over treatment B

CLASS 2a (MODERATE) Benefit >> Risk

Suggested phrases for writing recommendations:

  • Is reasonable
  • Can be useful/effective/beneficial
  • Comparative-Effectiveness Phrases†:
    • Treatment/strategy A is probably recommended/indicated in preference to treatment B
    • It is reasonable to choose treatment A over treatment B

CLASS 2b (WEAK) Benefit > Risk

Suggested phrases for writing recommendations:

  • May/might be reasonable
  • May/might be considered
  • Usefulness/effectiveness is unknown/unclear/uncertain or not well established

CLASS 3: No Benefit (MODERATE) Benefit = Risk
(Generally, LOE A or B use only)

Suggested phrases for writing recommendations:

  • Is not recommended
  • Is not indicated/useful/effective/beneficial
  • Should not be performed/administered/other

CLASS 3: Harm (STRONG) Risk > Benefit

Suggested phrases for writing recommendations:

  • Potentially harmful
  • Causes harm
  • Associated with excess morbidity/mortality
  • Should not be performed/administered/other

LEVEL (QUALITY) OF EVIDENCE

LEVEL A

  • High-quality evidence‡ from more than 1 RCT
  • Meta-analyses of high-quality RCTs
  • One or more RCTs corroborated by high-quality registry studies

LEVEL B-R (Randomized)

  • Moderate-quality evidence‡ from 1 or more RCTs
  • Meta-analyses of moderate-quality RCTs

LEVEL B-NR (Nonrandomized)

  • Moderate-quality evidence‡ from 1 or more well-designed, well-executed nonrandomized studies, observational studies, or registry studies
  • Meta-analyses of such studies

LEVEL C-LD (Limited Data)

  • Randomized or nonrandomized observational or registry studies with limitations of design or execution
  • Meta-analyses of such studies
  • Physiological or mechanistic studies in human subjects

LEVEL C-EO (Expert Opinion)

  • Consensus of expert opinion based on clinical experience

COR and LOE are determined independently (any COR may be paired with any LOE).

A recommendation with LOE C does not imply that the recommendation is weak. Many important clinical questions addressed in guidelines do not lend themselves to clinical trials. Although RCTs are unavailable, there may be a very clear clinical consensus that a particular test or therapy is useful or effective.

* The outcome or result of the intervention should be specified (an improved clinical outcome or increased diagnostic accuracy or incremental prognostic information).

† For comparative-effectiveness recommendations (COR 1 and 2a; LOE A and B only), studies that support the use of comparator verbs should involve direct comparisons of the treatments or strategies being evaluated.

‡ The method of assessing quality is evolving, including the application of standardized, widely used, and preferably validated evidence grading tools; and for systematic reviews, the incorporation of an Evidence Review Committee.

COR indicates Class of Recommendation; EO, expert opinion; LD, limited data; LOE, Level of Evidence; NR, nonrandomized; R, randomized; and RCT, randomized controlled trial.

Each of the 2025 Guidelines articles was submitted for blinded peer review to 5 to 10 subject matter experts nominated by the AHA and AAP, as appropriate. Subject matter expert reviewers were required to disclose relationships with industry and any other potential conflicts of interest, and all disclosures were reviewed by AHA staff. Peer reviewer feedback was provided for guidelines in draft format and again in final format. AHA policy requires that official positions and guidelines of the AHA be reviewed and approved by the Board of Directors and/or its Executive Committee. Upon completion of peer review, these guidelines were reviewed and edited for publication by the AHA Science Advisory and Coordinating Committee and the AHA Executive Committee, following the standard review process for all official AHA documents, with legal, communications, and science staff ensuring risk mitigation and accurate alignment with the AHA mission. To maintain the highest level of methodological rigor and transparency, the 2025 AHA/ECC Guidelines were developed consistent with the domains of the Appraisal of Guidelines for Research & Evaluation II Instrument,35 and a description is provided in an online supplement.

Management of Potential Conflicts of Interest

The AHA, AAP, and ILCOR have rigorous conflict of interest policies and procedures to minimize the risk of bias or improper influence during development of the CoSTRs and the AHA guidelines. All 3 organizations followed these policies34,36,37 throughout the 2025 evidence evaluation and document preparation process. Anyone involved in any part of this process was required to disclose all commercial relationships and other potential conflicts (including intellectual) both before joining the writing group and during writing group activities. These disclosures were reviewed before the assignment of task force chairs and members, writing group chairs and members, consultants, and peer reviewers. In keeping with the AHA conflict of interest policy, the chair and the majority (greater than 50%) of the members of each ILCOR task force and each AHA/AAP writing group had to be free of relevant conflicts. Writing group members were not allowed to participate in drafting recommendation text or to vote on any final recommendation for which they had a relevant conflict, and these recusals were documented. Importantly, recognizing the potential for conflict for AHA staff, no AHA employee or member of AHA leadership was involved in guideline recommendation development or voting. Appendix 1 lists group members’ disclosure information for this part. Peer reviewers were also required to disclose relationships with industry and any other potential conflicts of interest; these disclosures appear in Appendix 2.

Article Information

The American Heart Association requests that this document be cited as follows: Panchal AR, Bartos JA, Wyckoff MH, Drennan IR, Mahgoub M, Schexnayder SM, Rodriguez AJ, Sasson C, Wright JI, Brooks SC, Atkins DL, Del Rios M. Part 2: evidence evaluation and guidelines development: 2025 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation. 2025;152(suppl 2):S313–S322. doi: 10.1161/CIR.0000000000001373

Authors
  • Ashish R. Panchal, MD, PhD, Chair
  • Jason A. Bartos, MD, PhD, Vice Chair
  • Myra H. Wyckoff, MD
  • Ian R. Drennan, ACP, PhD
  • Melissa Mahgoub, PhD
  • Stephen M. Schexnayder, MD
  • Amber J. Rodriguez, PhD
  • Comilla Sasson, MD, PhD
  • Jaylen I. Wright, PhD
  • Steven C. Brooks, MD
  • Dianne L. Atkins, MD
  • Marina Del Rios, MD, MS.