Service evaluation, audit and research: what is the difference?

  • Alison Twycross,1
  • Allison Shorten2
  • 1 Faculty of Health and Social Care, London South Bank University, London, UK
  • 2 Yale University School of Nursing, New Haven, Connecticut, USA
  • Correspondence to: Dr Alison Twycross, Faculty of Health and Social Care, London South Bank University, London SE1 0AA, UK; alisontwycross@hotmail.com

https://doi.org/10.1136/eb-2014-101871


Knowing the difference between health service evaluation, audit and research can be tricky, especially for the novice researcher. Put simply, nursing research involves finding the answers to questions about “what nurses should do to help patients,” audit examines “whether nurses are doing this, and if not, why not,” 1 and service evaluation asks about “the effect of nursing care on patient experiences and outcomes.” In this paper, we aim to provide some tips to help guide you through the decision-making process as you begin to plan your evaluation, audit or research project. As a starting point, box 1 provides key definitions for each type of project.

Box 1 Definitions of service evaluation, audit and research

▸  What is service evaluation?

Service evaluation seeks to assess how well a service is achieving its intended aims. It is undertaken to benefit the people using a particular healthcare service and is designed and conducted with the sole purpose of defining or judging the current service. 2

The results of service evaluations are mostly used to generate information that can be used to inform local decision-making.

▸  What is (clinical) audit?

The English Department of Health 3 states that:

Clinical audit involves systematically looking at the procedures used for diagnosis, care and treatment, examining how associated resources are used and investigating the effect care has on the outcome and quality of life for the patient.

Audit usually involves a quality improvement cycle that measures care against predetermined standards (benchmarking), takes specific actions to improve care and monitors ongoing sustained improvements to quality against agreed standards or benchmarks. 4, 5

▸  What is research?

Research involves the attempt to extend the available knowledge by means of a systematically defensible process of enquiry. 6

How do I decide whether my project is service evaluation, audit or research?

Table: Key criteria to consider when deciding whether your project is service evaluation, audit or research. 2, 7

So if, for example, we were to explore the management of children's postoperative pain, we could:

  • Undertake a service evaluation and ask parents and children to complete a questionnaire about how well they think postoperative pain was managed for them during their experience on the paediatric unit.
  • Complete an audit by comparing postoperative pain management practices in the paediatric unit to current best practice guidelines using a standardised data collection tool (a toy calculation of this kind is sketched after this list).
  • Undertake a research project to identify the most effective postoperative pain management practices for children.
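
To make the audit option more concrete, here is a small sketch of the kind of benchmarking calculation an audit produces. The record fields, figures and the 95% benchmark below are all invented for illustration; they are not drawn from the article or any real guideline.

```python
# Illustrative only: a toy audit calculation with invented record fields and an
# assumed local standard ("pain assessed within 4 hours of surgery", benchmark 95%).
# A real audit would use a validated, standardised data collection tool and
# criteria taken from the agreed best practice guideline.

records = [
    {"patient": "A", "pain_assessed_within_4h": True,  "analgesia_per_guideline": True},
    {"patient": "B", "pain_assessed_within_4h": False, "analgesia_per_guideline": True},
    {"patient": "C", "pain_assessed_within_4h": True,  "analgesia_per_guideline": False},
]

def compliance(records, criterion):
    """Percentage of audited cases meeting a single criterion."""
    met = sum(1 for r in records if r[criterion])
    return 100 * met / len(records)

BENCHMARK = 95  # assumed standard, for illustration only

for criterion in ("pain_assessed_within_4h", "analgesia_per_guideline"):
    rate = compliance(records, criterion)
    status = "meets" if rate >= BENCHMARK else "below"
    print(f"{criterion}: {rate:.0f}% ({status} the {BENCHMARK}% benchmark)")
```

The point is simply that an audit compares measured practice against a predetermined standard; the research option, by contrast, would seek to establish what the standard should be.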

Online resource

The Health Research Authority in the UK has a useful online decision-making tool—see:

http://www.hra.nhs.uk/research-community/before-you-apply/determine-whether-your-study-is-research/

Applying ethical principles to service evaluation, audit and research

Ethical standards and patient privacy protection laws apply to all types of health research, service evaluation and audit processes. Research projects being carried out in healthcare will normally need approval from a research ethics committee or affiliated Institutional Review Board (IRB), as well as from the healthcare service site/s, such as the hospital's Research and Development Department. If you are carrying out an audit you should register your project with the hospital's Audit Department or Quality and Safety Unit—this is mandatory in some organisations. If you are undertaking a service evaluation you should ensure the necessary permissions have been obtained at a local or even regional level, depending on the service.

A service evaluation or audit may not require specific approval from a research ethics committee or IRB, but ethical principles must still be adhered to for the protection of patients. The ethical principles and patient protection laws that need to be followed include:

Consent —It is important that potential participants are not coerced to take part in the project. They have the right to refuse to take part and to withdraw at any point.

Anonymity —Participants need to know whether their anonymity will be protected and if so how this will be carried out.

Data protection and privacy —You need to consider how you are going to ensure that your data is stored safely and that participant privacy is protected. In the UK you will adhere to the Data Protection Act (1998) and in the USA you will comply with the Health Insurance Portability and Accountability Act (HIPAA; 1996) Privacy Rule. A minimal pseudonymisation sketch follows.
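
One practical way to protect participant privacy is to pseudonymise data before analysis. The snippet below is our illustration of that principle with invented example data, not a compliance procedure required by the Data Protection Act or HIPAA.

```python
# Minimal pseudonymisation sketch (illustrative, not a compliance procedure):
# direct identifiers are replaced with random codes, and the key linking codes
# back to identities is kept separately under restricted access.
import secrets

responses = [  # invented example data
    {"name": "Jane Doe", "nhs_number": "1234567890", "satisfaction": 4},
    {"name": "John Roe", "nhs_number": "0987654321", "satisfaction": 5},
]

key_file = {}       # code -> identifiers; store separately from the dataset
pseudonymised = []  # what the evaluation or audit team actually analyses

for r in responses:
    code = secrets.token_hex(4)  # random, non-identifying participant code
    key_file[code] = {"name": r["name"], "nhs_number": r["nhs_number"]}
    pseudonymised.append({"participant": code, "satisfaction": r["satisfaction"]})

print(pseudonymised)  # contains no direct identifiers
```

Keeping the re-identification key separate from the analysed data helps protect anonymity while still allowing a participant's data to be withdrawn if they exercise their right to withdraw.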

Online resources

The Healthcare Quality Improvement Partnership (HQIP) has a useful guide in relation to applying ethical principles to service evaluations and audits. This can be downloaded from:

www.hqip.org.uk/assets/.../Audit-Research-Service-Evaluation.pdf

The US Department of Health and Human Services' Office of Research Integrity has some useful resources on the principles of ethical research practice for a variety of roles in research: http://ori.hhs.gov/

While researchers seek to provide evidence to guide practice, it often takes time for evidence to make the journey from ‘bench to bedside’. When organisations need answers fast, service evaluation and/or audit may be used to capture ‘real-time’ data and quickly move findings to create tangible practice change. An audit is like ‘taking the pulse’ of an organisation—it can produce results fast. As we check the organisational pulse against an expected range of normal we need to be sure we use the best approach to get an accurate reading so that our response is based on good data. This means no matter what the project scope or purpose, your project design should produce high-quality information about patient care and comply with ethical standards that protect patients.

  • National Research Ethics Service (NRES). Defining research. 2013. http://www.nres.nhs.uk/EasySiteWeb/GatewayLink.aspx?alId=355 (accessed 22 Apr 2014).
  • Department of Health (2003), cited in: What is clinical audit? http://www.rcpsych.ac.uk/pdf/clinauditchap1.pdf (accessed 28 Apr 2014).
  • National Institute for Health and Care Excellence (NICE). Principles for best practice in clinical audit. Oxford: Radcliffe Medical Press, 2002.
  • Gerrish K,
  • University Hospitals Bristol (2012). Service evaluation. http://bit.ly/1muGwOw (accessed 22 Apr 2014).

Competing interests None.


Practical guidance on undertaking a service evaluation

Pam Moule, Professor of Health Services Research (Service Evaluation); Julie Armoogum, Senior Lecturer; Emily Dodd, Clinical Trials Co-ordinator; Anne-Laure Donskoy, Research Partner and Survivor Researcher; Emma Douglass, Senior Lecturer; Julie Taylor, Senior Lecturer; Pat Turton, Senior Lecturer; all Faculty of Health and Life Sciences, University of the West of England.

This article describes the basic principles of evaluation, focusing on the evaluation of healthcare services. It emphasises the importance of evaluation in the current healthcare environment and the requirement for nurses to understand the essential principles of evaluation. Evaluation is defined in contrast to audit and research, and the main theoretical approaches to evaluation are outlined, providing insights into the different types of evaluation that may be undertaken. The essential features of preparing for an evaluation are considered, and guidance is provided on working ethically in the NHS. The importance of involving patients and the public in evaluation activity is emphasised, and essential guidance and principles of best practice are offered. The authors discuss the main challenges of undertaking evaluations and offer recommendations to address these, drawing on their experience as evaluators.

Nursing Standard. 30, 45, 46-51. doi: 10.7748/ns.2016.e10277


All articles are subject to external double-blind peer review and checked for plagiarism using automated software.

None declared.

Received: 14 September 2015

Accepted: 17 December 2015

evaluation - evaluation methods - healthcare evaluation - service evaluation - patient involvement - public involvement



How to Write a Research Proposal | Examples & Templates

Published on October 12, 2022 by Shona McCombes and Tegan George. Revised on November 21, 2023.

Structure of a research proposal

A research proposal describes what you will investigate, why it’s important, and how you will conduct your research.

The format of a research proposal varies between fields, but most proposals will contain at least these elements:

  • Introduction
  • Literature review
  • Research design
  • Reference list

While the sections may vary, the overall objective is always the same. A research proposal serves as a blueprint and guide for your research plan, helping you get organized and feel confident in the path forward you choose to take.

Table of contents

  • Research proposal purpose
  • Research proposal examples
  • Research design and methods
  • Contribution to knowledge
  • Research schedule
  • Other interesting articles
  • Frequently asked questions about research proposals

Academics often have to write research proposals to get funding for their projects. As a student, you might have to write a research proposal as part of a grad school application , or prior to starting your thesis or dissertation .

In addition to helping you figure out what your research can look like, a proposal can also serve to demonstrate why your project is worth pursuing to a funder, educational institution, or supervisor.

Research proposal aims:

  • Show your reader why your project is interesting, original, and important.
  • Demonstrate your comfort and familiarity with your field.
  • Show that you understand the current state of research on your topic.
  • Make a case for your methodology.
  • Demonstrate that you have carefully thought about the data, tools, and procedures necessary to conduct your research.
  • Confirm that your project is feasible within the timeline of your program or funding deadline.

Research proposal length

The length of a research proposal can vary quite a bit. A bachelor’s or master’s thesis proposal can be just a few pages, while proposals for PhD dissertations or research funding are usually much longer and more detailed. Your supervisor can help you determine the best length for your work.

One trick to get started is to think of your proposal's structure as a shorter version of your thesis or dissertation, only without the results, conclusion and discussion sections.

Download our research proposal template


Writing a research proposal can be quite challenging, but a good starting point could be to look at some examples. We’ve included a few for you below.

  • Example research proposal #1: “A Conceptual Framework for Scheduling Constraint Management”
  • Example research proposal #2: “Medical Students as Mediators of Change in Tobacco Use”

Like your dissertation or thesis, the proposal will usually have a title page that includes:

  • The proposed title of your project
  • Your supervisor’s name
  • Your institution and department

The first part of your proposal is the initial pitch for your project. Make sure it succinctly explains what you want to do and why.

Your introduction should:

  • Introduce your topic
  • Give necessary background and context
  • Outline your  problem statement  and research questions

To guide your introduction , include information about:

  • Who could have an interest in the topic (e.g., scientists, policymakers)
  • How much is already known about the topic
  • What is missing from this current knowledge
  • What new insights your research will contribute
  • Why you believe this research is worth doing


As you get started, it’s important to demonstrate that you’re familiar with the most important research on your topic. A strong literature review  shows your reader that your project has a solid foundation in existing knowledge or theory. It also shows that you’re not simply repeating what other people have already done or said, but rather using existing research as a jumping-off point for your own.

In this section, share exactly how your project will contribute to ongoing conversations in the field by:

  • Comparing and contrasting the main theories, methods, and debates
  • Examining the strengths and weaknesses of different approaches
  • Explaining how you will build on, challenge, or synthesize prior scholarship

Following the literature review, restate your main  objectives . This brings the focus back to your own project. Next, your research design or methodology section will describe your overall approach, and the practical steps you will take to answer your research questions.

Table: Building a research proposal methodology

To finish your proposal on a strong note, explore the potential implications of your research for your field. Emphasize again what you aim to contribute and why it matters.

For example, your results might have implications for:

  • Improving best practices
  • Informing policymaking decisions
  • Strengthening a theory or model
  • Challenging popular or scientific beliefs
  • Creating a basis for future research

Last but not least, your research proposal must include correct citations for every source you have used, compiled in a reference list . To create citations quickly and easily, you can use our free APA citation generator .
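
As a toy illustration of what a citation generator automates, the sketch below assembles an APA-style journal reference from metadata fields. The function and the example metadata are invented, and real tools (including the generator mentioned above) handle many more source types and edge cases.

```python
# Toy example: assemble an APA-style journal reference from metadata fields.
# Real citation generators handle far more cases (multiple source types,
# many authors, missing fields); this only shows the basic string assembly.

def apa_journal_reference(author, year, title, journal, volume, issue, pages, doi):
    return (f"{author} ({year}). {title}. {journal}, "
            f"{volume}({issue}), {pages}. https://doi.org/{doi}")

# Hypothetical metadata, invented purely for illustration.
print(apa_journal_reference(
    author="Smith, J., & Jones, K.",
    year=2020,
    title="An example article title",
    journal="Journal of Examples",
    volume=12,
    issue=3,
    pages="45-52",
    doi="10.0000/example",
))
```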

Some institutions or funders require a detailed timeline of the project, asking you to forecast what you will do at each stage and how long it may take. While not always required, be sure to check the requirements of your project.

Here’s an example schedule to help you get started. You can also download a template at the button below.

Download our research schedule template

Example research schedule

  • Phase 1 (deadline 20th January): Background research and literature review
  • Phase 2 (deadline 13th February): Research design planning and data analysis methods
  • Phase 3 (deadline 24th March): Data collection and preparation with selected participants; code interviews
  • Phase 4 (deadline 22nd April): Data analysis of interview transcripts
  • Phase 5 (deadline 17th June): Writing
  • Phase 6 (deadline 28th July): Revision of final work

If you are applying for research funding, chances are you will have to include a detailed budget. This shows your estimates of how much each part of your project will cost.

Make sure to check what type of costs the funding body will agree to cover. For each item, include:

  • Cost: exactly how much money do you need?
  • Justification: why is this cost necessary to complete the research?
  • Source: how did you calculate the amount?

To determine your budget, think about the following (a small worked sketch follows this list):

  • Travel costs : do you need to go somewhere to collect your data? How will you get there, and how much time will you need? What will you do there (e.g., interviews, archival research)?
  • Materials : do you need access to any tools or technologies?
  • Help : do you need to hire any research assistants for the project? What will they do, and how much will you pay them?
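
The cost/justification/source structure suggested above can be kept in a simple itemised form. The sketch below totals a hypothetical budget; every item and figure in it is invented for illustration.

```python
# Hypothetical budget sketch: each item records its cost, justification and the
# source of the estimate, mirroring the structure suggested above.
# All items and figures are invented for illustration.

budget_items = [
    {"item": "Travel to interview sites", "cost": 600.00,
     "justification": "Three site visits needed for data collection",
     "source": "3 return train fares at 200.00 each"},
    {"item": "Transcription software licence", "cost": 300.00,
     "justification": "Required to transcribe recorded interviews",
     "source": "Vendor list price for a 12-month licence"},
    {"item": "Research assistant (50 hours)", "cost": 1000.00,
     "justification": "Support for interview coding",
     "source": "50 hours at the 20.00 institutional hourly rate"},
]

for i in budget_items:
    print(f"{i['item']}: {i['cost']:.2f} - {i['justification']} ({i['source']})")

total = sum(i["cost"] for i in budget_items)
print(f"Total requested: {total:.2f}")
```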

If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

 Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Once you’ve decided on your research objectives , you need to explain them in your paper, at the end of your problem statement .

Keep your research objectives clear and concise, and use appropriate verbs to accurately convey the work that you will carry out for each one.

I will compare …

A research aim is a broad statement indicating the general purpose of your research project. It should appear in your introduction at the end of your problem statement , before your research objectives.

Research objectives are more specific than your research aim. They indicate the specific ways you’ll address the overarching aim.

A PhD, which is short for philosophiae doctor (doctor of philosophy in Latin), is the highest university degree that can be obtained. In a PhD, students spend 3–5 years writing a dissertation , which aims to make a significant, original contribution to current knowledge.

A PhD is intended to prepare students for a career as a researcher, whether that be in academia, the public sector, or the private sector.

A master’s is a 1- or 2-year graduate degree that can prepare you for a variety of careers.

All master’s involve graduate-level coursework. Some are research-intensive and intend to prepare students for further study in a PhD; these usually require their students to write a master’s thesis . Others focus on professional training for a specific career.

Critical thinking refers to the ability to evaluate information and to be aware of biases or assumptions, including your own.

Like information literacy , it involves evaluating arguments, identifying and solving problems in an objective and systematic way, and clearly communicating your ideas.

The best way to remember the difference between a research plan and a research proposal is that they have fundamentally different audiences. A research plan helps you, the researcher, organize your thoughts. On the other hand, a dissertation proposal or research proposal aims to convince others (e.g., a supervisor, a funding body, or a dissertation committee) that your research topic is relevant and worthy of being conducted.

Cite this Scribbr article


McCombes, S. & George, T. (2023, November 21). How to Write a Research Proposal | Examples & Templates. Scribbr. Retrieved August 12, 2024, from https://www.scribbr.com/research-process/research-proposal/


Service evaluation: A grey area of research?

Affiliation: The University of Edinburgh, UK.

  • PMID: 29262740
  • DOI: 10.1177/0969733017742961

The National Health Service in the United Kingdom categorises research and research-like activities into five groups: 'service evaluation', 'clinical audit', 'surveillance', 'usual practice' and 'research'. Only activities classified as 'research' require review by the Research Ethics Committees. It is argued, in this position paper, that the current governance of research and research-like activities does not provide sufficient ethical oversight for projects classified as 'service evaluation'. The distinction between the categories of 'research' and 'service evaluation' can be a grey area. A considerable proportion of studies are considered non-research and are therefore not eligible to be reviewed by a Research Ethics Committee, which scrutinises research proposals rigorously to ensure they conform to established ethical standards, protecting research participants from harm, preserving their rights and providing reassurance to the public. This article explores the ethical discomfort potentially inherent in the activity currently labelled as 'service evaluation'.

Keywords: Ethics principles; ethics review; research; research ethics; service evaluation.


What to expect when you're evaluating healthcare improvement: a concordat approach to managing collaboration and uncomfortable realities
  • Liz Brewster (http://orcid.org/0000-0003-3604-2897),
  • Emma-Louise Aveling,
  • Graham Martin,
  • Carolyn Tarrant,
  • Mary Dixon-Woods,
  • The Safer Clinical Systems Phase 2 Core Group Collaboration & Writing Committee
  • Correspondence to Dr Liz Brewster, Department of Health Sciences, University of Leicester, Leicester LE1 6TP, UK; eb240@le.ac.uk

Evaluation of improvement initiatives in healthcare is essential to establishing whether interventions are effective and to understanding how and why they work in order to enable replication. Although valuable, evaluation is often complicated by tensions and friction between evaluators, implementers and other stakeholders. Drawing on the literature, we suggest that these tensions can arise from a lack of shared understanding of the goals of the evaluation; confusion about roles, relationships and responsibilities; data burdens; issues of data flows and confidentiality; the discomforts of being studied and the impact of disappointing or otherwise unwelcome results. We present a possible approach to managing these tensions involving the co-production and use of a concordat. We describe how we developed a concordat in the context of an evaluation of a complex patient safety improvement programme known as Safer Clinical Systems Phase 2. The concordat development process involved partners (evaluators, designers, funders and others) working together at the outset of the project to agree a set of principles to guide the conduct of the evaluation. We suggest that while the concordat is a useful resource for resolving conflicts that arise during evaluation, the process of producing it is perhaps even more important, helping to make explicit unspoken assumptions, clarify roles and responsibilities, build trust and establish open dialogue and shared understanding. The concordat we developed established some core principles that may be of value for others involved in evaluation to consider. But rather than seeing our document as a ready-made solution, there is a need for recognition of the value of the process of co-producing a locally agreed concordat in enabling partners in the evaluation to work together effectively.

  • Quality improvement
  • Patient safety
  • Evaluation methodology
  • Quality improvement methodologies

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/

https://doi.org/10.1136/bmjqs-2014-003732


Introduction

Meaningful evaluation has an essential role in the work of improving healthcare, especially in enabling learning to be shared. 1 Evaluations typically seek to identify the aims of an intervention or programme, find measurable indicators of achievement, collect data on these indicators and assess what was achieved against the original aims. 2 Evaluating whether a programme works is not necessarily the only purpose of evaluation, however: how and why may be equally important questions, 3, 4 especially in enabling apparently successful interventions to be reproduced. 5 Despite the potential benefits of such efforts, and the welcome given to evaluation by some who run programmes, the literature on programme evaluation has long acknowledged that evaluation can be a source of tension, friction and confusion of purpose:

[Evaluation] involves a balancing act between competing forces. Paramount among these is the inherent conflict between the requirements of systematic inquiry and data collection associated with evaluation research and the organizational imperatives of a social program devoted to delivering services and maintaining essential routine activities. 6

Healthcare is no exception to the general problems characteristic of programme evaluation: the concerns and interests of the different parties involved in an improvement project and its associated evaluation may not always converge. These parties may include the designers and implementers of interventions (without whose improvement work there would be nothing to evaluate), the evaluators (who may be a heterogeneous mix of different professional groups - including health professionals and others - or academics from different disciplines) and sometimes funders (who may be funding either the intervention, the evaluation or both). Each may have different goals, perspectives, expectations, priorities and interests, professional languages and norms of practice, and they may have very distinct accountabilities and audiences for their work. As a result, evaluation work may—and in fact, often does—present challenges for all involved, ranging from practicalities such as arranging access to data, through conceptual disagreements about the programme and what it is trying to achieve, to concerns about the impartiality and competence of the evaluation team, widely divergent definitions of success and many others. 6 Given that it is not unlikely these challenges will occur, the important question is how they can optimally be anticipated and managed. 7 , 8

This article seeks to make a practical contribution by presenting a possible approach to minimising the tensions. Specifically, we propose the co-production and use of a concordat —a mutually agreed compact between all parties, which articulates a set of principles to guide the conduct of the evaluation. The article proceeds in two parts. First, we identify the kinds of challenge often faced in the design, running and evaluation of an improvement programme in healthcare. Second, we present an example of the development of a concordat used in the evaluation of a major improvement project.

Challenges in conducting programme evaluations

Areas of possible tension and challenge in programme evaluation identified in the literature:

  • Securing full consensus on the specifics of evaluation objectives 34
  • Unpacking contrasting interpretations about what and who the evaluation is for 6
  • A desire on the part of evaluators to fix the goals for improvement programmes early in the evaluation process 35
  • Evolution of interventions (intentionally or unintentionally) during implementation 36 and ongoing negotiation about evaluation scope in relation to implementation evolution 37
  • Fear of evaluation being used for performance management 15
  • Mismatched interpretations of stakeholders' own role and other partners' roles 12, 14
  • An interpretation of evaluators as friends or confidants, risking a subsequent sense of betrayal 16
  • A lack of shared language or understanding if some partners lack familiarity with the methodological paradigm or data collection tools being proposed 13
  • Conflicts between the burden of evaluation data collection and the work of the programme 2
  • Previous experiences of the dubious value of evaluation leading to disengagement with current evaluation work 17
  • Tensions between an imperative to feed back findings and to respect principles of anonymity and confidentiality 38
  • Encountering the 'uncomfortable reality' that a service or intervention is not performing as planned or envisaged and objectives have not been met 18
  • Negotiations with gatekeepers about access to complete and accurate data in a timely fashion
  • A reluctance to share evaluation findings if they are seen as against the 'organisational zeitgeist' 20 or threaten identity and reputational claims 21
  • Pressure from partners, research sponsors or funders to alter the content or scope of the evaluation, 20 or to delay their publication 22

A critical first task for all parties is therefore to clarify what is to be achieved through evaluation. This allows an appropriate evaluation design to be formulated, but is also central to establishing a shared vision to underpin activity. This negotiation of purpose may be more or less formal, 11 but should be undertaken. The task is to settle questions about purpose and scope, remembering that agreements about these may unravel over the course of the activity. 12 Constant review and revisiting of the goals of the evaluation (as well as the goals of the improvement programme) may therefore be necessary to maintain dialogue and avoid unwarranted drift.

These early discussions are especially important in ensuring that all parties understand the methods and data collection procedures being used in the evaluation. 13 A lack of shared language and understanding may lead to confusion over why particular methods are being used, generating uncertainties or suspicion and undermining willingness to cooperate. Regardless of what form it takes, the burden of data collection can be off-putting for those being evaluated and those performing the evaluation. If the evaluation itself is too demanding, there may be conflicts between its requirements and doing the work of the programme. 2 For partner organisations, collecting data for evaluation may not seem as much of a priority as delivery, and the issue of who gets to control and benefit from the data they have worked so hard to collect may be difficult to resolve.

Even when agreement on goals and scope is reached early on and remains intact, complex evaluations create a multiplicity of possible lines of communication and accountability, as well as ambiguity about roles. Though the role of each party in a programme evaluation may seem self-evident (eg, one funds, one implements, one evaluates), in practice different parties may have mismatched interpretations both of their own role and of others’. Such blind spots can fatally derail collaborative efforts. 12 The role of the evaluator may be an especially complex one, viewed in different ways by different parties. 14 Outcomes-focused aspects of evaluation—aimed at assessing degree of success in achieving goals—may cast evaluators as ‘performance managers’. 15 But the process-focused aspects of evaluation—particularly where they involve frequent contact between evaluators and evaluated, as is usually the case with ethnographic study—may make evaluators seem like friendly confidants, risking a subsequent sense of betrayal. 16 Thus, evaluators may be seen as critical friends, co-investigators, facilitators or problem solvers by some, but also as unwelcome intruders who sit in judgement but do not get their hands dirty in the real work of delivering the programme and who have influence without responsibility.

Uncertainties about what information should be shared with whom, when and under what conditions may provide a further source of ethical dilemma, especially when unspoken assumptions and expectations are breached, damaging trust and undermining cooperative efforts. Evaluators must often balance the imperative to feed back findings to other stakeholders (especially, perhaps, the funders and clients of the evaluation) against the need to respect principles of anonymity and confidentiality in determining the limits of what can be fed back, to whom and in how much detail. For these reasons, role perceptions and understandings about information exchange (content and direction) need to be surfaced early in the programme—and revisited throughout—to avoid threats to an honest, critical and uncompromised evaluation process. This is especially important given the asymmetry that may arise between the various parties, which can lead to tensions about who is in charge and on what authority.

Sometimes, though perhaps not often, the challenges are such that implementers may feel that obstructing evaluation is more in line with their organisational interests. They may, for example, frustrate attempts to evaluate by providing inaccurate, incomplete or tardy data (quantitative or qualitative) or, where they are able to play the role of ‘gatekeeper’, simply deny access to data or key members of staff. A lack of engagement with the process may be fuelled by previous experiences of evaluation that was felt to be time-consuming or of dubious value. 17

Tensions do not, of course, end when the programme and evaluation are complete, and may indeed intensify when the results are published. Those involved in designing, delivering and funding a programme may set out with great optimism; they may invest huge energy, efforts and resource in a programme; they may be convinced of its benefits and success and they may want to be recognised and congratulated on their hard work and achievement. When evaluation findings are positive, they are likely to be welcomed. Robust evidence of the effectiveness of an intervention can be extremely valuable in providing weight to arguments for its uptake and spread, and positive findings from independent evaluation of large-scale improvement programmes help legitimise claims to success. But not every project succeeds, and an evaluation may result in some participants being confronted with the uncomfortable reality that their service or their intervention has not performed as well as they had hoped. 18 Such findings may provoke reactions of disappointment, anger and challenge: ‘for every evaluation finding there is equal and opposite criticism’. 19

When a programme falls short of realising its goals, analysis of the reasons for failure can produce huge net benefits for the wider community, not least in ensuring that future endeavours do not repeat the same mistakes. 2 But recognising this value can be difficult given the immediate disappointment that comes with failure. If the evaluation—and the resulting publications—does not present the organisation(s) involved in the intervention in a positive light, there may be a reluctance to ‘wash dirty linen in public’ 2 and resistance to the implications of findings, 20 especially where they threaten reputation. 21 Evaluators themselves may not be immune to pressures to compromise their impartiality. The literature contains cautionary examples of pressure from partners or research sponsors who wish to direct the content of the report or analysis, 20 or coercion from funders to limit the scope of evaluation, distort results or critically delay their publication. 22

A possible solution: developing a concordat

Managing risks of conflict and tension in the evaluation of improvement programmes:

  • All parties should agree on the purpose and scope of the evaluation upfront, but recognise that both may mutate over time and need to be revisited
  • An explicit statement of roles may ensure that understandings of the division of labour within an evaluation, and of the responsibilities and relationships that this implies, are shared
  • The expectations placed on each party in relation to data collection should be reasonable and feasible, and the methodological approach (in its basic principles) should be understood by all parties
  • Clear terms of reference concerning disclosure, dissemination and the limits of confidentiality are necessary from the start
  • All efforts should be made to avoid implementers experiencing discomfort about being studied: through ensuring all parties are fully briefed about the evaluation, sharing formative findings and ensuring appropriate levels of anonymity in reporting findings
  • Commitment to learning for the greatest collective benefit is the overriding duty of all parties involved; it follows from this that all parties should make an explicit commitment to ensuring sincere, honest and impartial reporting of evaluation findings

The programme we discuss, known as Safer Clinical Systems Phase 2, was a complex intervention in which eight organisations were trained to apply a new approach (adapted from high-risk industries) to the detection and management of risk in clinical settings. 26 The work was highly customised to the particularities of these settings. The programme involved a complicated nexus of actors, including the funder (the Health Foundation, a UK healthcare improvement charitable foundation); the technical support team (based at the University of Warwick Medical School), who designed the approach and provided training and support for the participating sites over a 2-year period; the eight healthcare organisations (‘implementers’) and the evaluation team (itself a three-university partnership led by the University of Leicester).

Developing the concordat and its content

The evaluation team drew on the literature and previous experience to anticipate potential points of conflict or frustration and to identify principles and values that could govern the relationships and promote cooperation. These were drawn together into the first draft of a document that we called a ‘concordat’. The evaluation team came up with the initial draft, which was then subject to extensive comment, discussion, refinement and revision by the technical support team and funders. The document went through multiple drafts based on feedback, including several meetings where evaluators, technical team and funders came up with possible areas of conflict and possible scenarios illustrating tensions, and tested these against the concordat. Once the final draft was agreed, it was signed by all three parties and shared with the participating sites.

The concordat in outline

  • Goals and values: outlining the partners and their commitment to the programme goal, shared learning, respect for dignity and integrity and open dialogue
  • Responsibilities of the evaluation team: summarising the purpose of the evaluation, making a commitment to accuracy in representation and reporting and seeking to minimise the burden on partners
  • Responsibilities of the support team: a synopsis of the remit of one partner's role in relation to the evaluation team and their agreed interaction
  • Responsibilities of participating sites: outlining how the sites will facilitate access to data for the evaluation team
  • Data collection: agreeing steps to minimise the burden of data collection on all partners and to share data as appropriate
  • Ethical issues: summarising issues about confidentiality, data security and working within appropriate ethical and governance frameworks
  • Publications: confirming a commitment to timely publication of findings, paying particular attention to the possibility of negative or critical findings
  • Feedback: outlining how formative feedback should be provided, received and actioned by appropriate partners

The concordat then set out the roles and responsibilities of each party, including, for example, an obligation to be even-handed for the evaluation team, and the commitment to sharing information openly on the part of the technical support team (box 3). The concordat also articulated the relationships between the different parties, emphasising the importance of critical distance and stressing that this was not a relationship of performance management. The concordat further sought to address potential disagreements relating to the measures used in the evaluation. Rather than delineate an exhaustive list of what those methods and data would be, the concordat set out the process through which measures would be negotiated and determined, and made explicit the principles concerning requests for and provision of data that would underpin this process (eg, the evaluation team should minimise duplicative demands for data, and the participating sites should provide timely and accurate data).

The values and ethical imperatives governing action and interactions were also made explicit; for example, arrangements around confidentiality, anonymity and dissemination were addressed, including expectations relating to authorship of published outputs. Principles relating to research governance and feedback sought both to mitigate unease at the prospect of evaluation while also enshrining certain inalienable principles that are required for high-quality evaluation: for example, it committed all parties to sharing outputs ahead of publication, but it also protected the impartiality of the evaluation team by making clear that they had the final say in the interpretation and presentation of evaluation findings (though this did not preclude other partners from publishing their own work). Importantly, the concordat sets out a framework that all parties committed to following if disputes did arise. These principles were invoked on a number of occasions during the Safer Clinical Systems evaluation, for example, when trying to reach agreement on measurement or to resolve ambiguities in the roles of the evaluation and support teams. The concordat was also invaluable in ensuring that boundaries and expectations did not have to be continually re-negotiated in response to organisational turbulence, given that the programme experienced frequent changes of personnel over its course.

Challenges in developing and using the concordat

Of course, neither the process nor the outcome of the concordat for this evaluation was without wrinkles. Some issues arose that had not been anticipated, and some tensions encountered from the start of the programme continued to cause difficulties. These challenges were in some respects unique to this particular context, but may provide general lessons to inform future evaluation work. For instance, the technical support team was charged with undertaking ‘learning capture’, which was not always easy to distinguish from evaluation, and it proved difficult to maintain clear boundaries about this scope. Future projects would benefit from earlier clarification of scope and roles.

The concordat took considerable time to develop and agree—around 6 months—in part because the process for developing the concordat was being worked on at the same time as developing the concordat itself. One consequence of this was that the participating sites (the implementers) were only given the opportunity to comment rather than engage as full partners. Future iterations should attempt to involve all parties earlier. We share this concordat and its process of development in part to facilitate the speedier creation of future similar agreements.

The concordat as a solution: how does developing a concordat support effective collaborative activity?

The development of a concordat makes concrete the principles underpinning evaluation as a collaborative activity, and the concordat itself has value as a symbolic, practical and actionable tool for setting expectations and supporting conflict resolution.

The concordat as a document provides mutually agreed foundational principles which can be revisited when difficulties arise. In this sense, the concordat has value as a guide and point of reference. 21 It also serves a symbolic function, in that it signals recognition—by all parties—of the centrality and importance of collaboration and a shared commitment to the process of evaluation. Formalising a collaborative agreement between parties, in the form of a non-binding contract, has the potential to promote a cooperative orientation among the parties involved and build trust. 27 , 28 That the concordat is written and literally signed up to by all parties is important, as this institutionalisation of the concordat makes it less susceptible to distortion over time and better able to ensure that mutual understanding is more than superficial. Further, because it is explicitly not a contract, it offers a means of achieving agreement on core principles, goals and values separate from any legal commitments, and it leaves open the possibility of negotiation and renegotiation.

Much of the value in developing a concordat, however, lies in the process of co-production by all parties—a case of ‘all plans are useless, but planning is indispensable’. Though we did not directly evaluate its use, we feel that its development had a number of benefits for all stakeholders. First, rather than waiting for contradictions to materialise as disruptive conflicts that impede the evaluation, the process of discussing and (re)drafting a concordat offers an opportunity to anticipate, identify and make explicit differences in interpretations and perspectives on various aspects of the joint activity. Each party must engage in a process of surfacing and reflecting on their own assumptions, interpretations and interests, and sharing these with other parties. This allows difference and alternative interpretations to be openly acknowledged (rather than denied or ignored)—a respectful act of recognition and a prerequisite of open dialogue. 29 , 30 Thus, the production of the concordat acts as a mechanism for establishing the kind of open dialogue and shared understanding so commonly exhorted.

Second, by explicitly reflecting on and articulating the various roles and contributions of each party, the concordat-building process helps to foreground the contribution that each partner makes to the project and its evaluation, showing that all are interdependent and necessary. 7 This emphasis on the distributed nature of contributions can help to offset the dominance of asymmetrical, hierarchical positionings (such as evaluator and evaluated, funder and funded, for example). 21 It can therefore enable all those involved to see the opportunities as well as the challenges within an evaluation process, and reinforce a shared understanding of the value of a systematic, well-conducted evaluation.

Conclusions

Programme evaluation is important to advancing the science of improvement. But it is unrealistic to suppose that there will be no conflict within an evaluation situation involving competing needs, priorities and interests: the management of these tensions is key to ensuring that a productive collaboration is maintained. Drawing on empirical and theoretical literature, and our own experience, we have outlined a practical approach—co-production and use of a concordat—designed to optimise and sustain the collaboration on which evaluation activity depends. A concordat is no substitute for sincere, faithful commitment to an ethic of learning on the part of all involved parties, 31 and even with goodwill from all parties, it may not succeed in eliminating discord entirely. Nonetheless, in complex, challenging situations, having a clear set of values and principles that all parties have worked through is better than not having one.

A concordat offers a useful component in planning an evaluation that runs smoothly by providing a framework for both anticipating and resolving conflict in collaborative activity. This approach is premised on recognition that evaluation depends on collaboration between diverse parties, and is therefore, by its collective nature, prone to tension about multiple areas of practice. 32 Key to the potential of a concordat is its value, first, as an institutionalised agreement to be used as a framework for conflict resolution during evaluation activity, and, second, as a mechanism through which potential conflicts can be anticipated, made explicit and acknowledged before they arise, thereby establishing dialogue and a shared understanding of the purpose, roles, methods and procedures entailed in the evaluation.

The concordat we developed for the Safer Clinical Systems evaluation (see online supplementary appendix 1) is not intended to be used directly as a template for others, although, with appropriate acknowledgement, its principles could potentially be adapted and used. Understanding the principles behind the use of a concordat (how and why it works) is critical. 33 In accordance with the rationale behind the concordat approach, we do not advocate that other collaborations simply adopt this example of a concordat ‘as is’. To do so would eliminate a crucial component of its value—the process of collective co-production. The process of articulating potential challenges in the planned collaboration, and testing drafts of the concordat against these, is particularly important in helping to uncover the implicit assumptions and expectations held by different parties, and to identify ambiguities about roles and relationships. All parties must be involved, in order to secure local ownership and capitalise on the opportunity to anticipate and surface tensions, establish dialogue and a shared vision and foreground the positive interdependence of all parties.

Acknowledgments

We thank all of the many organisations and individuals who participated in this programme for their generosity and support, and our collaborators and the advisory group for the programme. Mary Dixon-Woods thanks the University of Leicester for grant of study leave and the Dartmouth Institute for Health Policy and Clinical Practice for hosting her during the writing of this paper.


Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Files in this Data Supplement:

  • Data supplement 1 - Online appendix

Collaborators The Safer Clinical Systems Phase 2 Core Group Collaboration & Writing Committee, Nick Barber, Julian Bion, Matthew Cooke, Steve Cross, Hugh Flanagan, Christine Goeschel, Rose Jarvis, Peter Pronovost and Peter Spurgeon.

Contributors All authors contributed to and approved the final manuscript.

Funding Wellcome Trust, Health Foundation.

Competing interests Mary Dixon-Woods is deputy editor-in-chief of BMJ Quality and Safety .

Provenance and peer review Not commissioned; externally peer reviewed.


Service Evaluation


A service evaluation will seek to answer “What standard does this service achieve?”. It won’t reference a predetermined standard and will not change the care a patient receives.

Where an evaluation involves patients and staff, it is best practice to seek their consent to take part. The formality of the consent process can vary, from simply asking someone whether it is OK to seek their feedback on the service, through to a formal process using consent forms and information leaflets.

What is a service evaluation?

Many people make claims about their services without sound evidence to inform their judgements. A well-planned and well-executed service evaluation will provide:

  • Evidence to demonstrate value for money.
  • A baseline from which to measure change.
  • Evidence to demonstrate effectiveness.
  • Evidence to demonstrate efficiency.
  • Evidence to demonstrate benefits and added value.

Service evaluations do not require NHS Research Ethics review but do need to be registered with, and approved by, the Research and Evidence Department before they can commence. This will include checks that the project is feasible (e.g., the service has capacity and appropriate timeframes) and carried out in line with Trust standards. Registration ensures the R&E Department knows what type of activity (research and service evaluations) is happening in which services.

Please note, the Trust primarily processes data to deliver healthcare. The data cannot be used for other purposes unless the law allows ‘secondary processing’. Research and service evaluation are both considered secondary processing.

Data collected through service evaluations must be obtained, recorded and stored as defined by all relevant data standards, including the General Data Protection Regulation (GDPR) and the Data Protection Act 2018. Your evaluation must be designed so that privacy is built into every aspect that involves data.

A great source of further information is also available from Evaluation Works.

What do I need to do to undertake a service evaluation?

Be clear about what you want to evaluate.

This will shape how you conduct the evaluation and define what information you will need to collect and where those data are held. Your project should also link to at least one of the Trust’s Quality Priorities.

Identify all stakeholders

Stakeholders in the success of your evaluation may include, amongst others, service managers, commissioners, staff and patients. Engage them as early as possible so that they understand your work; involving them early will also help you shape your application and ensure it is successfully delivered.

Plan your project

Planning is paramount and needs to include an honest assessment of how long it will take to produce the necessary paperwork and seek approvals. You must also consider how and where data will be recorded and stored. Wherever possible you should use non-identifiable data. Data must be anonymised or pseudonymised and kept secure within the Trust. The Information Commissioner's Office defines ‘anonymised data’ as “…data that does not itself identify any individual and that is unlikely to allow any individual to be identified through its combination with other data.” Where agreed by the Research & Evidence Department, only fully anonymised data may be taken outside the Trust (e.g. to be stored on university servers).
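Purely as an illustrative sketch, and not Trust guidance, the snippet below shows one common way to pseudonymise a direct identifier with a keyed hash before analysis. The field names, example values and key-handling choice are assumptions; note that data treated this way are pseudonymised rather than anonymised, so under the guidance above they must remain within the Trust.

```python
# Illustration only: keyed pseudonymisation of a direct identifier before analysis.
# Field names, the example value and the key handling are hypothetical, not Trust policy.
import hashlib
import hmac

# In practice the key would be generated and held securely within the Trust.
SECRET_KEY = b"replace-with-a-key-held-securely-within-the-Trust"

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for an identifier using HMAC-SHA256.

    Without the secret key the original value cannot be recovered or recomputed,
    so analysis can proceed without handling the direct identifier itself.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"nhs_number": "9434765919", "feedback_score": 4}  # made-up example record
record["pseudo_id"] = pseudonymise(record.pop("nhs_number"))
print(record)  # the direct identifier is replaced by a 64-character hex pseudonym
```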

All project members – Trust employees and those on honorary contracts – are required to have completed the Trust’s annual Information Governance (IG) training. A paper version can be provided where members do not have electronic access. Project members who receive only anonymised data are not required to complete the Trust’s IG training.

Develop your paperwork

The Service Evaluation (SE1) form sets out the information you need to provide. This covers the aims, objectives (what you will do to meet the aims), methodology (including data handling), analysis, and dissemination. Please follow the additional guidance on the SE1 form so that the form is completed in full.

Depending on how you plan to gather the data for your evaluation, you may need a participant information sheet and a consent form. You should seek appropriate informed consent from participants if you are asking them to do more than what is covered by the Friends & Family Test. Consent should be explicit, whether verbal or written; written consent is needed where participants are identifiable, where their identifiable data are involved, or where qualitative methods are being used. The Research and Evidence (R&E) department can help you determine what you will need; templates are available.

All forms must be version-controlled so that only approved versions are used.

Your project must be approved by, and registered with, the R&E department.

Prior to submission, applicants are encouraged to discuss their proposed project at one of the Trust’s monthly Research Clinics via MS Teams. This can help to improve the application. Please contact [email protected] to book on to a Research Clinic.

Apply to register your project with the Research and Evidence department

The completed SE1 form and any additional documents (e.g., copies of data collection forms and any interview topic guides) should be sent to [email protected] . Incomplete forms will be returned to the applicant to request completion. The review will not start until we have received a completed SE1 form and all relevant accompanying documents listed on the SE1 form. Please contact the R&E Department if you need advice.

Following an initial review by the R&E Compliance team, it is likely you will be invited to discuss your application at a Service Evaluation Review Panel (SERP). This is a supportive discussion to help approve the project. The SERP comprises representatives from relevant departments (e.g., R&E, Information Assurance) and allows applications to be processed more efficiently, as it gives the applicant the opportunity to address any queries or concerns.

Applicants are expected to address any outstanding actions resulting from the SERP and submit responses. The application will be approved when any actions have been completed.

For non-Trust staff, a research passport / Letter of Access may be required where the applicant does not hold a contract with the Trust. This will be addressed at the SERP.

Applicants must not start their service evaluation until they receive written approval from the R&E Department.

Conducting your project

With good planning and engaged stakeholders your project should run according to plan. Where problems do arise, it is usually because something was not considered or the planning was not thorough enough.

Where relevant, the service evaluation lead will be asked to provide ‘recruitment’ figures quarterly to the R&E Department. This is so we can monitor the level of evaluation activity in the Trust. The study team should notify the R&E Dept when they have stopped recruiting participants.

Where amendments to the evaluation are necessary (e.g., timeframes, changes to the study team), the applicant should discuss and agree these with the R&E Compliance Team beforehand. This should prevent deviations to the approved study.

Writing up your project

It is important that your project is written up at the end to ensure what was learnt can be shared and used to make improvements. A Final Report (FR1) template is available. This is the minimum feedback that should be provided. You should send the final evaluation report to [email protected] so that we can put a copy on our Connect page.

In addition to providing the final report, the study team should present to the service or other relevant Trust meetings if requested.

We also want to capture the impact of your service evaluation so please give some thought as to how this can be achieved.

Forms, templates and guidance documents for service evaluations

  • SE1 Form - Application for approval to conduct a Service Evaluation template v5.1.docx [docx] 53KB  – for applying to seek approval for a service evaluation
  • GD-E002 Service Evaluations guidance for applicants v1.1 Jan22.pdf [pdf] 251KB  – for additional guidance on applying to do a service evaluation
  • FR1 Form - Service Evaluation Final Report template (v2.0).docx [docx] 34KB  – a template for a Final Report
  • T_SE Participant Information Sheet v1.0 Feb22.docx [docx] 328KB – a template with suggested text for a Participant Information Sheet
  • T_SE Consent Form v1.0 Feb22.docx [docx] 333KB – a template with suggested text for a Consent Form


© Nottinghamshire Healthcare NHS Foundation Trust 2024



Evaluation of research proposals by peer review panels: broader panels for broader assessments?


Rebecca Abma-Schouten, Joey Gijbels, Wendy Reijmerink, Ingeborg Meijer, Evaluation of research proposals by peer review panels: broader panels for broader assessments?, Science and Public Policy , Volume 50, Issue 4, August 2023, Pages 619–632, https://doi.org/10.1093/scipol/scad009


Panel peer review is widely used to decide which research proposals receive funding. Through this exploratory observational study at two large biomedical and health research funders in the Netherlands, we gain insight into how scientific quality and societal relevance are discussed in panel meetings. We explore, in ten review panel meetings of biomedical and health funding programmes, how panel composition and formal assessment criteria affect the arguments used. We observe that more scientific arguments are used than arguments related to societal relevance and expected impact. Also, more diverse panels result in a wider range of arguments, largely for the benefit of arguments related to societal relevance and impact. We discuss how funders can contribute to the quality of peer review by creating a shared conceptual framework that better defines research quality and societal relevance. We also contribute to a further understanding of the role of diverse peer review panels.

Scientific biomedical and health research is often supported by project or programme grants from public funding agencies such as governmental research funders and charities. Research funders primarily rely on peer review, often a combination of independent written review and discussion in a peer review panel, to inform their funding decisions. Peer review panels have the difficult task of integrating and balancing the various assessment criteria to select and rank the eligible proposals. With the increasing emphasis on societal benefit and being responsive to societal needs, the assessment of research proposals ought to include broader assessment criteria, including both scientific quality and societal relevance, and a broader perspective on relevant peers. This results in new practices of including non-scientific peers in review panels ( Del Carmen Calatrava Moreno et al. 2019 ; Den Oudendammer et al. 2019 ; Van den Brink et al. 2016 ). Relevant peers, in the context of biomedical and health research, include, for example, health-care professionals, (healthcare) policymakers, and patients as the (end-)users of research.

Currently, in scientific and grey literature, much attention is paid to what legitimate criteria are and to deficiencies in the peer review process, for example, focusing on the role of chance and the difficulty of assessing interdisciplinary or ‘blue sky’ research ( Langfeldt 2006 ; Roumbanis 2021a ). Our research primarily builds upon the work of Lamont (2009) , Huutoniemi (2012) , and Kolarz et al. (2016) . Their work articulates how the discourse in peer review panels can be understood by giving insight into disciplinary assessment cultures and social dynamics, as well as how panel members define and value concepts such as scientific excellence, interdisciplinarity, and societal impact. At the same time, there is little empirical work on what actually is discussed in peer review meetings and to what extent this is related to the specific objectives of the research funding programme. Such observational work is especially lacking in the biomedical and health domain.

The aim of our exploratory study is to learn what arguments panel members use in a review meeting when assessing research proposals in biomedical and health research programmes. We explore how arguments used in peer review panels are affected by (1) the formal assessment criteria and (2) the inclusion of non-scientific peers in review panels, also called (end-)users of research, societal stakeholders, or societal actors. We add to the existing literature by focusing on the actual arguments used in peer review assessment in practice.

To this end, we observed ten panel meetings across eight biomedical and health research programmes at two large research funders in the Netherlands: the governmental research funder The Netherlands Organisation for Health Research and Development (ZonMw) and the charitable research funder the Dutch Heart Foundation (DHF). Our first research question focuses on what arguments panel members use when assessing research proposals in a review meeting. The second examines to what extent these arguments correspond with the formal criteria on scientific quality and societal impact creation, as described in the programme brochure and assessment form. The third question focuses on how the arguments used differ between panel members with different perspectives.

2.1 Relation between science and society

To understand the dual focus of scientific quality and societal relevance in research funding, a theoretical understanding and a practical operationalisation of the relation between science and society are needed. The conceptualisation of this relationship affects both who are perceived as relevant peers in the review process and the criteria by which research proposals are assessed.

The relationship between science and society is neither constant over time nor static, and it remains much debated. Scientific knowledge can have a huge impact on societies, either intended or unintended. Vice versa, the social environment and structure in which science takes place influence the rate of development, the topics of interest, and the content of science. However, the second part of this inter-relatedness between science and society generally receives less attention ( Merton 1968 ; Weingart 1999 ).

From a historical perspective, scientific and technological progress contributed to the view that science was valuable on its own account and that science and the scientist stood independent of society. While this protected science from unwarranted political influence, societal disengagement with science eroded its authority and fuelled debate about its contribution to society. This interdependence and mutual influence contributed to a modern view of science in which knowledge development is valued both on its own merit and for its impact on, and interaction with, society. As such, societal factors and problems are important drivers for scientific research. This means that the relation and boundaries between science, society, and politics need to be organised and constantly reinforced and reiterated ( Merton 1968 ; Shapin 2008 ; Weingart 1999 ).

Glerup and Horst (2014) conceptualise the value of science to society and the role of society in science in four rationalities that reflect different justifications for their relation and thus also for who is responsible for (assessing) the societal value of science. The rationalities are arranged along two axes: one is related to the internal or external regulation of science and the other is related to either the process or the outcome of science as the object of steering. The first two rationalities of Reflexivity and Demarcation focus on internal regulation in the scientific community. Reflexivity focuses on the outcome. Central is that science, and thus, scientists should learn from societal problems and provide solutions. Demarcation focuses on the process: science should continuously question its own motives and methods. The latter two rationalities of Contribution and Integration focus on external regulation. The core of the outcome-oriented Contribution rationality is that scientists do not necessarily see themselves as ‘working for the public good’. Science should thus be regulated by society to ensure that outcomes are useful. The central idea of the process-oriented Integration rationality is that societal actors should be involved in science in order to influence the direction of research.

Research funders can be seen as external or societal regulators of science. They can focus on organising the process of science, Integration, or on scientific outcomes that function as solutions for societal challenges, Contribution. In the Contribution perspective, a funder could enhance outside (societal) involvement in science to ensure that scientists take responsibility to deliver results that are needed and used by society. From Integration follows that actors from science and society need to work together in order to produce the best results. In this perspective, there is a lack of integration between science and society and more collaboration and dialogue are needed to develop a new kind of integrative responsibility ( Glerup and Horst 2014 ). This argues for the inclusion of other types of evaluators in research assessment. In reality, these rationalities are not mutually exclusive and also not strictly separated. As a consequence, multiple rationalities can be recognised in the reasoning of scientists and in the policies of research funders today.

2.2 Criteria for research quality and societal relevance

The rationalities of Glerup and Horst have consequences for which language is used to discuss societal relevance and impact in research proposals. Even though the main ingredients are quite similar, as a consequence of the coexisting rationalities in science, societal aspects can be defined and operationalised in different ways ( Alla et al. 2017 ). In the definition of societal impact by Reed, emphasis is placed on the outcome: the contribution to society. It includes the significance for society, the size of potential impact, and the reach, the number of people or organisations benefiting from the expected outcomes ( Reed et al. 2021 ). Other models and definitions focus more on the process of science and its interaction with society. Spaapen and Van Drooge introduced productive interactions in the assessment of societal impact, highlighting a direct contact between researchers and other actors. A key idea is that the interaction in different domains leads to impact in different domains ( Meijer 2012 ; Spaapen and Van Drooge 2011 ). Definitions that focus on the process often refer to societal impact as (1) something that can take place in distinguishable societal domains, (2) something that needs to be actively pursued, and (3) something that requires interactions with societal stakeholders (or users of research) ( Hughes and Kitson 2012 ; Spaapen and Van Drooge 2011 ).

Glerup and Horst show that process and outcome-oriented aspects can be combined in the operationalisation of criteria for assessing research proposals on societal aspects. Also, the funders participating in this study include the outcome (the value created in different domains) and the process (productive interactions with stakeholders) in their formal assessment criteria for societal relevance and impact. Different labels are used for these criteria, such as societal relevance, societal quality, and societal impact ( Abma-Schouten 2017 ; Reijmerink and Oortwijn 2017 ). In this paper, we use societal relevance or societal relevance and impact.

Scientific quality in research assessment frequently refers to all aspects and activities in the study that contribute to the validity and reliability of the research results and that contribute to the integrity and quality of the research process itself. The criteria commonly include the relevance of the proposal for the funding programme, the scientific relevance, originality, innovativeness, methodology, and feasibility ( Abdoul et al. 2012 ). Several studies demonstrated that quality is seen as not only a rich concept but also a complex concept in which excellence and innovativeness, methodological aspects, engagement of stakeholders, multidisciplinary collaboration, and societal relevance all play a role ( Geurts 2016 ; Roumbanis 2019 ; Scholten et al. 2018 ). Another study showed a comprehensive definition of ‘good’ science, which includes creativity, reproducibility, perseverance, intellectual courage, and personal integrity. It demonstrated that ‘good’ science involves not only scientific excellence but also personal values and ethics, and engagement with society ( Van den Brink et al. 2016 ). Noticeable in these studies is the connection made between societal relevance and scientific quality.

In summary, the criteria for scientific quality and societal relevance are conceptualised in different ways, and perspectives on the role of societal value creation and the involvement of societal actors vary strongly. Research funders hence have to pay attention to what the criteria mean to the panel members they recruit, and to navigate and negotiate how the criteria are applied in assessing research proposals. To be able to do so, more insight is needed into which elements of scientific quality and societal relevance are discussed in practice by peer review panels.

2.3 Role of funders and societal actors in peer review

National governments and charities are important funders of biomedical and health research. How this funding is distributed varies per country. Project funding is frequently allocated based on research programming by specialised public funding organisations, such as the Dutch Research Council in the Netherlands and ZonMw for health research. The DHF, the second largest private non-profit research funder in the Netherlands, provides project funding ( Private Non-Profit Financiering 2020 ). Funders, as so-called boundary organisations, can act as key intermediaries between government, science, and society ( Jasanoff 2011 ). Their responsibility is to develop effective research policies connecting societal demands and scientific ‘supply’. This includes setting up and executing fair and balanced assessment procedures ( Sarewitz and Pielke 2007 ). Herein, the role of societal stakeholders is receiving increasing attention ( Benedictus et al. 2016 ; De Rijcke et al. 2016 ; Dijstelbloem et al. 2013 ; Scholten et al. 2018 ).

All charitable health research funders in the Netherlands have, in the last decade, included patients at different stages of the funding process, including in assessing research proposals ( Den Oudendammer et al. 2019 ). To facilitate research funders in involving patients in assessing research proposals, the federation of Dutch patient organisations set up an independent reviewer panel with (at-risk) patients and direct caregivers ( Patiëntenfederatie Nederland, n.d .). Other foundations have set up societal advisory panels including a wider range of societal actors than patients alone. The Committee Societal Quality (CSQ) of the DHF includes, for example, (at-risk) patients and a wide range of cardiovascular health-care professionals who are not active as academic researchers. This model is also applied by the Diabetes Foundation and the Princess Beatrix Muscle Foundation in the Netherlands ( Diabetesfonds, n.d .; Prinses Beatrix Spierfonds, n.d .).

In 2014, the Lancet presented a series of five papers about biomedical and health research known as the ‘increasing value, reducing waste’ series ( Macleod et al. 2014 ). The authors addressed several issues as well as potential solutions that funders can implement. They highlight, among others, the importance of improving the societal relevance of the research questions and including the burden of disease in research assessment in order to increase the value of biomedical and health science for society. A better understanding of and an increasing role of users of research are also part of the described solutions ( Chalmers et al. 2014 ; Van den Brink et al. 2016 ). This is also in line with the recommendations of the 2013 Declaration on Research Assessment (DORA) ( DORA 2013 ). These recommendations influence the way in which research funders operationalise their criteria in research assessment, how they balance the judgement of scientific and societal aspects, and how they involve societal stakeholders in peer review.

2.4 Panel peer review of research proposals

To assess research proposals, funders rely on the services of peer experts to review the thousands or perhaps millions of research proposals seeking funding each year. While often associated with scholarly publishing, peer review also includes the ex ante assessment of research grant and fellowship applications ( Abdoul et al. 2012 ). Peer review of proposals often includes a written assessment of a proposal by an anonymous peer and a peer review panel meeting to select the proposals eligible for funding. Peer review is an established component of professional academic practice, is deeply embedded in the research culture, and essentially consists of experts in a given domain appraising the professional performance, creativity, and/or quality of scientific work produced by others in their field of competence ( Demicheli and Di Pietrantonj 2007 ). The history of peer review as the default approach for scientific evaluation and accountability is, however, relatively young. While the term was unheard of in the 1960s, by 1970, it had become the standard. Since that time, peer review has become increasingly diverse and formalised, resulting in more public accountability ( Reinhart and Schendzielorz 2021 ).

While many studies have been conducted concerning peer review in scholarly publishing, peer review in grant allocation processes has been less discussed ( Demicheli and Di Pietrantonj 2007 ). The most extensive work on this topic has been conducted by Lamont (2009) . Lamont studied peer review panels in five American research funding organisations, including observing three panels. Other examples include Roumbanis’s ethnographic observations of ten review panels at the Swedish Research Council in natural and engineering sciences ( Roumbanis 2017 , 2021a ). Also, Huutoniemi was able to study, but not observe, four panels on environmental studies and social sciences of the Academy of Finland ( Huutoniemi 2012 ). Additionally, Van Arensbergen and Van den Besselaar (2012) analysed peer review through interviews and by analysing the scores and outcomes at different stages of the peer review process in a talent funding programme. Particularly interesting is the study by Luo and colleagues of 164 written panel review reports, which showed that reviews from panels that included non-scientific peers described broader and more concrete impact topics. Mixed panels also more often connected research processes and characteristics of applicants with impact creation ( Luo et al. 2021 ).

While these studies primarily focused on peer review panels in other disciplinary domains or are based on interviews or reports instead of direct observations, we believe that many of the findings are relevant to the functioning of panels in the context of biomedical and health research. From this literature, we learn to have realistic expectations of peer review. It is inherently difficult to predict in advance which research projects will provide the most important findings or breakthroughs ( Lee et al. 2013 ; Pier et al. 2018 ; Roumbanis 2021a , 2021b ). At the same time, these limitations may not substantiate the replacement of peer review by another assessment approach ( Wessely 1998 ). Many topics addressed in the literature are inter-related and relevant to our study, such as disciplinary differences and interdisciplinarity, social dynamics and their consequences for consistency and bias, and suggestions to improve panel peer review ( Lamont and Huutoniemi 2011 ; Lee et al. 2013 ; Pier et al. 2018 ; Roumbanis 2021a , b ; Wessely 1998 ).

Different scientific disciplines show different preferences and beliefs about how to build knowledge and thus have different perceptions of excellence. However, panellists are willing to respect and acknowledge other standards of excellence ( Lamont 2009 ). Evaluation cultures also differ between scientific fields. Science, technology, engineering, and mathematics panels might, in comparison with panellists from social sciences and humanities, be more concerned with the consistency of the assessment across panels and therefore with clear definitions and uses of assessment criteria ( Lamont and Huutoniemi 2011 ). However, much is still to learn about how panellists’ cognitive affiliations with particular disciplines unfold in the evaluation process. Therefore, the assessment of interdisciplinary research is much more complex than just improving the criteria or procedure because less explicit repertoires would also need to change ( Huutoniemi 2012 ).

Social dynamics play a role as panellists may differ in their motivation to engage in allocation processes, which could create bias ( Lee et al. 2013 ). Placing emphasis on meeting established standards or thoroughness in peer review may promote uncontroversial and safe projects, especially in a situation where strong competition puts pressure on experts to reach a consensus ( Langfeldt 2001 ,2006 ). Personal interest and cognitive similarity may also contribute to conservative bias, which could negatively affect controversial or frontier science ( Luukkonen 2012 ; Roumbanis 2021a ; Travis and Collins 1991 ). Central in this part of literature is that panel conclusions are the outcome of and are influenced by the group interaction ( Van Arensbergen et al. 2014a ). Differences in, for example, the status and expertise of the panel members can play an important role in group dynamics. Insights from social psychology on group dynamics can help in understanding and avoiding bias in peer review panels ( Olbrecht and Bornmann 2010 ). For example, group performance research shows that more diverse groups with complementary skills make better group decisions than homogenous groups. Yet, heterogeneity can also increase conflict within the group ( Forsyth 1999 ). Therefore, it is important to pay attention to power dynamics and maintain team spirit and good communication ( Van Arensbergen et al. 2014a ), especially in meetings that include both scientific and non-scientific peers.

The literature also provides funders with starting points to improve the peer review process. For example, the explicitness of review procedures positively influences the decision-making processes ( Langfeldt 2001 ). Strategic voting and decision-making appear to be less frequent in panels that rate than in panels that rank proposals. Also, an advisory instead of a decisional role may improve the quality of the panel assessment ( Lamont and Huutoniemi 2011 ).

Despite different disciplinary evaluative cultures, formal procedures, and criteria, panel members with different backgrounds develop shared customary rules of deliberation that facilitate agreement and help avoid situations of conflict ( Huutoniemi 2012 ; Lamont 2009 ). This is a necessary prerequisite for opening up peer review panels to include non-academic experts. When doing so, it is important to realise that panel review is a social, emotional, and interactional process. It is therefore important to also take these non-cognitive aspects into account when studying cognitive aspects ( Lamont and Guetzkow 2016 ), as we do in this study.

In summary, what we learn from the literature is that (1) the specific criteria to operationalise scientific quality and societal relevance of research are important, (2) the rationalities from Glerup and Horst predict that not everyone values societal aspects and involve non-scientists in peer review to the same extent and in the same way, (3) this may affect the way peer review panels discuss these aspects, and (4) peer review is a challenging group process that could accommodate other rationalities in order to prevent bias towards specific scientific criteria. To disentangle these aspects, we have carried out an observational study of a diverse range of peer review panel sessions using a fixed set of criteria focusing on scientific quality and societal relevance.

3.1 Research assessment at ZonMw and the DHF

The peer review approach and the criteria used by both the DHF and ZonMw are largely comparable. Funding programmes at both organisations start with a brochure describing the purposes, goals, and conditions for research applications, as well as the assessment procedure and criteria. Both organisations apply a two-stage process. In the first phase, reviewers are asked to write a peer review. In the second phase, a panel reviews the application based on the advice of the written reviews and the applicants’ rebuttal. The panels advise the board on eligible proposals for funding including a ranking of these proposals.

There are also differences between the two organisations. At ZonMw, the criteria for societal relevance and quality are operationalised in the ZonMw Framework Fostering Responsible Research Practices ( Reijmerink and Oortwijn 2017 ). This contributes to a common operationalisation of both quality and societal relevance on the level of individual funding programmes. Important elements in the criteria for societal relevance are, for instance, stakeholder participation, (applying) holistic health concepts, and the added value of knowledge in practice, policy, and education. The framework was developed to optimise the funding process from the perspective of knowledge utilisation and includes concepts like productive interactions and Open Science. It is part of the ZonMw Impact Assessment Framework aimed at guiding the planning, monitoring, and evaluation of funding programmes ( Reijmerink et al. 2020 ). At ZonMw, interdisciplinary panels are set up specifically for each funding programme. Panels are interdisciplinary in nature with academics of a wide range of disciplines and often include non-academic peers, like policymakers, health-care professionals, and patients.

At the DHF, the criteria for scientific quality and societal relevance, at the DHF called societal impact , find their origin in the strategy report of the advisory committee CardioVascular Research Netherlands ( Reneman et al. 2010 ). This report forms the basis of the DHF research policy focusing on scientific and societal impact by creating national collaborations in thematic, interdisciplinary research programmes (the so-called consortia) connecting preclinical and clinical expertise into one concerted effort. An International Scientific Advisory Committee (ISAC) was established to assess these thematic consortia. This panel consists of international scientists, primarily with expertise in the broad cardiovascular research field. The DHF criteria for societal impact were redeveloped in 2013 in collaboration with their CSQ. This panel assesses and advises on the societal aspects of proposed studies. The societal impact criteria include the relevance of the health-care problem, the expected contribution to a solution, attention to the next step in science and towards implementation in practice, and the involvement of and interaction with (end-)users of research (R.Y. Abma-Schouten and I.M. Meijer, unpublished data). Peer review panels for consortium funding are generally composed of members of the ISAC, members of the CSQ, and ad hoc panel members relevant to the specific programme. CSQ members often have a pre-meeting before the final panel meetings to prepare and empower CSQ representatives participating in the peer review panel.

3.2 Selection of funding programmes

To compare and evaluate observations between the two organisations, we selected funding programmes that were relatively comparable in scope and aims. The criteria were (1) a translational and/or clinical objective and (2) a selection procedure with review panels responsible for the (final) relevance and quality assessment of grant applications. In total, we selected eight programmes: four at each organisation. At the DHF, two programmes were chosen in which the CSQ did not participate, to better disentangle the role of panel composition. For each programme, we observed the selection process, varying from one session on one day (taking 2–8 h) to multiple sessions over several days. Ten sessions were observed in total, of which eight were final peer review panel meetings and two were CSQ meetings preparing for the panel meeting.

After management approval for the study in both organisations, we asked programme managers and panel chairpersons of the programmes that were selected for their consent for observation; none refused participation. Panel members were, in a passive consent procedure, informed about the planned observation and anonymous analyses.

To ensure the independence of this evaluation, the selection of the grant programmes, and peer review panels observed, was at the discretion of the project team of this study. The observations and supervision of the analyses were performed by the senior author not affiliated with the funders.

3.3 Observation matrix

Given the lack of a common operationalisation for scientific quality and societal relevance, we decided to use an observation matrix with a fixed set of detailed aspects as a gold standard to score the brochures, the assessment forms, and the arguments used in panel meetings. The matrix used for the observations of the review panels was based upon and adapted from a ‘grant committee observation matrix’ developed by Van Arensbergen. The original matrix informed a literature review on the selection of talent through peer review and the social dynamics in grant review committees ( van Arensbergen et al. 2014b ). The matrix includes four categories of aspects that operationalise societal relevance, scientific quality, committee, and applicant (see  Table 1 ). The aspects of scientific quality and societal relevance were adapted to fit the operationalisation of scientific quality and societal relevance of the organisations involved. The aspects concerning societal relevance were derived from the CSQ criteria, and the aspects concerning scientific quality were based on the scientific criteria of the first panel observed. The four argument types related to the panel were kept as they were. This committee-related category reflects statements that are related to the personal experience or preference of a panel member and can be seen as signals for bias. This category also includes statements that compare a project with another project without further substantiation. The three applicant-related arguments in the original observation matrix were extended with a fourth on social skills in communication with society. We added health technology assessment (HTA) because one programme specifically focused on this aspect. We tested our version of the observation matrix in pilot observations.

Table 1. Aspects included in the observation matrix and examples of arguments.

Criterion: scientific quality
  • Fit in programme objectives: ‘This disease is underdiagnosed, and undertreated, and therefore fits the criteria of this call very well.’; ‘Might have a relevant impact on patient care, but to what extent does it align with the aims of this programme.’
  • Match science and health-care problem: ‘It is not properly compared to the current situation (standard of care).’; ‘Super relevant application with a fitting plan, perhaps a little too mechanistic.’
  • International competitiveness: ‘Something is done all over the world, but they do many more evaluations, however.’
  • Feasibility of the aims: ‘… because this is a discovery study the power calculation is difficult, but I would recommend to increase the sample size.’; ‘It’s very risky, because this is an exploratory … study without hypotheses.’; ‘The aim is to improve …, but there is no control to compare with.’; ‘Well substantiated that they are able to achieve the objectives.’
  • Plan of work: ‘Will there be enough cases in this cohort?’; ‘The budget is no longer correct.’; ‘Plan is good, but … doubts about the approach, because too little information….’

Criterion: societal relevance
  • Health-care problem: ‘Relevant problem for a small group.’; ‘… but is this a serious health condition?’; ‘Prevalence is low, but patients do die, morbidity is very high.’
  • Contribution to solution: ‘What will this add since we already do…?’; ‘It is unclear what the intervention will be after the diagnosis.’; ‘Relevance is not good. Side effects are not known and neither is effectiveness.’
  • Next step in science: ‘What is needed to go from this retrospective study towards implementation?’; ‘It’s not clear whether that work package is necessary or “nice to have”.’; ‘Knowledge utilisation paragraph is standard, as used by copywriters.’
  • Activities towards partners: ‘What do the applicants do to change the current practice?’; ‘Important that the company also contributes financially to the further development.’; ‘This proposal includes a good communication plan.’
  • Participation/diversity: ‘A user committee is described, but it isn’t well thought through: what is their role?’; ‘It’s also important to invite relatives of patients to participate.’; ‘They thought really well what their patient group can contribute to the study plan.’

Applicant-related aspects
  • Scientific publication applicant: ‘One project leader only has one original paper, …, focus more on other diseases.’; ‘Publication output not excellent. Conference papers and posters of local meetings, CV not so strong.’
  • Background applicant: ‘… not enough with this expertise involved in the leadership.’; ‘Very good CV, … has won many awards.’; ‘Candidate is excellent, top 10 to 20 in this field….’
  • Reputation applicant: ‘… the main applicant is a hotshot in this field.’; ‘Candidate leads cohorts as …, gets a no.’
  • Societal skills: ‘Impressed that they took my question seriously, that made my day.’; ‘They were very honest about overoptimism in the proposal.’; ‘Good group, but they seem quite aware of their own brilliance.’

HTA
  • HTA: ‘Concrete revenues are negative, however improvement in quality-adjusted life years but very shaky.’

Committee-related aspects
  • Personal experience with the applicant: ‘This researcher only wants to acquire knowledge, nothing further.’; ‘I reviewed him before and he is not very good at interviews.’
  • Personal/unasserted preference: ‘Excellent presentation, much better than the application.’ (without further elaboration); ‘This academic lab has advantages, but also disadvantages with regard to independence.’; ‘If it can be done anywhere, it is in this group.’
  • Relation with applicants’ institute/network: ‘May come up with new models, they’re linked with a group in … who can do this very well.’
  • Comparison with other applications: ‘What is the relevance compared to the other proposal? They do something similar.’; ‘Look at the proposals as a whole, portfolio, we have clinical and we have fundamental.’

3.4 Observations

Data were primarily collected through observations. Our observations of review panel meetings were non-participatory: the observer and goal of the observation were introduced at the start of the meeting, without further interactions during the meeting. To aid in the processing of observations, some meetings were audiotaped (sound only). Presentations or responses of applicants were not noted and were not part of the analysis. The observer made notes on the ongoing discussion and scored the arguments while listening. One meeting was not attended in person and only observed and scored by listening to the audiotape recording. Because this made identification of the panel members unreliable, this panel meeting was excluded from the analysis of the third research question on how arguments used differ between panel members with different perspectives.

3.5 Grant programmes and the assessment criteria

We gathered and analysed all brochures and assessment forms used by the review panels in order to answer our second research question on the correspondence of arguments used with the formal criteria. Several programmes consisted of multiple grant calls: in that case, the specific call brochure was gathered and analysed, not the overall programme brochure. Additional documentation (e.g. instructional presentations at the start of the panel meeting) was not included in the document analysis. All included documents were marked using the aforementioned observation matrix. The panel-related arguments were not used because this category reflects the personal arguments of panel members that are not part of brochures or instructions. To avoid potential differences in scoring methods, two of the authors each independently scored half of the documents, and each half was then checked and validated by the other author. Differences were discussed until a consensus was reached.
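The authors resolved scoring differences by discussion until consensus and do not report a formal agreement statistic. Purely as an illustration of how agreement between two independent scorers could be quantified, the sketch below computes Cohen's kappa over a handful of hypothetical aspect labels; the labels and the use of scikit-learn are assumptions, not part of the study.

```python
# Illustration only: quantifying agreement between two independent document scorers.
# The labels below are hypothetical; the study resolved differences by consensus.
from sklearn.metrics import cohen_kappa_score

# Aspect assigned to the same eight document fragments by two scorers.
scorer_a = ["Feasibility of the aims", "Plan of work", "Health-care problem",
            "Contribution to solution", "Feasibility of the aims",
            "Participation/diversity", "Next step in science", "Plan of work"]
scorer_b = ["Feasibility of the aims", "Plan of work", "Health-care problem",
            "Next step in science", "Feasibility of the aims",
            "Participation/diversity", "Next step in science", "Background applicant"]

kappa = cohen_kappa_score(scorer_a, scorer_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance-level agreement
```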

3.6 Panel composition

In order to answer the third research question, background information on panel members was collected. We categorised the panel members into five common types of panel members: scientific, clinical scientific, health-care professional/clinical, patient, and policy. First, a list of all panel members was composed including their scientific and professional backgrounds and affiliations. The theoretical notion that reviewers represent different types of users of research and therefore potential impact domains (academic, social, economic, and cultural) guided the categorisation ( Meijer 2012 ; Spaapen and Van Drooge 2011 ). Because clinical researchers play a dual role, advancing research as fellow academics and using research output in health-care practice, we divided the academic members into two categories of non-clinical and clinical researchers. Multiple types of professional actors participated in each review panel. These were divided into two groups for the analysis: health-care professionals (without current academic activity) and policymakers in the health-care sector. No representatives of the private sector participated in the observed review panels. From the public domain, (at-risk) patients and patient representatives were part of several review panels. Only publicly available information was used to classify the panel members. Members were assigned to one category only: categorisation took place based on the specific role and expertise for which they were appointed to the panel.

In two of the four DHF programmes, the assessment procedure included the CSQ. In these two programmes, representatives of this CSQ participated in the scientific panel to articulate the findings of the CSQ meeting during the final assessment meeting. Two grant programmes were assessed by a review panel with solely (clinical) scientific members.

3.7 Analysis

Data were processed using ATLAS.ti 8 and Microsoft Excel 2010 to produce descriptive statistics. All observed arguments were coded and given a randomised identification code for the panel member using that particular argument. The number of times an argument type was observed was used as an indicator of the relative importance of that argument in the appraisal of proposals. With this approach, a practical and reproducible method was developed for research funders to evaluate the effect of policy changes on peer review. If codes or notes were unclear, post-observation validation of codes was carried out based on the observation matrix notes. Arguments that were noted by the observer but could not be matched with an existing code were first given a ‘non-existing’ code; these were resolved by listening back to the audiotapes. Arguments that could not be assigned to a panel member were given a ‘missing panel member’ code; 4.7 per cent of all codes fell into this category.
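The authors processed their codes in ATLAS.ti 8 and Excel; purely as an illustration of the frequency-count logic described above, the following Python sketch tallies coded arguments per aspect and per (randomised) panel-member code. The file name, column names and the 'missing panel member' label are hypothetical stand-ins for the study's actual coding scheme.

```python
# Illustration of the frequency counts described above; file layout and column names are hypothetical.
# Each row of the (assumed) export holds one observed argument.
import csv
from collections import Counter

category_counts = Counter()
member_counts = Counter()

with open("coded_arguments.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # e.g. columns: meeting, member_id, category, aspect
        category_counts[(row["category"], row["aspect"])] += 1
        member_counts[row["member_id"]] += 1  # unattributable rows keep a 'missing panel member' id

# Frequency of use as a proxy for the relative importance of each aspect.
for (category, aspect), n in category_counts.most_common():
    print(f"{category:>25} | {aspect:<35} {n}")

total = sum(member_counts.values())
if total:
    missing = member_counts.get("missing panel member", 0)
    print(f"Arguments without an identified panel member: {missing / total:.1%}")
```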

After the analyses, two meetings were held to reflect on the results: one with the CSQ and the other with the programme coordinators of both organisations. The goal of these meetings was to improve our interpretation of the findings, disseminate the results derived from this project, and identify topics for further analyses or future studies.

3.8 Limitations

Our study focuses on the final phase of the peer review process of research applications in a real-life setting. Our design, a non-participant observation of peer review panels, also introduced several challenges ( Liu and Maitlis 2010 ).

First, the independent review phase or pre-application phase was not part of our study. We therefore could not assess to what extent attention to certain aspects of scientific quality or societal relevance and impact in the review phase influenced the topics discussed during the meeting.

Second, the most important challenge of overt non-participant observations is the observer effect: the danger of causing reactivity in those under study. We believe that the consequences of this effect on our conclusions were limited because panellists are used to external observers in the meetings of these two funders. The observer briefly explained the goal of the study in general terms during the introductory round of the panel, sat as unobtrusively as possible and avoided reacting to discussions. Similar to previous observations of panels, we found that the presence of an observer faded into the background during a meeting ( Roumbanis 2021a ). However, a limited observer effect can never be entirely excluded.

Third, our decision to score only the arguments raised, and not the responses of the applicant or information on the content of the proposals, has both advantages and drawbacks. With this approach, we could assure the anonymity of the grant procedures reviewed, the applicants and proposals, panels, and individual panellists. This was an important condition for the funders involved. We took the frequency with which an argument was used as a proxy for its relative importance in decision-making, which undeniably also has its caveats. Our data collection approach limits more in-depth reflection on which arguments were decisive in decision-making and on group dynamics during the interaction with the applicants, as non-verbal and non-content-related comments were not captured in this study.

Fourth, despite this being one of the largest observational studies of the peer review assessment of grant applications, covering ten panels in eight grant programmes, many variables, both within and beyond our view, might explain differences in the arguments used. Examples of 'confounding' variables are the many variations in panel composition, the differences in the objectives of the programmes, and the range of the funding programmes. Our study should therefore be seen as exploratory, and conclusions should be drawn with caution.

4.1 Overview of observational data

The grant programmes included in this study reflected a broad range of biomedical and health funding programmes, ranging from fellowship grants to translational research and applied health research. All formal documents available to the applicants and to the review panels were retrieved for both ZonMw and the DHF. In total, eighteen documents corresponding to the eight grant programmes were studied. The number of proposals assessed per programme varied from three to thirty-three, and the duration of the panel meetings varied between two hours and two consecutive days. Together, this resulted in a large spread in the number of arguments used in an individual meeting and in a grant programme as a whole. In the shortest meeting, 49 arguments were observed versus 254 in the longest, with a mean of 126 arguments per meeting and, on average, 15 arguments per proposal.

Overall, we found consistency between how criteria were operationalised in the grant programmes' brochures and in the assessment forms of the review panels. At the same time, because the number of elements included in the observation matrix is limited, there was considerable diversity in the arguments that fell within each aspect (see examples in Table 1). Some of these differences could possibly be explained by differences in the language used and in the level of detail in the observation matrix, the brochure, and the panel's instructions. This was especially the case for the applicant-related aspects, for which the observation matrix was more detailed than the text in the brochure and assessment forms.

In interpreting our findings, it is important to take into account that, even though our data were largely complete and the observation matrix matched well with the description of the criteria in the brochures and assessment forms, there was large diversity in the type and number of arguments used and in the number of proposals assessed in the grant programmes included in our study.

4.2 Wide range of arguments used by panels: scientific arguments used most

For our first research question, we explored the number and type of arguments used in the panel meetings. Figure 1 provides an overview of the arguments used. Scientific quality was discussed most. The number of times the feasibility of the aims was discussed clearly stands out in comparison with all other arguments. The match between the science and the problem studied and the plan of work were also frequently discussed aspects of scientific quality. International competitiveness of the proposal was discussed the least of all five scientific arguments.

Figure 1. The number of arguments used in panel meetings.

Attention was paid to societal relevance and impact in the panel meetings of both organisations. Yet, the language used differed somewhat between organisations. The contribution to a solution and the next step in science were the most often used societal arguments. At ZonMw, the impact of the health-care problem studied and the activities towards partners were less frequently discussed than the other three societal arguments. At the DHF, the five societal arguments were used equally often.

With the exception of the fellowship programme meeting, applicant-related arguments were not often used. The fellowship panel used arguments related to the applicant and to scientific quality about equally often. Committee-related arguments were also rarely used in the majority of the eight grant programmes observed. In three of the ten panel meetings, one or two arguments related to personal experience with the applicant or their direct network were observed. In seven of the ten meetings, statements were observed that were unsubstantiated or explicitly announced as reflecting a personal preference. Their frequency varied between one and seven statements per meeting (sixteen in total), which is low in comparison with the other arguments used (see Fig. 1 for examples).

4.3 Use of arguments varied strongly per panel meeting

The balance in the use of scientific and societal arguments varied strongly per grant programme, panel, and organisation. At ZonMw, two meetings had approximately an equal balance of societal and scientific arguments. In the other two meetings, scientific arguments were used twice to four times as often as societal arguments. At the DHF, three types of panels were observed, and different patterns in the relative use of societal and scientific arguments were observed for each of these panel types. In the two CSQ-only meetings, societal arguments were used approximately twice as often as scientific arguments. In the two meetings of the scientific panels, societal arguments were infrequently used (between zero and four times per argument category). In the combined societal and scientific panel meetings, the use of societal and scientific arguments was more balanced.

4.4 Match of arguments used by panels with the assessment criteria

In order to answer our second research question, we looked into the relation between the arguments used and the formal criteria. We observed that panels often used a broader range of arguments than described in the brochure and assessment instructions. At the same time, arguments related to aspects that were consistently included in the brochure and instructions seemed to be discussed more frequently than in programmes where those aspects were not consistently included or not included at all. Although the match of the science with the health-care problem and the background and reputation of the applicant were not always made explicit in the brochure or instructions, they were discussed in many panel meetings. Supplementary Fig. S1 provides a visualisation of how the arguments used differ between programmes in which those aspects were, or were not, consistently included in the brochure and instruction forms.

4.5 Two-thirds of the assessment was driven by scientific panel members

To answer our third question, we looked into the differences in arguments used between panel members representing a scientific, clinical scientific, professional, policy, or patient perspective. In each research programme, the majority of panellists had a scientific background (n = 35); thirty-four members had a clinical scientific background, twenty had a health professional/clinical background, eight represented a policy perspective, and fifteen represented a patient perspective. Of the total number of arguments (1,097), two-thirds were made by members with a scientific or clinical scientific perspective. Members with a scientific background engaged most actively in the discussion, with a mean of twelve arguments per member. Clinical scientists and health-care professionals participated with a mean of nine arguments, and members with a policy or patient perspective put forward the fewest arguments on average, namely seven and eight, respectively. Figure 2 provides a complete overview of the total and mean number of arguments used by the different disciplines in the various panels.
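
As an illustration of the kind of breakdown reported in Fig. 2, the following sketch computes total and mean arguments per panel-member background from a coded argument log. The data frame, column names, and values are hypothetical, and the mean here is taken over members observed to speak, which may differ from how the study computed its per-member means.

```python
# Illustrative sketch (not the authors' code): totals and means per panel-member
# background, the kind of breakdown reported in Fig. 2. All names and values below
# are invented for the example.
import pandas as pd

arguments = pd.DataFrame({
    "panel_member": ["P01", "P01", "P02", "P03", "P03", "P03", "P04"],
    "background":   ["scientific", "scientific", "clinical scientific",
                     "patient", "patient", "patient", "policy"],
})

# Arguments per individual member, then totals and means per background group.
per_member = arguments.groupby(["background", "panel_member"]).size()
summary = per_member.groupby(level="background").agg(total="sum", mean_per_member="mean")
print(summary)
```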

Figure 2. The total and mean number of arguments displayed per subgroup of panel members.

4.6 Diverse use of arguments by panellists, but background matters

In meetings of both organisations, we observed a diverse use of arguments by the panel members. Yet, the use of arguments varied depending on the background of the panel member (see  Fig. 3 ). Those with a scientific and clinical scientific perspective used primarily scientific arguments. As could be expected, health-care professionals and patients used societal arguments more often.

Figure 3. The use of arguments differentiated by panel member background.

Further breakdown of arguments across backgrounds showed clear differences in the use of scientific arguments between the different disciplines of panellists. Scientists and clinical scientists discussed the feasibility of the aims more than twice as often as their second most frequently used element of scientific quality, the match between the science and the problem studied. Patients and members with a policy or health professional background put forward fewer but more varied scientific arguments.

Patients and health-care professionals accounted for approximately half of the societal arguments used, despite being a much smaller part of the panels' overall composition. In other words, members with a scientific perspective were less likely to use societal arguments: the relevance of the health-care problem studied, activities towards partners, and arguments related to participation and diversity were not used often by this group. Patients often used arguments related to patient participation and diversity and to activities towards partners, although the frequency of the latter differed per organisation.

The majority of the applicant-related arguments were put forward by scientists, including clinical scientists. Committee-related arguments were very rare and are therefore not differentiated by panel member background, except for comments comparing a proposal with other applications; these were mainly put forward by panel members with a scientific background. HTA-related arguments were also used mostly by panel members with a scientific perspective; panel members with other perspectives rarely used this argument (see Supplementary Figs S2–S4 for a visual presentation of the differences between panel members on all aspects included in the matrix).

5.1 Explanations for arguments used in panels

Our observations show that most arguments for scientific quality were used frequently. However, with the exception of feasibility, the frequency with which arguments were used varied strongly between meetings and between the individual proposals discussed. The fact that most arguments were not used consistently is not surprising given the results of previous studies, which showed heterogeneity in grant application assessments and low consistency in comments and scores by independent reviewers (Abdoul et al. 2012; Pier et al. 2018). In an analysis of written assessments on nine observed dimensions, no dimension was used in more than 45 per cent of the reviews (Hartmann and Neidhardt 1990).

There are several possible explanations for this heterogeneity. Roumbanis (2021a) described how being responsive to the different challenges in the proposals, and to the points of attention arising from the written assessments, influenced discussion in panels. Also, when a disagreement arises, more time is spent on discussion (Roumbanis 2021a). One could infer that unambiguous, and thus undebated, aspects might remain largely undetected in our study. We believe, however, that the main points relevant to the assessment will not remain entirely unmentioned, because most panels in our study started the discussion with a short summary of the proposal, the written assessment, and the rebuttal. Lamont (2009), however, points out that opening statements serve more goals than merely decision-making: they can also increase the credibility of the panellist by showing their comprehension and balanced assessment of an application. We can therefore not entirely disentangle whether the arguments observed most often were those considered most important or decisive, or simply the topics that led to the most disagreement.

An interesting difference from Roumbanis' study was the available discussion time per proposal. In our study, most panels handled a limited number of proposals, allowing for longer discussions than the roughly 2-minute time frame that Roumbanis (2021b) described, potentially contributing to a wider range of arguments being discussed. Limited time per proposal might also limit the number of panellists contributing to the discussion of each proposal (De Bont 2014).

5.2 Reducing heterogeneity by improving the operationalisation and consistent use of assessment criteria

We found that the language used for the operationalisation of the assessment criteria in programme brochures and in the observation matrix was much more detailed than in the instruction for the panel, which was often very concise. The exercise also illustrated that many terms were used interchangeably.

This was especially true for the applicant-related aspects. Several panels discussed how talent should be assessed. This confusion is understandable considering the changing values in research and its assessment (Moher et al. 2018) and the fact that the funders' instructions were very concise. For example, it was not made explicit whether the individual or the team should be assessed. Van Arensbergen et al. (2014b) described how, in grant allocation processes, talent is generally assessed using a limited set of characteristics. More objective and quantifiable outputs often prevailed at the expense of recognising and rewarding a broad variety of skills and traits combining professional, social, and individual capital (DORA 2013).

In addition, committee-related arguments, such as personal experiences with the applicant or their institute, were rarely used in our study. Comparisons between proposals were sometimes made without further argumentation, mainly by scientific panel members. This was especially pronounced in one (fellowship) grant programme with a high number of proposals. In this programme, the panel meeting concentrated on quickly comparing the quality of the applicants and of the proposals based on the reviewers' judgement, rather than on a more in-depth discussion of the different aspects of the proposals. Because the review phase was not part of this study, the question of which aspects were used for the assessment of the proposals in this panel remains partially unanswered. However, weighing and comparing proposals on different aspects and with different inputs is a core element of scientific peer review, both in the review of papers and in the review of grants (Hirschauer 2010). The large role of scientific panel members in comparing proposals is therefore not surprising.

One could anticipate that more consistent language in the operationalisation of criteria may lead to more clarity for both applicants and panellists and to more consistency in the assessment of research proposals. The trend in our observations was that arguments were used less when the related criteria were not, or not consistently, included in the brochure and panel instructions. It remains, however, challenging to disentangle the influence of the formal definitions of criteria on the arguments used. Previous studies also encountered difficulties in studying the role of the formal instructions in peer review but concluded that this role is relatively limited (Langfeldt 2001; Reinhart 2010).

The lack of a clear operationalisation of criteria can contribute to heterogeneity in peer review, as many scholars have found that assessors differ in their conceptualisation of good science and in the importance they attach to various aspects of research quality and societal relevance (Abdoul et al. 2012; Geurts 2016; Scholten et al. 2018; Van den Brink et al. 2016). The large variation in, and the absence of a gold standard for, the interpretation of scientific quality and societal relevance affects the consistency of peer review. As a consequence, it is challenging to systematically evaluate and improve peer review in order to fund the research that contributes most to science and society. To contribute to responsible research and innovation, it is therefore important that funders invest in a more consistent and conscientious peer review process (Curry et al. 2020; DORA 2013).

A common conceptualisation of scientific quality and societal relevance and impact could improve the alignment between views on good scientific conduct, programmes’ objectives, and the peer review in practice. Such a conceptualisation could contribute to more transparency and quality in the assessment of research. By involving panel members from all relevant backgrounds, including the research community, health-care professionals, and societal actors, in a better operationalisation of criteria, more inclusive views of good science can be implemented more systematically in the peer review assessment of research proposals. The ZonMw Framework Fostering Responsible Research Practices is an example of an initiative aiming to support standardisation and integration ( Reijmerink et al. 2020 ).

Given the lack of a common definition or conceptualisation of scientific quality and societal relevance, an important design decision in our study was to use a fixed set of detailed aspects of two important criteria as a gold standard for scoring the brochures, the panel instructions, and the arguments used by the panels. This approach proved helpful in disentangling the different components of scientific quality and societal relevance. Having said that, it is important not to oversimplify the causes of heterogeneity in peer review, because these substantive arguments are not independent of non-cognitive, emotional, or social aspects (Lamont and Guetzkow 2016; Reinhart 2010).

5.3 Do more diverse panels contribute to a broader use of arguments?

Both funders participating in our study have an explicit public mission that demands sufficient attention to societal aspects in assessment processes. In reality, as observed in several panels, the main focus of peer review meetings is on scientific arguments. In addition to the possible explanations discussed earlier, the composition of the panel might play a role in explaining the arguments used in panel meetings. Our results show that health-care professionals and patients bring in more societal arguments than scientists, including those who are also clinicians. It is, however, not that simple: in the more diverse panels, panel members, regardless of their backgrounds, used more societal arguments than in the less diverse panels.

Observing ten panel meetings was sufficient to explore differences in the arguments used by panel members with different backgrounds. The pattern of (primarily) scientific arguments being raised by panels with mainly scientific members is not surprising: it is their main task to assess the scientific content of grant proposals, and it fits their competencies. One could argue, depending on how one views the relationship between science and society, that health-care professionals and patients might be better suited to assess the value of research results for potential users. Scientific panel members and clinical scientists in our study used fewer arguments reflecting on opening up and connecting science directly to others who can take it further (whether industry, health-care professionals, or other stakeholders). Patients filled this gap: these two types of arguments were the most prevalent ones they put forward. Making an active connection with society apparently requires a broader, more diverse panel before scientists direct their attention to more societal arguments. Evident from our observations is that the presence of patients and health-care professionals seemed to increase the attention that all panel members, including scientists, paid to arguments beyond the scientific ones. This conclusion is congruent with the observation that there was a more equal balance in the use of societal and scientific arguments in the scientific panels in which the CSQ participated. It illustrates that opening up peer review panels to non-scientific members creates an opportunity to focus on both the contribution and the integrative rationality (Glerup and Horst 2014) or, in other words, to allow productive interactions between scientific and non-scientific actors. This corresponds with previous research suggesting that, with regard to societal aspects, reviews from mixed panels were broader and richer (Luo et al. 2021). In panels with non-scientific experts, more emphasis was placed on the role of the proposed research process in increasing the likelihood of societal impact than on the causal importance of scientific excellence for broader impacts. This is in line with the finding that panels with more disciplinary diversity, in range and also by including generalist experts, applied more versatile styles to reach consensus and paid more attention to relevance and pragmatic value (Huutoniemi 2012).

Our observations further illustrate that patients and health-care professionals were less vocal in panels than (clinical) scientists and were in the minority. This could reflect their social role and lower perceived authority in the panel. Several guides are available to help funders stimulate the equal participation of patients in science, and these are also applicable to involvement in peer review panels. Measures include support and training to prepare patients for participation in deliberations with renowned scientists and explicitly addressing power differences (De Wit et al. 2016). Panel chairs and programme officers have to set and supervise the conditions for the functioning of both the individual panel members and the panel as a whole (Lamont 2009).

5.4 Suggestions for future studies

In future studies, it is important to further disentangle the role of the operationalisation and appraisal of assessment criteria in reducing heterogeneity in the arguments used by panels. More controlled experimental settings would be a valuable addition to the current, mainly observational, methodologies for disentangling some of the cognitive and social factors that influence the functioning and argumentation of peer review panels. Reusing the data from the panel observations and the data on the written reports could also provide a starting point for a bottom-up approach to creating a more consistent and shared conceptualisation and operationalisation of assessment criteria.

To further understand the effects of opening up review panels to non-scientific peers, it is valuable to compare the role of diversity and interdisciplinarity in solely scientific panels versus panels that also include non-scientific experts.

In future studies, differences between domains and types of research should also be addressed. We hypothesise that biomedical and health research is perhaps better suited to the inclusion of non-scientific peers in panels than other research domains. For example, it would be valuable to better understand how potentially relevant users can be identified well enough in other research fields and to what extent non-academics can contribute to assessing the possible value of research, especially early-stage or blue-sky research.

The goal of our study was to explore in practice which arguments regarding the main criteria of scientific quality and societal relevance were used by peer review panels of biomedical and health research funding programmes. We showed that there is wide diversity in the number and range of arguments used, but three main scientific aspects were discussed most frequently: is the approach feasible, does the science match the problem, and is the work plan scientifically sound? Nevertheless, these scientific aspects were accompanied by a significant amount of discussion of societal aspects, of which the contribution to a solution was the most prominent. In comparison with scientific panellists, non-scientific panellists, such as health-care professionals, policymakers, and patients, often used a wider range of arguments and more societal arguments. Even more striking was that, even though non-scientific peers were often outnumbered and less vocal in panels, scientists also used a wider range of arguments when non-scientific peers were present.

It is relevant that two health research funders collaborated in the current study to reflect on and improve peer review in research funding. There are few studies published that describe live observations of peer review panel meetings. Many studies focus on alternatives for peer review or reflect on the outcomes of the peer review process, instead of reflecting on the practice and improvement of peer review assessment of grant proposals. Privacy and confidentiality concerns of funders also contribute to the lack of information on the functioning of peer review panels. In this study, both organisations were willing to participate because of their interest in research funding policies in relation to enhancing the societal value and impact of science. The study provided them with practical suggestions, for example, on how to improve the alignment in language used in programme brochures and instructions of review panels, and contributed to valuable knowledge exchanges between organisations. We hope that this publication stimulates more research funders to evaluate their peer review approach in research funding and share their insights.

For a long time, research funders relied solely on scientists for designing and executing the peer review of research proposals, thereby delegating responsibility for the process. Although review panels have discretionary authority, it is important that funders set and supervise the process and its conditions. We argue that one of these conditions should be the diversification of peer review panels and the opening up of panels to non-scientific peers.

Supplementary material is available at Science and Public Policy online.

Details of the data and information on how to request access are available from the first author.

Joey Gijbels and Wendy Reijmerink are employed by ZonMw. Rebecca Abma-Schouten is employed by the Dutch Heart Foundation and is an external PhD candidate affiliated with the Centre for Science and Technology Studies, Leiden University.

A special thanks to the panel chairs and programme officers of ZonMw and the DHF for their willingness to participate in this project. We thank Diny Stekelenburg, an internship student at ZonMw, for her contributions to the project. Our sincerest gratitude to Prof. Paul Wouters, Sarah Coombs, and Michiel van der Vaart for proofreading and their valuable feedback. Finally, we thank the editors and anonymous reviewers of Science and Public Policy for their thorough and insightful reviews and recommendations. Their contributions are recognisable in the final version of this paper.

Abdoul   H. , Perrey   C. , Amiel   P. , et al.  ( 2012 ) ‘ Peer Review of Grant Applications: Criteria Used and Qualitative Study of Reviewer Practices ’, PLoS One , 7 : 1 – 15 .

Abma-Schouten   R. Y. ( 2017 ) ‘ Maatschappelijke Kwaliteit van Onderzoeksvoorstellen ’, Dutch Heart Foundation .

Alla   K. , Hall   W. D. , Whiteford   H. A. , et al.  ( 2017 ) ‘ How Do We Define the Policy Impact of Public Health Research? A Systematic Review ’, Health Research Policy and Systems , 15 : 84.

Benedictus   R. , Miedema   F. , and Ferguson   M. W. J. ( 2016 ) ‘ Fewer Numbers, Better Science ’, Nature , 538 : 453 – 4 .

Chalmers   I. , Bracken   M. B. , Djulbegovic   B. , et al.  ( 2014 ) ‘ How to Increase Value and Reduce Waste When Research Priorities Are Set ’, The Lancet , 383 : 156 – 65 .

Curry   S. , De Rijcke   S. , Hatch   A. , et al.  ( 2020 ) ‘ The Changing Role of Funders in Responsible Research Assessment: Progress, Obstacles and the Way Ahead ’, RoRI Working Paper No. 3, London : Research on Research Institute (RoRI) .

De Bont   A. ( 2014 ) ‘ Beoordelen Bekeken. Reflecties op het Werk van Een Programmacommissie van ZonMw ’, ZonMw .

De Rijcke   S. , Wouters   P. F. , Rushforth   A. D. , et al.  ( 2016 ) ‘ Evaluation Practices and Effects of Indicator Use—a Literature Review ’, Research Evaluation , 25 : 161 – 9 .

De Wit   A. M. , Bloemkolk   D. , Teunissen   T. , et al.  ( 2016 ) ‘ Voorwaarden voor Succesvolle Betrokkenheid van Patiënten/cliënten bij Medisch Wetenschappelijk Onderzoek ’, Tijdschrift voor Sociale Gezondheidszorg , 94 : 91 – 100 .

Del Carmen Calatrava Moreno   M. , Warta   K. , Arnold   E. , et al.  ( 2019 ) Science Europe Study on Research Assessment Practices . Technopolis Group Austria .

Demicheli   V. and Di Pietrantonj   C. ( 2007 ) ‘ Peer Review for Improving the Quality of Grant Applications ’, Cochrane Database of Systematic Reviews , 2 : MR000003.

Den Oudendammer   W. M. , Noordhoek   J. , Abma-Schouten   R. Y. , et al.  ( 2019 ) ‘ Patient Participation in Research Funding: An Overview of When, Why and How Amongst Dutch Health Funds ’, Research Involvement and Engagement , 5 .

Diabetesfonds ( n.d. ) Maatschappelijke Adviesraad < https://www.diabetesfonds.nl/over-ons/maatschappelijke-adviesraad > accessed 18 Sept 2022 .

Dijstelbloem   H. , Huisman   F. , Miedema   F. , et al.  ( 2013 ) ‘ Science in Transition Position Paper: Waarom de Wetenschap Niet Werkt Zoals het Moet, En Wat Daar aan te Doen Is ’, Utrecht : Science in Transition .

Forsyth   D. R. ( 1999 ) Group Dynamics , 3rd edn. Belmont : Wadsworth Publishing Company .

Geurts   J. ( 2016 ) ‘ Wat Goed Is, Herken Je Meteen ’, NRC Handelsblad < https://www.nrc.nl/nieuws/2016/10/28/wat-goed-is-herken-je-meteen-4975248-a1529050 > accessed 6 Mar 2022 .

Glerup   C. and Horst   M. ( 2014 ) ‘ Mapping “Social Responsibility” in Science ’, Journal of Responsible Innovation , 1 : 31 – 50 .

Hartmann   I. and Neidhardt   F. ( 1990 ) ‘ Peer Review at the Deutsche Forschungsgemeinschaft ’, Scientometrics , 19 : 419 – 25 .

Hirschauer   S. ( 2010 ) ‘ Editorial Judgments: A Praxeology of “Voting” in Peer Review ’, Social Studies of Science , 40 : 71 – 103 .

Hughes   A. and Kitson   M. ( 2012 ) ‘ Pathways to Impact and the Strategic Role of Universities: New Evidence on the Breadth and Depth of University Knowledge Exchange in the UK and the Factors Constraining Its Development ’, Cambridge Journal of Economics , 36 : 723 – 50 .

Huutoniemi   K. ( 2012 ) ‘ Communicating and Compromising on Disciplinary Expertise in the Peer Review of Research Proposals ’, Social Studies of Science , 42 : 897 – 921 .

Jasanoff   S. ( 2011 ) ‘ Constitutional Moments in Governing Science and Technology ’, Science and Engineering Ethics , 17 : 621 – 38 .

Kolarz   P. , Arnold   E. , Farla   K. , et al.  ( 2016 ) Evaluation of the ESRC Transformative Research Scheme . Brighton : Technopolis Group .

Lamont   M. ( 2009 ) How Professors Think : Inside the Curious World of Academic Judgment . Cambridge : Harvard University Press .

Lamont   M. Guetzkow   J. ( 2016 ) ‘How Quality Is Recognized by Peer Review Panels: The Case of the Humanities’, in M.   Ochsner , S. E.   Hug , and H.-D.   Daniel (eds) Research Assessment in the Humanities , pp. 31 – 41 . Cham : Springer International Publishing .

Lamont M. and Huutoniemi K. ( 2011 ) 'Comparing Customary Rules of Fairness: Evaluative Practices in Various Types of Peer Review Panels', in C. Camic, N. Gross, and M. Lamont (eds) Social Knowledge in the Making , pp. 209–32. Chicago : The University of Chicago Press .

Langfeldt   L. ( 2001 ) ‘ The Decision-making Constraints and Processes of Grant Peer Review, and Their Effects on the Review Outcome ’, Social Studies of Science , 31 : 820 – 41 .

——— ( 2006 ) ‘ The Policy Challenges of Peer Review: Managing Bias, Conflict of Interests and Interdisciplinary Assessments ’, Research Evaluation , 15 : 31 – 41 .

Lee   C. J. , Sugimoto   C. R. , Zhang   G. , et al.  ( 2013 ) ‘ Bias in Peer Review ’, Journal of the American Society for Information Science and Technology , 64 : 2 – 17 .

Liu   F. Maitlis   S. ( 2010 ) ‘Nonparticipant Observation’, in A. J.   Mills , G.   Durepos , and E.   Wiebe (eds) Encyclopedia of Case Study Research , pp. 609 – 11 . Los Angeles : SAGE .

Luo   J. , Ma   L. , and Shankar   K. ( 2021 ) ‘ Does the Inclusion of Non-academic Reviewers Make Any Difference for Grant Impact Panels? ’, Science & Public Policy , 48 : 763 – 75 .

Luukkonen   T. ( 2012 ) ‘ Conservatism and Risk-taking in Peer Review: Emerging ERC Practices ’, Research Evaluation , 21 : 48 – 60 .

Macleod   M. R. , Michie   S. , Roberts   I. , et al.  ( 2014 ) ‘ Biomedical Research: Increasing Value, Reducing Waste ’, The Lancet , 383 : 101 – 4 .

Meijer   I. M. ( 2012 ) ‘ Societal Returns of Scientific Research. How Can We Measure It? ’, Leiden : Center for Science and Technology Studies, Leiden University .

Merton   R. K. ( 1968 ) Social Theory and Social Structure , Enlarged edn. [Nachdr.] . New York : The Free Press .

Moher   D. , Naudet   F. , Cristea   I. A. , et al.  ( 2018 ) ‘ Assessing Scientists for Hiring, Promotion, And Tenure ’, PLoS Biology , 16 : e2004089.

Olbrecht   M. and Bornmann   L. ( 2010 ) ‘ Panel Peer Review of Grant Applications: What Do We Know from Research in Social Psychology on Judgment and Decision-making in Groups? ’, Research Evaluation , 19 : 293 – 304 .

Patiëntenfederatie Nederland ( n.d. ) Ervaringsdeskundigen Referentenpanel < https://www.patientenfederatie.nl/zet-je-ervaring-in/lid-worden-van-ons-referentenpanel > accessed 18 Sept 2022.

Pier   E. L. , Brauer   M. , Filut   A. , et al.  ( 2018 ) ‘ Low Agreement among Reviewers Evaluating the Same NIH Grant Applications ’, Proceedings of the National Academy of Sciences , 115 : 2952 – 7 .

Prinses Beatrix Spierfonds ( n.d. ) Gebruikerscommissie < https://www.spierfonds.nl/wie-wij-zijn/gebruikerscommissie > accessed 18 Sep 2022 .

Rathenau Instituut ( 2020 ) Private Non-profit Financiering van Onderzoek in Nederland < https://www.rathenau.nl/nl/wetenschap-cijfers/geld/wat-geeft-nederland-uit-aan-rd/private-non-profit-financiering-van#:∼:text=R%26D%20in%20Nederland%20wordt%20gefinancierd,aan%20wetenschappelijk%20onderzoek%20in%20Nederland > accessed 6 Mar 2022 .

Reneman   R. S. , Breimer   M. L. , Simoons   J. , et al.  ( 2010 ) ‘ De toekomst van het cardiovasculaire onderzoek in Nederland. Sturing op synergie en impact ’, Den Haag : Nederlandse Hartstichting .

Reed   M. S. , Ferré   M. , Marin-Ortega   J. , et al.  ( 2021 ) ‘ Evaluating Impact from Research: A Methodological Framework ’, Research Policy , 50 : 104147.

Reijmerink   W. and Oortwijn   W. ( 2017 ) ‘ Bevorderen van Verantwoorde Onderzoekspraktijken Door ZonMw ’, Beleidsonderzoek Online. accessed 6 Mar 2022.

Reijmerink   W. , Vianen   G. , Bink   M. , et al.  ( 2020 ) ‘ Ensuring Value in Health Research by Funders’ Implementation of EQUATOR Reporting Guidelines: The Case of ZonMw ’, Berlin : REWARD|EQUATOR .

Reinhart   M. ( 2010 ) ‘ Peer Review Practices: A Content Analysis of External Reviews in Science Funding ’, Research Evaluation , 19 : 317 – 31 .

Reinhart   M. and Schendzielorz   C. ( 2021 ) Trends in Peer Review . SocArXiv . < https://osf.io/preprints/socarxiv/nzsp5 > accessed 29 Aug 2022.

Roumbanis   L. ( 2017 ) ‘ Academic Judgments under Uncertainty: A Study of Collective Anchoring Effects in Swedish Research Council Panel Groups ’, Social Studies of Science , 47 : 95 – 116 .

——— ( 2021a ) ‘ Disagreement and Agonistic Chance in Peer Review ’, Science, Technology & Human Values , 47 : 1302 – 33 .

——— ( 2021b ) ‘ The Oracles of Science: On Grant Peer Review and Competitive Funding ’, Social Science Information , 60 : 356 – 62 .

VSNU, NFU, KNAW, NWO and ZonMw ( 2019 ) ‘ Ruimte voor ieders talent (Position Paper) ’, Den Haag . < https://www.universiteitenvannederland.nl/recognitionandrewards/wp-content/uploads/2019/11/Position-paper-Ruimte-voor-ieders-talent.pdf >.

DORA ( 2013 ) San Francisco Declaration on Research Assessment . < https://sfdora.org > accessed 2 Jan 2022 .

Sarewitz   D. and Pielke   R. A.  Jr. ( 2007 ) ‘ The Neglected Heart of Science Policy: Reconciling Supply of and Demand for Science ’, Environmental Science & Policy , 10 : 5 – 16 .

Scholten   W. , Van Drooge   L. , and Diederen   P. ( 2018 ) Excellent Is Niet Gewoon. Dertig Jaar Focus op Excellentie in het Nederlandse Wetenschapsbeleid . The Hague : Rathenau Instituut .

Shapin   S. ( 2008 ) The Scientific Life : A Moral History of a Late Modern Vocation . Chicago : University of Chicago press .

Spaapen   J. and Van Drooge   L. ( 2011 ) ‘ Introducing “Productive Interactions” in Social Impact Assessment ’, Research Evaluation , 20 : 211 – 8 .

Travis   G. D. L. and Collins   H. M. ( 1991 ) ‘ New Light on Old Boys: Cognitive and Institutional Particularism in the Peer Review System ’, Science, Technology & Human Values , 16 : 322 – 41 .

Van Arensbergen   P. and Van den Besselaar   P. ( 2012 ) ‘ The Selection of Scientific Talent in the Allocation of Research Grants ’, Higher Education Policy , 25 : 381 – 405 .

Van Arensbergen   P. , Van der Weijden   I. , and Van den Besselaar   P. ( 2014a ) ‘ The Selection of Talent as a Group Process: A Literature Review on the Social Dynamics of Decision Making in Grant Panels ’, Research Evaluation , 23 : 298 – 311 .

—— ( 2014b ) ‘ Different Views on Scholarly Talent: What Are the Talents We Are Looking for in Science? ’, Research Evaluation , 23 : 273 – 84 .

Van den Brink , G. , Scholten , W. , and Jansen , T. , eds ( 2016 ) Goed Werk voor Academici . Culemborg : Stichting Beroepseer .

Weingart   P. ( 1999 ) ‘ Scientific Expertise and Political Accountability: Paradoxes of Science in Politics ’, Science & Public Policy , 26 : 151 – 61 .

Wessely   S. ( 1998 ) ‘ Peer Review of Grant Applications: What Do We Know? ’, The Lancet , 352 : 301 – 5 .


Practical guidance on undertaking a service evaluation

Moule, Pam; Armoogum, Julie; Dodd, Emily; Donskoy, Anne Laure; Douglass, Emma; Taylor, Julie; Turton, Pat

This article describes the basic principles of evaluation, focusing on the evaluation of healthcare services. It emphasises the importance of evaluation in the current healthcare environment and the requirement for nurses to understand the essential principles of evaluation. Evaluation is defined in contrast to audit and research, and the main theoretical approaches to evaluation are outlined, providing insights into the different types of evaluation that may be undertaken. The essential features of preparing for an evaluation are considered, and guidance is provided on working ethically in the NHS. The importance of involving patients and the public in evaluation activity is emphasised, and essential guidance and principles of best practice are offered. The authors discuss the main challenges of undertaking evaluations and offer recommendations to address these, drawing on their experience as evaluators.

Journal Article Type: Article
Acceptance Date: Dec 17, 2015
Publication Date: Jul 6, 2016
Deposit Date: Jun 23, 2016
Publicly Available Date: Jul 6, 2017
Journal: Nursing Standard (Royal College of Nursing (Great Britain): 1987)
Print ISSN: 0029-6570
Electronic ISSN: 2047-9018
Publisher: RCN Publishing
Peer Reviewed: Peer Reviewed
Volume: 30
Issue: 45
Pages: 46-51
DOI:
Keywords: evaluation, evaluation methods, healthcare evaluation, service evaluation, patient and public involvement
Public URL:
Publisher URL:
Additional Information: Commissioned publication
Contract Date: Jun 23, 2016



Writing Your Proposal Evaluation

EVALUATION (PROVING THAT CHANGE OR IMPROVEMENT OCCURRED AND THAT YOUR PROJECT MET ITS GOALS)

Evaluations measure change or progress between conditions before you carried out your project and conditions after you carried out your project. Decide how much change is necessary to make your project a success. You will need to develop indicators: benchmarks or standards by which to measure the success or failure of each objective in your proposal.
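
As a purely illustrative sketch of that idea (not part of the original guidance), the snippet below compares invented before-and-after values for two made-up objectives against a benchmark that defines how much change counts as success.

```python
# Minimal sketch: for each objective, compare the condition before and after the
# project against a benchmark (indicator) that defines success.
# The objectives, numbers, and thresholds are invented for illustration.
objectives = [
    # (objective, baseline value, post-project value, benchmark: minimum improvement)
    ("Attendance at weekly workshops", 40, 65, 20),
    ("Participants reporting confidence", 55, 60, 15),
]

for name, before, after, required_gain in objectives:
    gain = after - before
    status = "met" if gain >= required_gain else "not met"
    print(f"{name}: {before} -> {after} (gain {gain}, benchmark {required_gain}): {status}")
```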

In the evaluation, you reaffirm the importance of your objectives and their connection to the values of the grantor.

Ask yourself these questions when you conduct an evaluation:

  • Am I answering the questions that are important to my stakeholders?
  • Am I choosing the right design (procedures and methods) for my evaluation?
  • Should I evaluate during the project, or after?
  • Did I get the information I need to complete an evaluation?
  • Are my results clear and understandable?

To plan your evaluation:

  • Identify what you are going to evaluate.   Progress or Impact?
  • Decide the methods you will use for evaluation.  Quantitative, Qualitative, or Mixed?
  • Summarize and report your findings.

Pick a design that will best answer the questions your grantor will want answered:

Formative: tests the project while it is still going on and can be changed in mid-course. The cook tastes the soup while it is cooking.

Summative: measures the effects of the project after it is finished. The dinner guests give their opinion to the cook after eating the soup.

Show an example of your evaluation instrument (questionnaire, experiment, face-to-face or telephone interviews, etc.) in your report.

Include at least one example of typical evaluation data (how the results of your tests for effectiveness will look).

Include a budget for your evaluation (postage, phone, fax, travel, paper, special computer programs, etc.).


17 Research Proposal Examples

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.

A research proposal systematically and transparently outlines a proposed research project.

The purpose of a research proposal is to demonstrate a project’s viability and the researcher’s preparedness to conduct an academic study. It serves as a roadmap for the researcher.

The process holds both external value (for accountability purposes, and often as a requirement for a grant application) and intrinsic value (for helping the researcher to clarify the mechanics, purpose, and potential significance of the study).

Key sections of a research proposal include: the title, abstract, introduction, literature review, research design and methods, timeline, budget, outcomes and implications, references, and appendix. Each is briefly explained below.

Research Proposal Sample Structure

Title: The title should present a concise and descriptive statement that clearly conveys the core idea of the research project. Make it as specific as possible. The reader should immediately be able to grasp the core idea of the intended research project. Often, the title is left too vague and does not give a clear understanding of what exactly the study looks at.

Abstract: Abstracts are usually around 250-300 words and provide an overview of what is to follow – including the research problem, objectives, methods, expected outcomes, and significance of the study. Use it as a roadmap and ensure that, if the abstract is the only thing someone reads, they’ll get a good fly-by of what will be discussed in the piece.

Introduction: Introductions are all about contextualization. They often set out the background information with a statement of the problem. At the end of the introduction, the reader should understand what the rationale for the study truly is. I like to see the research questions or hypotheses included in the introduction, and I like to get a good understanding of what the significance of the research will be. It’s often easiest to write the introduction last.

Literature Review: The literature review dives deep into the existing literature on the topic, demonstrating your thorough understanding of that literature, including its themes, strengths, weaknesses, and gaps. It serves both to demonstrate your knowledge of the field and to show how the proposed study will fit alongside the literature on the topic. A good literature review concludes by clearly demonstrating how your research will contribute something new and innovative to the conversation in the literature.

Research Design and Methods: This section needs to clearly demonstrate how the data will be gathered and analyzed in a systematic and academically sound manner. Here, you need to demonstrate that the conclusions of your research will be both valid and reliable. Common points discussed in the research design and methods section include the research paradigm, methodologies, intended population or sample to be studied, data collection techniques, and data analysis procedures. Toward the end of this section, you are encouraged to address ethical considerations and limitations of the research process, and also to explain why you chose your research design and how you are mitigating the identified risks and limitations.

Timeline: Provide an outline of the anticipated timeline for the study. Break it down into its various stages (including data collection, data analysis, and report writing). The goal of this section is firstly to establish a reasonable breakdown of steps for you to follow and secondly to demonstrate to the assessors that your project is practicable and feasible.

Budget: Estimate the costs associated with the research project and include evidence for your estimations. Typical costs include staffing costs, equipment, travel, and data collection tools. When applying for a scholarship, the budget should demonstrate that you are being responsible with your expenditure and that your funding application is reasonable.

Expected Outcomes and Implications: A discussion of the anticipated findings or results of the research, as well as the potential contributions to the existing knowledge, theory, or practice in the field. This section should also address the potential impact of the research on relevant stakeholders and any broader implications for policy or practice.

References: A complete list of all the sources cited in the research proposal, formatted according to the required citation style. This demonstrates the researcher’s familiarity with the relevant literature and ensures proper attribution of ideas and information.

Appendices (if applicable): Any additional materials, such as questionnaires, interview guides, or consent forms, that provide further information or support for the research proposal. These materials should be included as appendices at the end of the document.

Research Proposal Examples

Research proposals often extend anywhere between 2,000 and 15,000 words in length. The following snippets are samples designed to briefly demonstrate what might be discussed in each section.

1. Education Studies Research Proposals

See some real sample pieces:

  • Assessment of the perceptions of teachers towards a new grading system
  • Does ICT use in secondary classrooms help or hinder student learning?
  • Digital technologies in focus project
  • Urban Middle School Teachers’ Experiences of the Implementation of Restorative Justice Practices
  • Experiences of students of color in service learning

Consider this hypothetical education research proposal:

The Impact of Game-Based Learning on Student Engagement and Academic Performance in Middle School Mathematics

Abstract: The proposed study will explore multiplayer game-based learning techniques in middle school mathematics curricula and their effects on student engagement. The study aims to contribute to the current literature on game-based learning by examining the effects of multiplayer gaming in learning.

Introduction: Digital game-based learning has long been shunned within mathematics education for fear that it may distract students or lower the academic integrity of the classroom. However, there is emerging evidence that digital games in math have benefits not only for engagement but also for academic skill development. Contributing to this discourse, this study seeks to explore the potential benefits of multiplayer digital game-based learning by examining its impact on middle school students’ engagement and academic performance in a mathematics class.

Literature Review: The literature review has identified gaps in the current knowledge, namely, while game-based learning has been extensively explored, the role of multiplayer games in supporting learning has not been studied.

Research Design and Methods: This study will employ a mixed-methods research design based upon action research in the classroom. A quasi-experimental pre-test/post-test control group design will first be used to compare the academic performance and engagement of middle school students exposed to game-based learning techniques with those in a control group receiving instruction without the aid of technology. Students will also be observed and interviewed in regard to the effect of communication and collaboration during gameplay on their learning.

Timeline: The study will take place across the second term of the school year with a pre-test taking place on the first day of the term and the post-test taking place on Wednesday in Week 10.

Budget: The key budgetary requirements will be the technologies required, including the subscription cost for the identified games and computers.

Expected Outcomes and Implications: It is expected that the findings will contribute to the current literature on game-based learning and inform educational practices, providing educators and policymakers with insights into how to better support student achievement in mathematics.

2. Psychology Research Proposals

See some real examples:

  • A situational analysis of shared leadership in a self-managing team
  • The effect of musical preference on running performance
  • Relationship between self-esteem and disordered eating amongst adolescent females

Consider this hypothetical psychology research proposal:

The Effects of Mindfulness-Based Interventions on Stress Reduction in College Students

Abstract: This research proposal examines the impact of mindfulness-based interventions on stress reduction among college students, using a pre-test/post-test experimental design with both quantitative and qualitative data collection methods .

Introduction: College students face heightened stress levels during exam weeks. This can affect both mental health and test performance. This study explores the potential benefits of mindfulness-based interventions such as meditation as a way to mediate stress levels in the weeks leading up to exam time.

Literature Review: Existing research on mindfulness-based meditation has shown the ability for mindfulness to increase metacognition, decrease anxiety levels, and decrease stress. Existing literature has looked at workplace, high school and general college-level applications. This study will contribute to the corpus of literature by exploring the effects of mindfulness directly in the context of exam weeks.

Research Design and Methods: Participants (n = 234) will be randomly assigned to either an experimental group, receiving 5 days per week of 10-minute mindfulness-based interventions, or a control group, receiving no intervention. Data will be collected through self-report questionnaires measuring stress levels, semi-structured interviews exploring participants’ experiences, and students’ test scores (a brief, illustrative analysis sketch follows this example).

Timeline: The study will begin three weeks before the students’ exam week and conclude after each student’s final exam. Data collection will occur at the beginning (pre-test of self-reported stress levels) and end (post-test) of the three weeks.

Expected Outcomes and Implications: The study aims to provide evidence supporting the effectiveness of mindfulness-based interventions in reducing stress among college students in the lead up to exams, with potential implications for mental health support and stress management programs on college campuses.
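
As a purely illustrative aside (not part of the hypothetical proposal above), one simple way to analyse the quantitative arm of such a pre-test/post-test control-group design is to compare the change in self-reported stress between the two groups. The scores, group sizes, and choice of test below are assumptions made only for this sketch.

```python
# Illustrative only: compare stress-score changes (post minus pre) between a
# mindfulness group and a control group. The numbers are invented; a real study
# would load its questionnaire data and pre-register the analysis plan.
from scipy.stats import ttest_ind

# Change in self-reported stress (post minus pre); negative = stress went down.
mindfulness_group = [-8, -5, -11, -3, -7, -6, -9, -4]
control_group     = [-2,  1,  -3,  0, -1,  2, -4, -2]

t_stat, p_value = ttest_ind(mindfulness_group, control_group)
print(f"Mean change (mindfulness): {sum(mindfulness_group)/len(mindfulness_group):.1f}")
print(f"Mean change (control):     {sum(control_group)/len(control_group):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```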

3. Sociology Research Proposals

  • Understanding emerging social movements: A case study of ‘Jersey in Transition’
  • The interaction of health, education and employment in Western China
  • Can we preserve lower-income affordable neighbourhoods in the face of rising costs?

Consider this hypothetical sociology research proposal:

The Impact of Social Media Usage on Interpersonal Relationships among Young Adults

Abstract: This research proposal investigates the effects of social media usage on interpersonal relationships among young adults, using a longitudinal mixed-methods approach with ongoing semi-structured interviews to collect qualitative data.

Introduction: Social media platforms have become a key medium for the development of interpersonal relationships, particularly for young adults. This study examines the potential positive and negative effects of social media usage on young adults’ relationships and development over time.

Literature Review: A preliminary review of relevant literature has demonstrated that social media usage is central to the development of a personal identity and of relationships with others who share similar subcultural interests. However, it has also been accompanied by data on mental health decline and deteriorating off-screen relationships. The literature to date lacks important longitudinal data on these topics.

Research Design and Methods: Participants (n = 454) will be young adults aged 18-24. Ongoing self-report surveys will assess participants’ social media usage, relationship satisfaction, and communication patterns. A subset of participants will be selected for longitudinal in-depth interviews starting at age 18 and continuing for 5 years.

Timeline: The study will be conducted over a period of five years, including recruitment, data collection, analysis, and report writing.

Expected Outcomes and Implications: This study aims to provide insights into the complex relationship between social media usage and interpersonal relationships among young adults, potentially informing social policies and mental health support related to social media use.

4. Nursing Research Proposals

  • Does Orthopaedic Pre-assessment clinic prepare the patient for admission to hospital?
  • Nurses’ perceptions and experiences of providing psychological care to burns patients
  • Registered psychiatric nurse’s practice with mentally ill parents and their children

Consider this hypothetical nursing research proposal:

The Influence of Nurse-Patient Communication on Patient Satisfaction and Health Outcomes following Emergency Cesareans

Abstract: This research will examine the impact of effective nurse-patient communication on patient satisfaction and health outcomes for women following c-sections, utilizing a mixed-methods approach with patient surveys and semi-structured interviews.

Introduction: It has long been known that effective communication between nurses and patients is crucial for quality care. However, additional complications arise following emergency c-sections due to the interaction between new mothers’ changing roles and their recovery from surgery.

Literature Review: A review of the literature demonstrates the importance of nurse-patient communication, its impact on patient satisfaction, and potential links to health outcomes. However, communication between nurses and new mothers is less examined, and the specific experiences of those who have given birth via emergency c-section are to date unexamined.

Research Design and Methods: Participants will be patients in a hospital setting who have recently had an emergency c-section. A self-report survey will assess their satisfaction with nurse-patient communication and perceived health outcomes. A subset of participants will be selected for in-depth interviews to explore their experiences and perceptions of the communication with their nurses.

Timeline: The study will be conducted over a period of six months, including rolling recruitment, data collection, analysis, and report writing within the hospital.

Expected Outcomes and Implications: This study aims to provide evidence for the significance of nurse-patient communication in supporting new mothers who have had an emergency c-section. Recommendations will be presented for supporting nurses and midwives in improving outcomes for new mothers who had complications during birth.

5. Social Work Research Proposals

  • Experiences of negotiating employment and caring responsibilities of fathers post-divorce
  • Exploring kinship care in the north region of British Columbia

Consider this hypothetical social work research proposal:

The Role of a Family-Centered Intervention in Preventing Homelessness Among At-Risk Youth in a Working-Class Town in Northern England

Abstract: This research proposal investigates the effectiveness of a family-centered intervention provided by a local council area in preventing homelessness among at-risk youth. This case study will use a mixed-methods approach with program evaluation data and semi-structured interviews to collect quantitative and qualitative data.

Introduction: Homelessness among youth remains a significant social issue. This study aims to assess the effectiveness of family-centered interventions in addressing this problem and identify factors that contribute to successful prevention strategies.

Literature Review: A review of the literature has demonstrated several key factors contributing to youth homelessness including lack of parental support, lack of social support, and low levels of family involvement. It also demonstrates the important role of family-centered interventions in addressing this issue. Drawing on current evidence, this study explores the effectiveness of one such intervention in preventing homelessness among at-risk youth in a working-class town in Northern England.

Research Design and Methods: The study will evaluate a new family-centered intervention program targeting at-risk youth and their families. Quantitative data on program outcomes, including housing stability and family functioning, will be collected through program records and evaluation reports. Semi-structured interviews with program staff, participants, and relevant stakeholders will provide qualitative insights into the factors contributing to program success or failure.

Timeline: The study will be conducted over a period of six months, including recruitment, data collection, analysis, and report writing.

Budget: Expenses include access to program evaluation data, interview materials, data analysis software, and any related travel costs for in-person interviews.

Expected Outcomes and Implications: This study aims to provide evidence for the effectiveness of family-centered interventions in preventing youth homelessness, potentially informing the expansion of or necessary changes to social work practices in Northern England.

Research Proposal Template

Get your Detailed Template for Writing your Research Proposal Here (With AI Prompts!)

This is a template for a 2500-word research proposal. You may find it difficult to squeeze everything into this word count, but it’s a common word count for Honors and MA-level dissertations.

  • Title
    – Ensure the single-sentence title clearly states the study’s focus
  • Abstract (200 words)
    – Briefly describe the research topic
    – Summarize the research problem or question
    – Outline the research design and methods
    – Mention the expected outcomes and implications
  • Introduction (300 words)
    – Introduce the research topic and its significance
    – Clearly state the research problem or question
    – Explain the purpose and objectives of the study
    – Provide a brief overview of the study
  • Literature Review (800 words)
    – Gather the existing literature into themes and key ideas
    – Describe the themes and key ideas in the literature
    – Identify gaps or inconsistencies in the literature
    – Explain how the current study will contribute to the literature
  • Research Design and Methods (800 words)
    – Describe the research paradigm (generally positivism or interpretivism)
    – Describe the research design (e.g., qualitative, quantitative, or mixed-methods)
    – Explain the data collection methods (e.g., surveys, interviews, observations)
    – Detail the sampling strategy and target population
    – Outline the data analysis techniques (e.g., statistical analysis, thematic analysis)
    – Outline your validity and reliability procedures
    – Outline your intended ethics procedures
    – Explain the study design’s limitations and justify your decisions
  • Timeline (single-page table)
    – Provide an overview of the research timeline
    – Break down the study into stages with specific timeframes (e.g., data collection, analysis, report writing)
    – Include any relevant deadlines or milestones
  • Budget (200 words)
    – Estimate the costs associated with the research project
    – Detail specific expenses (e.g., materials, participant incentives, travel costs)
    – Include any necessary justifications for the budget items
    – Mention any funding sources or grant applications
  • Expected Outcomes and Implications (200 words)
    – Summarize the anticipated findings or results of the study
    – Discuss the potential implications of the findings for theory, practice, or policy
    – Describe any possible limitations of the study

Your research proposal is where you really get going with your study. I’d strongly recommend working closely with your teacher in developing a research proposal that’s consistent with the requirements and culture of your institution, as in my experience it varies considerably. The above template is from my own courses that walk students through research proposals in a British School of Education.

Chris




Research, service evaluation or audit?

Is my project research?

Not all projects are classed as research; some instead fall under “service evaluation” or “clinical audit”.

A well-used distinction is the following:

Research is designed and conducted to generate new knowledge.

Service evaluations are designed to answer the question “What standard does this service achieve?”.

Audits are designed to find out whether the quality of a service meets a defined standard.

The HRA has devised a decision tool to help you assess whether your project is considered research: Is my study research?

It can sometimes be difficult to decide whether a project is research, service evaluation or audit and you can find further guidance to help you decide in the Defining Research Table.

NHS Research Ethics Committee and HRA/HCRW Approval are only required for research projects; however, local Trusts/organisations will need to agree to participate in any audit or service evaluation and may have separate processes for this. Please note that even if your project is not classed as research, you still need to carefully consider any ethical issues that could arise.

If you remain unsure and you are running your project within primary or community care in Norfolk and Suffolk, please submit a draft proposal to the research office at [email protected]. We will review your proposal and can provide you with advice.

Service evaluation

The Research Office runs training on how to design and conduct service evaluations. We can also advise on the design of your local evaluation. Please contact [email protected] for more information.


How to Write a Research Proposal (with Examples & Templates)



Before conducting a study, researchers should create a research proposal that outlines their plans and methodology and submit it to the relevant evaluating organization or person. Creating a research proposal is an important step to ensure that researchers are on track and are moving forward as intended. A research proposal can be defined as a detailed plan or blueprint for the proposed research that you intend to undertake. It provides readers with a snapshot of your project by describing what you will investigate, why it is needed, and how you will conduct the research.

Your research proposal should aim to explain to readers why your research is relevant and original, show that you understand the context and current state of the field, demonstrate that you have the appropriate resources to conduct the research, and establish that the research is feasible given the usual constraints.

This article will describe in detail the purpose and typical structure of a research proposal, along with examples and templates to help you ace this step in your research journey.

What is a Research Proposal?

A research proposal¹,² can be defined as a formal report that describes your proposed research, its objectives, methodology, implications, and other important details. Research proposals are the framework of your research and are used to obtain approvals or grants to conduct the study from various committees or organizations. Consequently, research proposals should convince readers of your study’s credibility, accuracy, achievability, practicality, and reproducibility.

With research proposals , researchers usually aim to persuade the readers, funding agencies, educational institutions, and supervisors to approve the proposal. To achieve this, the report should be well structured with the objectives written in clear, understandable language devoid of jargon. A well-organized research proposal conveys to the readers or evaluators that the writer has thought out the research plan meticulously and has the resources to ensure timely completion.  

Purpose of Research Proposals  

A research proposal is a sales pitch and therefore should be detailed enough to convince your readers, who could be supervisors, ethics committees, universities, etc., that what you’re proposing has merit and is feasible . Research proposals can help students discuss their dissertation with their faculty or fulfill course requirements and also help researchers obtain funding. A well-structured proposal instills confidence among readers about your ability to conduct and complete the study as proposed.  

Research proposals can be written for several reasons:³  

  • To describe the importance of research on the specific topic
  • To address any potential challenges you may encounter
  • To showcase knowledge in the field and your ability to conduct a study
  • To apply for a role at a research institute
  • To convince a research supervisor or university that your research can satisfy the requirements of a degree program
  • To highlight the importance of your research to organizations that may sponsor your project
  • To identify implications of your project and how it can benefit the audience

What Goes in a Research Proposal?    

Research proposals should aim to answer the three basic questions—what, why, and how.  

The What question should be answered by describing the specific subject being researched. It should typically include the objectives, the cohort details, and the location or setting.  

The Why question should be answered by describing the existing scenario of the subject, listing unanswered questions, identifying gaps in the existing research, and describing how your study can address these gaps, along with the implications and significance.  

The How question should be answered by describing the proposed research methodology, data analysis tools expected to be used, and other details to describe your proposed methodology.   

Research Proposal Example  

Here is a research proposal sample template (with examples) from the University of Rochester Medical Center. 4 The sections in all research proposals are essentially the same although different terminology and other specific sections may be used depending on the subject.  

Research Proposal Template

Structure of a Research Proposal  

If you want to know how to make a research proposal impactful, include the following components:¹  

1. Introduction  

This section provides a background of the study, including the research topic, what is already known about it and the gaps, and the significance of the proposed research.  

2. Literature review  

This section contains descriptions of all the previous relevant studies pertaining to the research topic. Every study cited should be described in a few sentences, starting with the general studies to the more specific ones. This section builds on the understanding gained by readers in the Introduction section and supports it by citing relevant prior literature, indicating to readers that you have thoroughly researched your subject.  

3. Objectives  

Once the background and gaps in the research topic have been established, authors must now state the aims of the research clearly. Hypotheses should be mentioned here. This section further helps readers understand what your study’s specific goals are.  

4. Research design and methodology  

Here, authors should clearly describe the methods they intend to use to achieve their proposed objectives. Important components of this section include the population and sample size, data collection and analysis methods and duration, statistical analysis software, measures to avoid bias (randomization, blinding), etc.  

5. Ethical considerations  

This refers to the protection of participants’ rights, such as the right to privacy, right to confidentiality, etc. Researchers need to obtain informed consent and institutional review approval by the required authorities and mention this clearly for transparency.  

6. Budget/funding  

Researchers should prepare their budget and include all expected expenditures. An additional allowance for contingencies such as delays should also be factored in.  

7. Appendices  

This section typically includes information that supports the research proposal and may include informed consent forms, questionnaires, participant information, measurement tools, etc.  

8. Citations  


Important Tips for Writing a Research Proposal  

Writing a research proposal begins much before the actual task of writing. Planning the research proposal structure and content is an important stage, which if done efficiently, can help you seamlessly transition into the writing stage. 3,5  

The Planning Stage  

  • Manage your time efficiently. Plan to have the draft version ready at least two weeks before your deadline and the final version at least two to three days before the deadline.
  • What is the primary objective of your research?  
  • Will your research address any existing gap?  
  • What is the impact of your proposed research?  
  • Do people outside your field find your research applicable in other areas?  
  • If your research is unsuccessful, would there still be other useful research outcomes?  

  The Writing Stage  

  • Create an outline with main section headings that are typically used.  
  • Focus only on writing and getting your points across without worrying about the format of the research proposal , grammar, punctuation, etc. These can be fixed during the subsequent passes. Add details to each section heading you created in the beginning.   
  • Ensure your sentences are concise and use plain language. A research proposal usually contains about 2,000 to 4,000 words or four to seven pages.  
  • Don’t use too many technical terms and abbreviations assuming that the readers would know them. Define the abbreviations and technical terms.  
  • Ensure that the entire content is readable. Avoid using long paragraphs because they affect the continuity in reading. Break them into shorter paragraphs and introduce some white space for readability.  
  • Focus on only the major research issues and cite sources accordingly. Don’t include generic information or their sources in the literature review.  
  • Proofread your final document to ensure there are no grammatical errors so readers can enjoy a seamless, uninterrupted read.  
  • Use academic, scholarly language because it brings formality into a document.  
  • Ensure that your title is created using the keywords in the document and is neither too long and specific nor too short and general.  
  • Cite all sources appropriately to avoid plagiarism.  
  • Make sure that you follow guidelines, if provided. This includes rules as simple as using a specific font or a hyphen or en dash between numerical ranges.  
  • Ensure that you’ve answered all questions requested by the evaluating authority.  

Key Takeaways   

Here’s a summary of the main points about research proposals discussed in the previous sections:  

  • A research proposal is a document that outlines the details of a proposed study and is created by researchers to submit to evaluators who could be research institutions, universities, faculty, etc.  
  • Research proposals are usually about 2,000-4,000 words long, but this depends on the evaluating authority’s guidelines.  
  • A good research proposal ensures that you’ve done your background research and assessed the feasibility of the research.  
  • Research proposals have the following main sections—introduction, literature review, objectives, methodology, ethical considerations, and budget.  


Frequently Asked Questions  

Q1. How is a research proposal evaluated?  

A1. In general, most evaluators, including universities, broadly use the following criteria to evaluate research proposals . 6  

  • Significance —Does the research address any important subject or issue, which may or may not be specific to the evaluator or university?  
  • Content and design —Is the proposed methodology appropriate to answer the research question? Are the objectives clear and well aligned with the proposed methodology?  
  • Sample size and selection —Is the target population or cohort size clearly mentioned? Is the sampling process used to select participants randomized, appropriate, and free of bias?  
  • Timing —Are the proposed data collection dates mentioned clearly? Is the project feasible given the specified resources and timeline?  
  • Data management and dissemination —Who will have access to the data? What is the plan for data analysis?  

Q2. What is the difference between the Introduction and Literature Review sections in a research proposal?

A2. The Introduction or Background section in a research proposal sets the context of the study by describing the current scenario of the subject and identifying the gaps and need for the research. A Literature Review, on the other hand, provides references to all prior relevant literature to help corroborate the gaps identified and the research need.  

Q3. How long should a research proposal be?  

A3. Research proposal lengths vary with the evaluating authority like universities or committees and also the subject. Here’s a table that lists the typical research proposal lengths for a few universities.  

     
| Program | Typical length (words) |
| --- | --- |
| Arts programs | 1,000-1,500 |
| Law School programs (University of Birmingham) | 2,500 |
| PhD | 2,500 |
|  | 2,000 |
| Research degrees | 2,000-3,500 |

Q4. What are the common mistakes to avoid in a research proposal?

A4. Here are a few common mistakes that you must avoid while writing a research proposal.7

  • No clear objectives: Objectives should be clear, specific, and measurable so that readers can easily understand them.
  • Incomplete or unconvincing background research: Background research usually includes a review of the current state of the field and of the previous literature on the subject. This helps readers understand your reasons for undertaking the research, namely the gaps you have identified in the existing research.
  • Overlooking project feasibility: The project scope and estimates should be realistic considering the resources and time available.
  • Neglecting the impact and significance of the study: Readers and evaluators look for the implications or significance of your research and how it contributes to the existing research. This information should always be included.
  • Unstructured format: A well-structured document gives evaluators confidence that you have read the guidelines carefully and are well organized in your approach, consequently affirming that you will be able to undertake the research as described in your proposal.
  • Ineffective writing style: The language used should be formal and grammatically correct. If required, editors could be consulted, including AI-based tools such as Paperpal, to refine the research proposal’s structure and language.

Thus, a research proposal is an essential document that can help you promote your research and secure funds and grants for conducting your research. Consequently, it should be well written in clear language and include all essential details to convince the evaluators of your ability to conduct the research as proposed.  

This article has described all the important components of a research proposal and has also provided tips to improve your writing style. We hope these tips will help you write a well-structured research proposal, whether you are seeking a grant or writing for another purpose.

References  

  • Sudheesh K, Duggappa DR, Nethra SS. How to write a research proposal? Indian J Anaesth. 2016;60(9):631-634. Accessed July 15, 2024. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5037942/  
  • Writing research proposals. Harvard College Office of Undergraduate Research and Fellowships. Harvard University. Accessed July 14, 2024. https://uraf.harvard.edu/apply-opportunities/app-components/essays/research-proposals  
  • What is a research proposal? Plus how to write one. Indeed website. Accessed July 17, 2024. https://www.indeed.com/career-advice/career-development/research-proposal  
  • Research proposal template. University of Rochester Medical Center. Accessed July 16, 2024. https://www.urmc.rochester.edu/MediaLibraries/URMCMedia/pediatrics/research/documents/Research-proposal-Template.pdf  
  • Tips for successful proposal writing. Johns Hopkins University. Accessed July 17, 2024. https://research.jhu.edu/wp-content/uploads/2018/09/Tips-for-Successful-Proposal-Writing.pdf  
  • Formal review of research proposals. Cornell University. Accessed July 18, 2024. https://irp.dpb.cornell.edu/surveys/survey-assessment-review-group/research-proposals  
  • 7 Mistakes you must avoid in your research proposal. Aveksana (via LinkedIn). Accessed July 17, 2024. https://www.linkedin.com/pulse/7-mistakes-you-must-avoid-your-research-proposal-aveksana-cmtwf/  




Published on 7.8.2024 in Vol 13 (2024)

Development and Evaluation of a Web-Based Platform for Personalized Educational and Professional Assistance for Dementia Caregivers: Proposal for a Mixed Methods Study

Authors of this article:


  • Logan DuBose 1,2, MBA, MD
  • Qiping Fan 3, MS, DrPH
  • Louis Fisher 3
  • Minh-Nguyet Hoang 4, MBA
  • Diana Salha 1
  • Shinduk Lee 5, MSPH, DrPH
  • Marcia G Ory 1, MPH, PhD
  • Tokunbo Falohun 2,6, MS

1 School of Public Health, Texas A&M University, College Station, TX, United States

2 Olera Inc, Houston, TX, United States

3 Department of Public Health Sciences, Clemson University, Clemson, SC, United States

4 School of Medicine, Texas A&M University, College Station, TX, United States

5 College of Nursing, University of Utah, Salt Lake City, UT, United States

6 Department of Biomedical Engineering, Texas A&M University, College Station, TX, United States

Corresponding Author:

Qiping Fan, MS, DrPH

Department of Public Health Sciences

Clemson University

201 Epsilon Zeta Drive

Clemson, SC, 29634

United States

Phone: 1 864 656 3841

Email: [email protected]

Background: Alzheimer disease (AD) and AD-related dementia are prevalent concerns for aging populations. With a growing older adult population living in the United States, the number of people living with dementia is expected to grow, posing significant challenges for informal caregivers. The mental and physical burdens associated with caregiving highlight the importance of developing novel and effective resources to support caregivers. However, technology solutions designed to address their needs often face low adoption rates due to usability issues and a lack of contextual relevance. This study focuses on developing a web-based platform providing financial and legal planning information and education for dementia caregivers and evaluating the platform’s usability and adoptability.

Objective: The goal of this project is to create a web-based platform that connects caregivers with personalized and easily accessible resources. This project involves industrial, academic, and community partners and focuses on two primary aims: (1) developing a digital platform using a Dementia Care Personalization Algorithm and assessing feasibility in a pilot group of caregivers, and (2) evaluating the acceptability and usability of the digital platform across different racial or ethnic populations. This work will aid in the development of technology-based interventions to reduce caregiver burden.

Methods: The phase I study follows an iterative Design Thinking approach, involving at least 25 dementia caregivers as a user feedback panel to assess the platform’s functionality, aesthetics, information, and overall quality using the adapted Mobile Application Rating Scale. Phase II is a usability study with 300 dementia caregivers in Texas (100 African American, 100 Hispanic or Latinx, and 100 non-Hispanic White). Participants will use the digital platform for about 4 weeks and evaluate its usefulness and ease of use through the Technology Acceptance Survey.

Results: The study received funding from the National Institute on Aging on September 3, 2021. Ethical approval for phase I was obtained from the Texas A&M University Institutional Review Board on December 8, 2021, with data collection starting on January 1, 2022, and concluding on May 31, 2022. Phase I results were published on September 5, 2023, and April 17, 2024. On June 21, 2023, ethical approval for human subjects for phase II was granted, and participant recruitment began on July 1, 2023.

Conclusions: Upon completing these aims, we expect to deliver a widely accessible digital platform tailored to assist dementia caregivers with financial and legal challenges by connecting them to personalized, contextually relevant information and resources in Texas. If successful, we plan to work with caregiving organizations to scale and sustain the platform, addressing the needs of the growing population living with dementia.

International Registered Report Identifier (IRRID): DERR1-10.2196/64127

Introduction

Currently, 6.7 million Americans live with Alzheimer disease (AD) and AD-related dementia, a progressive and debilitating neurocognitive disease that leads to loss of memory, motor function, and other psychological symptoms [ 1 ]. With the growing aging population in the United States, the number of people living with dementia is projected to grow exponentially to 13.8 million by 2060 [ 1 ]. These trends indicate a growing public health crisis that requires multifaceted interventions. People living with dementia are primarily cared for by informal caregivers, typically spouses and adult children [ 2 , 3 ], with as many as 16 million Americans currently serving in this role [ 4 ]. These informal caregivers of people living with dementia often have a higher risk of developing depression, anxiety, social isolation, and physical problems due to the chronic stress and diverse burdens associated with caregiving [ 5 , 6 ]. In recent years, dementia caregivers are estimated to provide more than 18 billion hours annually with an estimated economic value of US $346 billion for their care services [ 1 ], and this informal care cost is projected to increase to US $2.2 trillion in 2060 [ 7 ].

A large part of the informal caregiver burden is related to various responsibilities in caregiving for people living with dementia, including management of the care recipient’s financial, legal, and estate-related challenges [ 6 , 8 ]. Without adequate support, caregivers often struggle to find or use proper resources, making even simple tasks burdensome. Furthermore, these primary family caregivers often must balance the challenges of caregiving with other personal responsibilities, such as employment and family obligations [ 8 , 9 ]. Considering the significant financial burden associated with caregiving, role strain is further exacerbated when caregiving requires a full-time commitment. Six out of 10 family caregivers of people with dementia have reported significant work impacts ranging from reduction of work hours to early retirement due to their caregiving obligations, which disrupts their wages and employment and depletes their savings [ 6 , 10 ]. The varied progressive nature of dementia further compounds the complexity of care for people with dementia, increasing the psychological, physical, and financial factors of the caregiving burden [ 1 , 11 ].

The variation in caregiver burdens related to sociodemographic characteristics contributes to greater complications. Burdens are often exacerbated in ethnic and minority populations, who generally have higher rates of dementia but lower access to caregiving services and information [ 12 , 13 ]. In addition, ethnic minorities can experience complications when communicating with care professionals due to cultural or linguistic barriers [ 13 ]. These barriers can disproportionately affect nonnative English speakers, increasing the informal caregiver burden due to the greater experienced inaccessibility in support systems within the American health care system. Unique stressors have also been demonstrated in gender or sexual minority caregivers, further emphasizing the diversity of caregiving experiences and challenges within the United States [ 14 ]. Literature has shown that considering unique barriers or challenges in diverse demographics when developing support interventions increases their effectiveness in reducing caregiver burden [ 15 ].

Technological Interventions and Artificial Intelligence–Driven Digital Health

There are many professional caregiver assistance organizations that provide services to informal caregivers, including aging care law, financial services, respite care, in-home care, older adults living, and medical services [ 16 ]. Digital interventions present a promising tool for assisting caregivers in identifying relevant services from these organizations. A myriad of digital interventions have been developed to aid caregivers of people with dementia in identifying these services, including web-based training, educational forums, caregiving support groups, and videoconferencing [ 17 - 19 ]. However, the adoption rate of these technologies remains low outside of pilot studies. This low adoption rate can be attributed to factors such as poor usability and accessibility, information complexity, funding limitations, and lack of contextualized relevance [ 8 , 20 - 24 ]. This issue is especially pronounced among minority populations, such as African American and Hispanic or Latinx caregivers, where cultural differences and caregiving challenges are often not considered in the development and evaluation of technology-based interventions [ 25 ].

Factors influencing the implementation and adoption of technology-based interventions by informal caregivers include the expected or perceived value by users, the features of the technology, the characteristics of the caregivers, and the condition of dementia [ 26 ]. According to the Technology Acceptance Model (TAM), perceived usefulness and ease of use are the 2 most important factors in determining how likely users are to adopt or reject new technologies [ 27 ]. The features of developed technologies, such as physical appearance, simplicity, and usability, are crucial for caregiver adoption [ 26 ]. In addition, the continued adaptation of technology over time, as dementia progresses and as caregiving needs evolve, is noted as important [ 26 ]. Family caregivers come from various racial and ethnic groups, generations, occupations, financial situations, and educational and cultural backgrounds [ 6 ], and their characteristics and specialized care needs are widely mentioned as influencing factors for technology adoption [ 26 ].

Artificial intelligence (AI) can offer various benefits to developing interventions that help caregivers better understand and use information in real time, leading to a more personalized and effective decision-making process [ 28 ]. AI has the potential to improve perceived usefulness and ease of use by personalizing and simplifying complex information. For instance, AI-driven applications can often offer user-friendly interfaces and navigation, reducing the learning curve for caregivers who may not be technologically savvy and enhancing user experience. Moreover, AI can be used to create and deliver personalized dementia care suggestions, meeting the evolving needs of people with dementia and their caregivers [ 29 ].

Development of a Caregiving Support Platform

While there is increasing interest in technology interventions for dementia caregiving, limited literature has explored their effectiveness outside of research settings. Early involvement of caregivers, as emphasized in community participatory research and dissemination and implementation science, can enhance intervention adoptability and sustainability [ 18 , 30 - 32 ]. Our study seeks to address the paucity of research by exploring the efficacy of delivering caregiving resources and services in a technology-driven setting that considers individualized and cultural issues, in order to provide personalized, relevant, timely interventions to caregivers of people living with dementia.

To address this gap, our team developed a website-based platform designed to provide personalized caregiver support, including identification of caregiving resources related to older adults living arrangements, financial services, and legal services, as well as education related to dementia management and caregiving. In our web-based care planning tool, the Olera.care platform, we use our Dementia Care Personalization Algorithm (DCPA) based on logic decision trees and geolocation to provide caregivers of people living with dementia with a tailored guide on the aging care, older adults living, financial, and legal aspects of dementia caregiving (Olera.care). Studies have shown that such caregiving assistance is particularly effective in reducing caregiver burden [ 8 ]. To provide a technological intervention that would reduce the caregiving burden and address the needs of caregivers of people living with dementia, we developed a study with 2 phases centered in a community-engaged approach [ 33 , 34 ].

In further development cycles, various AI elements will be incorporated into our digital platform to take advantage of the rapidly advancing technology to increase acceptance and adoption. These AI elements include large language models (LLMs), a novel class of AI, and personalized care planning agents specialized in social assistance functions and resource connection. LLMs are a type of AI program that can recognize and interpret vast amounts of human language, texts, and data. Fine-tuning of LLMs through aging care domain-specific training can vastly improve algorithm accuracy and provide the most up-to-date information that is relevant for each user, enabling caregivers to navigate and use preexisting resources and information databases more effectively. The LLM’s capacity is particularly valuable in navigating the complexity of dementia caregiving information, especially for individuals unfamiliar with financial and legal terminology, those unfamiliar with terminology related to aging services, or those who are uncertain about their specific needs. In addition, personalized care planning agents, capable of perceiving information and making decisions, can be integrated to guide users through various resources and instructions for navigating websites, significantly improving the user interface and user experience. These agents can be optimized using machine learning, specifically through reinforcement learning from human feedback that leverages domain expertise to train the AI to output more contextually relevant responses.

Evaluation Metrics and Framework

There are several metrics to evaluate the usability, ease of use, and acceptance of health care technological interventions, including the Mobile Application Rating Scale (MARS) and the TAM [ 35 - 37 ]. The MARS evaluates a digital product based on the functionality, design, information quality, engagement, and subjective quality of digital applications using a 5-point Likert scale survey [ 35 ]. The TAM evaluates the acceptability of a technology primarily based on its perceived usefulness and perceived ease of use among users [ 36 , 37 ]. Perceived usefulness refers to the degree to which a person believes that using a particular system can allow them to work more quickly, fulfill its intended purpose, increase productivity, enhance effectiveness, and make the job easier, with responses ranging from “strongly disagree” to “strongly agree” [ 38 ]. Similarly, perceived ease of use indicates the degree to which a person believes that the system is easy to learn, controllable, clear and understandable, flexible, easy to become skillful, and easy to use [ 38 ]. According to the TAM, perceived usefulness and ease of use are critical factors influencing the adoption or rejection of new technologies, and it has been used to determine the likelihood of dementia caregivers adopting digital technology interventions [ 39 ]; therefore, it is particularly suited for this evaluation study.

Study Objectives

This study has 2 phases to develop, evaluate, and refine the Olera.care web-based platform. Phase I aims to develop the Olera.care platform, identify the caregiving challenges and needs of informal caregivers, and pilot-test the performance of the platform among a group of dementia caregivers. Phase II aims to understand the perceived usefulness and ease of use of the Olera.care platform among 3 different sociodemographic groups of informal caregivers of people living with dementia: African American, Hispanic or Latinx, and non-Hispanic White American.

Study Overview

The study consists of 2 phases of platform development and evaluation ( Textbox 1 ). Phase I of the study involves identifying caregiving challenges and needs, developing the Olera.care web-based platform, and pilot-testing the usability of the platform in a group of dementia caregivers. Phase II aims to evaluate the Olera.care platform among 3 of the largest racial and ethnic groups in Texas and iteratively develop the Olera.care platform. The human subjects research plan was critiqued by National Institutes of Health reviewers, and key comments ( Multimedia Appendix 1 ) were addressed, including clarifying the risks and benefits for human participants in the informed consent forms and the privacy protection protocols.

Phase I: Platform development and pilot test

Development of platform (Build Stage)

  • Task 1: Compile care resources and educational materials
  • Task 2: Build a logic decision tree for personalized recommendations
  • Task 3: Design user questionnaire to collect user characteristics and develop prototypes
  • Task 4: Develop a web-based application

Pilot testing of the platform in caregivers (n=30)

  • Qualitative research: Identify caregiving challenges and needs for the platform using one-on-one interviews with caregivers
  • Quantitative research: Collect user characteristics and preliminarily evaluate the platform using modified Mobile Application Rating Scale

Phase II: Iterative platform development and evaluation

Iterative development of the platform

  • Expand care resources database outside of Texas
  • Update educational materials of Alzheimer disease and Alzheimer disease–related dementia
  • Integrate phase I feedback to platform features

Usability study across different racial or ethnic groups (n=300)

  • Develop a comprehensive evaluation survey using the technology acceptance model
  • Enroll and instruct caregivers to complete tasks:
    1. Generate a personalized caregiving checklist
    2. Identify and save 3 relevant resources to “favorite” list
    3. Register for “recommendation of the Day” and complete technology acceptance survey (TAS)
    4. Four-week interaction with the platform followed by a second completion of the TAS
  • Evaluate perceived usefulness and ease of use of the platform across caregiver characteristics

Platform Development

The development of the platform (Olera.care) adopts a build-measure-learn framework to prioritize the needs and preferences of caregivers for people living with dementia, using the Design Thinking product design methodology [ 40 ]. This iterative development process consists of three major steps: (1) prototypes will be developed to address preidentified user needs [Build Stage], (2) product usability will be tested [Measure Stage], and (3) key lessons will be identified for changes in next product iteration [Learn Stage]. The platform uses user feedback and a self-improving DCPA to provide caregivers with tailored educational information on the legal, financial, and estate planning aspects of dementia caregiving, as well as details on relevant local care and support options. DCPA considers care recipients’ stage of disease, financial circumstances, and care preferences to create unique recommendations. As a result, when the platform is in use, information is presented in a personalized and user-friendly manner that enhances the experience of caregivers when searching online for Alzheimer disease and AD-related dementia (AD/ADRD) support and reduces time spent sorting through general search engine results. We are improving the DCPA algorithm by leveraging LLMs and personalized AI agents. These AI agents will enhance our ability to provide real-time support, guidance, and information to caregivers to find dementia care services. By learning from caregivers’ interactions with Olera.care, these AI agents will continuously improve in delivering personalized and contextually relevant assistance.

The platform is developed to be a free web-based service to assist caregivers in connecting them with educational and professional assistance. The educational materials ( Figures 1 and 2 ) are created in collaboration with subject matter experts from local Area Agencies on Aging and Texas A&M University Center for Community Health and Aging. In addition, the National Institute on Aging, the American Association of Retired Persons, and the Texas Alzheimer’s Research and Care Consortium offer detailed informational materials on the legal and financial considerations for our targeted caregiver groups, which we frequently reference. Information on local care (ie, non–medical home care, assisted living, and nursing home) options ( Figure 3 ) is gathered from public knowledge bases such as websites of Medicare and the websites of local Area Agencies on Aging (eg, Brazos Valley Area Agency on Aging, Harris County Area Agency on Aging, and Area Agency on Aging of the Capital Area). The primary objective of this step is to compile a comprehensive library of information on the functional, legal, financial, and estate planning aspects of AD/ADRD caregiving, along with a detailed list of care facilities in Texas.

The educational content gathered was organized after compilation. This process involved dividing the information into clear, manageable segments and classifying each segment into one of three categories: (1) legal, financial, or other services; (2) home care; and (3) senior communities. Once categorized, a logical decision tree was created to match each piece of information with specific combinations of caregiver characteristics provided by the user. The algorithm for this process was developed using Python (Python Software Foundation) due to its straightforward syntax, extensive open-source libraries, and capability for efficient data manipulation [ 41 ]. In the final website-based application, caregiver characteristics will serve as input criteria, including factors such as the dementia stage of the care recipient, the caregiver’s knowledge and skills, the availability of external resources, the care recipient’s insurance status, financial capacity for care, and family support presence. These criteria will be collected when users create their accounts for the first time, determining the personalized information and recommendations from each of the 3 categories that will be presented to the caregiver. The logic decision tree was ultimately converted into a user-friendly website-based platform for caregivers to access customized educational materials (eg, videos, articles) and relevant care options (eg, at-home care, older adults living arrangements, paying for care options, caregiver support). Google’s Firebase is employed for backend programming to enable seamless data synchronization, user authentication, and secure data management. The front-end user interface is built using Google’s Flutter software development kit, facilitating cross-platform development for Android, iOS, and web applications from a single codebase. This approach streamlines the development process and reduces long-term maintenance costs.
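
As a rough illustration of how a logic decision tree of this kind can turn a short intake questionnaire into recommendation categories, the sketch below encodes a few hypothetical branches in Python, the language the algorithm was built in. The field names, thresholds, and category labels are invented for illustration and are not the actual DCPA rules.

```python
# Minimal sketch of a rule-based recommender in the spirit of a logic decision
# tree. All field names, thresholds, and categories here are hypothetical.
from dataclasses import dataclass


@dataclass
class CaregiverProfile:
    dementia_stage: str        # "early", "middle", or "late"
    has_insurance: bool
    monthly_care_budget: int   # US dollars available for paid care
    has_family_support: bool
    zip_code: str              # used later to filter local providers


def recommend(profile: CaregiverProfile) -> list:
    """Walk the decision tree and return recommendation categories."""
    recs = []

    # Legal and financial planning is most actionable early in the disease.
    if profile.dementia_stage == "early":
        recs.append("legal_financial_planning")

    # Suggest paid in-home care only if insurance or the budget can support it.
    if profile.has_insurance or profile.monthly_care_budget >= 2000:
        recs.append("home_care_agencies")
    else:
        recs.append("benefit_and_assistance_programs")

    # Late-stage care without family support points toward residential options.
    if profile.dementia_stage == "late" and not profile.has_family_support:
        recs.append("senior_communities")

    # Geolocation-based filtering of local resources would happen downstream.
    recs.append("local_resources:" + profile.zip_code)
    return recs


print(recommend(CaregiverProfile("early", False, 800, True, "77840")))
```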


Phase I: Mixed Methods Study—Needs Assessment and Pilot Testing

The phase I study is designed in a sequential mixed methods approach to understand the caregiving challenges and needs of caregivers and assess the platform’s usability with a pilot panel of AD/ADRD caregivers. The mixed-methodological design aims to enhance the depth and richness of data interpretation. The qualitative research serves as a need assessment, providing comprehensive information on the caregiving needs and expectations for the digital platform. Subsequently, the quantitative research evaluates the platform’s usability and usefulness for caregiving and identifies areas for platform enhancements.

The qualitative part involves semistructured interviews with caregivers, focusing on financial and legal challenges related to caregiving, unmet needs in financial management and legal planning, and expectations for a caregiving support platform. These interviews are recorded and transcribed for qualitative data analysis using thematic analysis in a framework approach [ 42 ]. Participants were also invited to complete a survey including questions about sociodemographic and caregiving characteristics, as well as questions on the usage and preferences for older adult care services.

A panel of dementia caregivers was invited to test the prototype web-based application by performing specific tasks, such as searching for financial and legal information. The study team is responsible for monitoring task completion, recording the status and time taken, and taking observation notes. After the prototype test, participants are invited to complete a usability survey to rate the perceived usefulness, ease of use, and overall rating of the web-based platform in supporting their caregiving needs. The usability of the platform was assessed using a modified MARS, which evaluates functionality, design, information quality, and engagement on a 5-point Likert scale [ 35 ]. The development goal is a MARS score of 3.6 or higher [ 43 ] among the pilot group of caregivers. The collected information will enable the development team to learn from the user feedback and plan for continuous product enhancement.
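
For context, a MARS-style summary score is typically obtained by averaging the item ratings within each subscale and then averaging the subscale means. The snippet below is a small, hypothetical example of that calculation checked against the 3.6 development goal; it is not the study’s actual scoring script, and the ratings are made up.

```python
# Hypothetical example: turning 5-point Likert item ratings into a MARS-style
# summary score and checking it against the 3.6 development goal.
from statistics import mean

ratings = {
    "engagement":    [4, 3, 4, 4, 3],
    "functionality": [5, 4, 4, 4],
    "aesthetics":    [4, 4, 3],
    "information":   [4, 3, 4, 4],
}

subscale_means = {domain: mean(items) for domain, items in ratings.items()}
overall = mean(subscale_means.values())  # app quality mean across subscales

print(subscale_means)
print(f"Overall score: {overall:.2f} (meets 3.6 target: {overall >= 3.6})")
```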

Phase II: Usability Evaluation Study Among Diverse Racial and Ethnic Groups

The phase II study is a usability study that evaluates the acceptance of the platform among the 3 largest racial and ethnic groups in Texas (Black or African American, Hispanic or Latinx, and non-Hispanic Whites) from a diverse socioeconomic spectrum. Participants who are interested in the study will complete a short interest form consisting of 4 questions to provide contact information. Participants will then be contacted to complete an intake form via Qualtrics consisting of eligibility screening questions; an informed consent form; questions on their demographic, socioeconomic, and caregiving characteristics; and an assessment of the care recipient’s dementia stage using a validated 8-item screening instrument, the Ascertain Dementia-8 [ 44 ]. Next, eligible participants can select either to explore the platform and complete the rest of the study on their own for greater flexibility or to schedule a guided session with 1 of the study staff over Zoom (Zoom Video Communications, Inc). With instructions from the study staff, participants will create an account on the Olera.care platform and generate a personalized caregiver checklist based on their needs by answering a few questions. After setting up their account, participants will explore and use the platform for 4 weeks. At the end of the 4 weeks, participants are invited to provide feedback by filling out an evaluation survey of the platform, the Technology Acceptance Survey, which was developed based on the TAM, a well-established framework for assessing the perceived usefulness and ease of use of information technologies [ 38 , 45 ].

Participant Eligibility and Recruitment

To be eligible for phase I, participants must meet the following criteria: (1) be the primary unpaid caregiver for a person living with dementia, (2) provide a minimum of 10 hours of care per week to a person living with dementia who has not been institutionalized, (3) be an adult child, spouse, or family member of the person living with dementia, (4) have concerns about or perceive the need for additional information on financial management and legal planning for caregiving in Texas, and (5) have access to a smartphone or computer with internet connectivity. Paid formal caregivers will be excluded from this study. For the phase II study, the study participants must meet an additional criterion: be of Caucasian, African American, or Latino or Hispanic descent. Study participants are recruited in collaboration with the Center for Community Health and Aging at Texas A&M University and the Brazos Valley Area Agency on Aging. Web-based advertisements, emails, in-person presentations, and network recruitments are used to recruit participants. Potential participants will be invited to complete an eligibility assessment survey, and eligible participants will be invited to enroll in the study.

Ethical Considerations

Human subject research approval (institutional review board [IRB] number: IRB2021-0943 D) was obtained from the Institutional Review Board at Texas A&M University, with phase I study approved on December 8, 2021, and phase II study approved on June 21, 2023. Electronic informed consents are obtained from study participants before study activities. Participants’ personal identifying information (eg, names, emails, phone numbers) was used solely for contacting purposes. We protected participants’ privacy and confidentiality by limiting access to IRB-approved team members, separating identifying information from deidentified study data, encrypting study information on Microsoft OneDrive, and deleting identifying information upon project completion. Participants in phase I received a US $25 e-gift card for completing both interviews and surveys, and phase II participants will receive up to US $50 e-gift card for completing the study.

Sample Size Calculations

The phase II research hypothesis is that there will be a difference in perceived usefulness and ease of use between racial and ethnic groups. For the sample size calculation, we assumed a minimum power of 80% and a type I error rate of 5%. Given the limited research on the influence of racial or ethnic factors on caregiving technology acceptance, we estimated the necessary sample size across a range of effect sizes (f²=0.02-0.35). Our analysis model will adjust for caregivers’ sociodemographic characteristics, caregiving duration, and the dementia stage of the care recipient. Depending on the effect size, the required sample size varies from approximately 40 to 550 participants [ 46 ]. Previous studies have shown low retention rates, around 50%, among informal caregivers of individuals with AD/ADRD [ 47 , 48 ], and even lower retention rates among Hispanic or Latinx caregivers [ 47 ]. By following recommended recruitment and retention strategies and drawing on the recruitment experience of the Texas A&M Center for Community Health and Aging in various community and clinical projects, we estimate that a minimum total sample size of 300 for enrollment surveys (100 per group), with 150 for conclusion surveys (50 per group), is necessary. This minimum sample size of 150 caregivers is feasible and can detect a small effect size of 0.09.
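
For readers who want to reproduce this kind of calculation, the sketch below approximates the required sample size for a multivariable regression F-test from Cohen's f² using the noncentral F distribution. It is a generic illustration, not the authors' calculation, and the assumed numbers of tested predictors (u=2 race or ethnicity dummies) and total model predictors (k=8, including covariates) are placeholders introduced here.

    # Generic sketch of a sample size calculation for a regression F-test based on
    # Cohen's f-squared and the noncentral F distribution. The values of u (tested
    # predictors) and k (total model predictors) are assumptions for illustration.
    from scipy.stats import f as f_dist, ncf

    def power_for_n(n, f2, u, k, alpha=0.05):
        """Approximate power of the F-test for u predictors in a model with k predictors."""
        v = n - k - 1                         # denominator degrees of freedom
        if v <= 0:
            return 0.0
        lam = f2 * (u + v + 1)                # noncentrality parameter (Cohen's convention)
        f_crit = f_dist.ppf(1 - alpha, u, v)  # critical value of the central F distribution
        return 1 - ncf.cdf(f_crit, u, v, lam)

    def required_n(f2, u, k, alpha=0.05, target_power=0.80):
        """Smallest sample size reaching the target power."""
        n = k + 2
        while power_for_n(n, f2, u, k, alpha) < target_power:
            n += 1
        return n

    # Effect sizes spanning the range considered in the protocol (f2 = 0.02-0.35)
    for f2 in (0.02, 0.09, 0.15, 0.35):
        print(f"f2 = {f2}: n is approximately {required_n(f2, u=2, k=8)}")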

Data Analysis

For phase I data, thematic analysis of qualitative interview transcript data was conducted using the framework method [ 42 , 49 ]. The framework method consists of several essential steps: transcribing interviews, familiarizing oneself with the interview material, coding, developing an analytical framework, applying this framework, charting data into the framework matrix, and interpreting the data [ 42 ]. In addition, descriptive analyses of quantitative survey responses will be conducted to describe the sociodemographic and caregiving characteristics of the participants, their usage and preference for care services, and their evaluations of the platform’s functionality, design, information quality, and engagement.

For phase II survey data, descriptive statistics (mean and SD or frequency and percentage) will be used to describe the characteristics of study participants and their perceived usefulness and ease of use of the Olera.care digital platform. Using the Technology Acceptance Survey data, separate multivariable regression models will be used to examine any differences in each key outcome (perceived usefulness and ease of use of the Olera.care digital platform) by racial or ethnic characteristics and socioeconomic status (eg, education and income level). The regression models will be adjusted for known factors that influence technology adoption (eg, age, sex, and care recipients’ dementia stage).
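
The snippet below is a minimal sketch of what such an adjusted model could look like in practice; it is not the authors' analysis code, and the data file and column names (usefulness, race_ethnicity, age, sex, education, income, dementia_stage) are hypothetical placeholders for the Technology Acceptance Survey data set.

    # Minimal sketch (not the study's actual code): an adjusted OLS model testing
    # whether perceived usefulness differs by race or ethnicity. An analogous model
    # would be fit for perceived ease of use. All names below are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.read_csv("tas_phase2.csv")  # hypothetical export of the phase II survey

    model = smf.ols(
        "usefulness ~ C(race_ethnicity) + C(sex) + age + C(education) + C(income) + dementia_stage",
        data=df,
    ).fit()

    print(model.summary())         # adjusted coefficients for each group
    print(anova_lm(model, typ=2))  # type II F-test for the race/ethnicity term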

The study received funding from the National Institute on Aging on September 3, 2021. Ethical approval for phase I was obtained from the Texas A&M University Institutional Review Board on December 8, 2021, with data collection starting on January 1, 2022, and concluding on May 31, 2022. Phase I results were published on September 5, 2023, and April 17, 2024. Ethical approval for the phase II study was granted on June 21, 2023; participant recruitment began on July 1, 2023, and is anticipated to end by December 31, 2024. We expect to publish our phase II results by June 30, 2025. The Olera.care platform has seen promising growth since its official launch in 2023, now recording more than 350 daily logins and a steady increase in organic traffic from August 6, 2023, to June 1, 2024 ( Multimedia Appendix 2 ). In addition, our Facebook “Healthy Aging Community” had more than 11,000 followers as of June 1, 2024. We aim to expedite this organic growth of our website and social media platforms in a way that requires no paid advertisements and is sustained only by quality content on the website attracting visitors (organic traffic). Our goal is to reach more than 30,000 caregivers daily by June 30, 2027, through continued investment in quality content, strategic partnerships, and community engagement.

Platform Description and Functionality

The primary function of the Olera.care platform is to provide personalized recommendations for learning materials, including articles and videos, tailored to caregivers’ specific needs. The platform also offers recommended listings of local service providers across various domains such as legal and financial advisors (eg, attorneys specializing in older adult law and certified financial planners), home care providers, senior living options, and public support services such as Meals on Wheels and Area Agencies on Aging. This personalized approach aims to simplify the web-based research process for caregivers, providing a guided experience akin to expert advice.

Key features of the Olera.care platform ( Figure 4 ) include the following:

  • Personalized recommendations: The DCPA customizes content based on the care recipient’s disease stage, financial circumstances, and care preferences. This ensures that caregivers receive relevant, contextually appropriate information and resources (a toy sketch of this kind of rule-based matching appears after this list).
  • Educational materials: The platform hosts a comprehensive library of educational content created in collaboration with experts from various organizations, including the National Institute on Aging, the American Association of Retired Persons, and the Alzheimer’s Association.
  • Service provider listings: Caregivers can access a curated list of local care options, including non–medical home care, assisted living, and nursing homes, gathered from reliable public sources.
  • User-friendly interface: Developed using the Ruby on Rails web application framework for a seamless cross-platform experience, the platform ensures ease of use across different devices.
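
The toy sketch below illustrates, in schematic terms, what this style of rule-based matching between a caregiver profile and tagged resources can look like; it is a hypothetical illustration only and does not represent the platform's actual DCPA, data model, or resource tags.

    # Toy, hypothetical sketch of rule-based content matching in the spirit of the
    # DCPA described above; it is NOT the platform's actual algorithm, and all
    # fields and tags are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class CaregiverProfile:
        dementia_stage: str          # e.g., "early", "middle", "late"
        financial_concerns: bool
        preferred_setting: str       # e.g., "home care", "assisted living"

    @dataclass
    class Resource:
        title: str
        stages: set = field(default_factory=set)
        topics: set = field(default_factory=set)

    def recommend(profile, catalog):
        """Return resources whose tags match the caregiver's stage and stated needs."""
        wanted = {profile.preferred_setting}
        if profile.financial_concerns:
            wanted |= {"financial planning", "legal planning"}
        return [r for r in catalog
                if profile.dementia_stage in r.stages and r.topics & wanted]

    catalog = [
        Resource("Paying for memory care", {"middle", "late"}, {"financial planning"}),
        Resource("Choosing a home care agency", {"early", "middle"}, {"home care"}),
    ]
    matches = recommend(CaregiverProfile("middle", True, "home care"), catalog)
    print([r.title for r in matches])  # both example resources match this profile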


Marketing and User Growth Strategy

Since its launch, the Olera platform has focused on building organic traffic through several key strategies:

Community Building

The “Healthy Aging Community,” a web-based space hosted on Facebook for seniors and caregivers, has become a significant marketing channel, with more than 11,000 registered members. This community fosters engagement and word-of-mouth promotion, driving new users to the platform.

Content Marketing and Search Engine Optimization

The platform’s Educational Material and Caregiver Resource repository features more than 20,000 indexed pages of high-quality content ( Multimedia Appendix 3 ). This content attracts organic traffic from users searching for relevant caregiving information, supported by digital public relations efforts and outreach to journalists, bloggers, and podcast hosts.

Strategic Partnerships

Collaborations with organizations such as the Alzheimer’s Association and Texas A&M Center for Community Health and Aging have expanded the platform’s reach. These partnerships are crucial for user growth, with plans to engage more caregivers through connections with local Area Agencies on Aging and older adult centers.

Promotion to Service Providers

Marketing efforts also target service providers in the older adult care industry. Promotions for new providers and participation in industry conferences help build a robust network of providers on the platform. When a family expresses interest in a provider, we forward the family’s contact information to that organization as a lead. Weekly lead volume closely tracks the volume of organic visits from caregivers using the platform ( Multimedia Appendix 4 ).

Study Progress by June 30, 2024

Phase I study data collection started on January 1, 2022, and ended on May 31, 2022. A total of 822 respondents filled out prescreening surveys, and 150 (18.2%) of them were qualified. Of these eligible respondents, 20% (30/150) participated in the in-depth interviews and completed the survey. The preliminary results of the phase I study were disseminated at several regional and national conferences (ie, the 2024 annual meeting of the American Academy of Health Behavior; the 2024 Texas Alzheimer’s Research and Care Consortium Symposium; the 2023 Healthy Aging and Dementia Research Symposium; and the 2023 annual meeting of the American Association for Geriatric Psychiatry), and formal results were published on September 5, 2023 [ 8 ] and April 17, 2024 [ 50 ].

The phase II study commenced on July 1, 2023; we aim to complete recruitment and data collection by December 31, 2024, and to analyze and report our phase II evaluation results by June 30, 2025. By thoroughly evaluating the Olera.care digital platform across diverse sociodemographic groups, we aim to ensure that the platform is both effective and user-friendly for a broad audience. This evaluation will provide critical insights into the specific needs and preferences of different caregiver populations, enabling us to tailor the platform more precisely.

Principal Findings

This study presents an iterative approach to developing, refining, and evaluating a novel caregiver assistive technology. Current literature on the efficacy and acceptance of predeveloped caregiver interventions, while providing context on caregiver needs and future directions, often overlooks user feedback during the developmental phase. Using a build-measure-learn approach, the development of the Olera.care platform incorporates caregiver opinions and thoughts throughout the process. In the phase I study, we used a mixed methods approach to obtain comprehensive feedback through both qualitative and quantitative measures and to examine the quality of the developed platform using the validated MARS tool. In the phase II study, we are evaluating the acceptance of the technology among 3 major racial and ethnic groups using the TAM framework. This enables a comparison of measurements of perceived ease of use and usefulness, similar to another study that used the TAM framework to evaluate the acceptability of a care coordination platform [ 51 ].

A significant difference between our product and recently developed caregiver platforms is the multifaceted nature of the Olera.care platform. While many caregiving assistive technologies aim to decrease caregiver burden and strain, they mainly focus on only 1 or 2 aspects of the caregiving experience, thus limiting their use. For example, while the end goal is the same, these technologies often target specific aspects such as caregiver management [ 51 - 54 ], clinical reasoning [ 55 ], caregiver education and training [ 56 , 57 ], or caregiver mental well-being [ 58 ]. The Olera.care platform serves functions in all 4 of the identified components and is thus able to serve as a simplified hub capable of assisting caregivers in all needed aspects. For these reasons, Olera.care is a more rounded and comprehensive intervention than existing platforms.

Strengths and Limitations

One of the key strengths of the study is the build-measure-learn process. This iterative approach ensures that caregiver feedback is incorporated throughout the development phase, leading to a more user-centric design. By continuously refining the platform based on user input, the Olera.care platform is better tailored to meet the actual needs and preferences of caregivers, enhancing its usability and effectiveness. Another significant strength is the mixed methods approach in phase I, which allows for the collection of both comprehensive qualitative data on caregiving needs and expectations and quantitative measures of usability and usefulness using the validated MARS tool. Most similar studies have evaluated the acceptance of an intervention using only qualitative, rather than quantitative, measures [ 52 , 53 , 55 ]. Other studies have demonstrated the use of a mixed methods approach and the MARS in broadening the caregiver perspective [ 54 , 56 , 59 , 60 ]. While app quality can be assessed in many ways, applying the MARS during developmental testing helps guard against poor quality, especially given that the average caregiving app was found to be “of [minimal] acceptable quality” that is “likely to be insufficient to meet care partner needs” [ 61 ]. Furthermore, this study stands out by investigating the acceptance of the technology among diverse ethnic and racial groups of caregivers in phase II. A diverse study population is essential, given the aforementioned burden disparities across racial and ethnic groups. Limitations in similar studies often arise from a lack of generalizability and of a broadly representative sample [ 56 , 57 ]. The often unintentional focus on non-Hispanic White populations results in a lack of culturally tailored interventions [ 62 ]. This study will help bridge this gap by focusing on the needs and feedback across racial or ethnic populations.

The study has several limitations typical of research in this field. Recruiting racially and ethnically diverse caregiver participants, specifically from the 3 largest groups in Texas (non-Hispanic White, Hispanic or Latinx, and African American individuals), requires intensive effort. Despite significant efforts to collaborate with communities and local agencies for recruitment, achieving a truly representative sample remains challenging. In addition, we were unable to include other racial or ethnic groups such as Asian American, Pacific Islander, Indian American, and Multiracial American individuals. This omission is noteworthy, as it potentially excludes populations that could offer important insights. However, the decision to focus on the 3 major groups was based on the aim of examining differences across these groups and on the feasibility of recruiting caregivers in Texas. In addition, while our sample was racially diverse, it may not have been socioeconomically diverse or representative of all residents in Texas and the United States. Therefore, it is important to compare our participants’ characteristics with Texas and national caregiver profiles to determine the representativeness of our recruited sample. Despite measuring household income levels and self-reported financial situations, unmeasured factors may influence the acceptance and perceived usefulness of the platform. For instance, health literacy and technology literacy can significantly affect access to and use of web-based caregiving resources [ 63 , 64 ]. Another limitation is the lack of longitudinal testing to observe changes in perceived usefulness over time. This gap means we are unable to assess how perceived acceptance and ease of use might evolve as caregivers continue to use the platform and their caregiving responsibilities change. Longitudinal studies are essential to understanding the sustained impact and usability of interventions over time. This underscores the need for a longitudinal study design and a more comprehensive assessment of caregiver characteristics in our future studies. By addressing these limitations in future research, we can gain a deeper understanding of the long-term effectiveness and broad applicability of the platform across various demographic and socioeconomic groups.

Conclusions

This 2-phase study focuses on caregiver-based iterative development and evaluation of the quality and usability of the platform among caregivers. The development of our platform incorporates the needs and opinions of end users throughout the entire process to ensure the creation of an effective product. The platform is rigorously assessed for overall app quality and acceptability using validated tools in a pilot group and across diverse caregivers, respectively. The phase I results demonstrate that digital platforms, especially those offering personalized and comprehensive support, hold significant promise for supporting family caregivers of people living with dementia. By the end of the study, we expect to deliver a highly accessible digital platform designed to assist dementia caregivers in managing financial and legal challenges by linking them to personalized and contextually relevant information and resources in Texas. If the Olera.care platform demonstrates usefulness and ease of use, we plan to collaborate with caregiving organizations to expand it nationally, addressing the needs of the growing population of dementia caregivers.

Acknowledgments

The authors express their gratitude for the support provided by the Center for Community Health and Aging at Texas A&M University and community partners. They are also thankful to all the study participants. This research was funded by the National Institute on Aging Small Business Innovation Research program (contract 1R44AG074116-01, Solicitation AG21-025). The findings and conclusions expressed in this study are those of the authors and do not necessarily reflect the views of the National Institute on Aging.

Data Availability

The data sets generated and analyzed during this study are not publicly available to protect study participants’ privacy, but deidentified data sets are available from the corresponding author on reasonable request.

Authors' Contributions

TF, LD, SL, and MGO contributed to the original proposal submitted to the National Institute on Aging. LD, QF, MNH, LF, and TF contributed to this manuscript’s first draft writing. SL and MGO contributed to supervision. All authors contributed to the critical revision of the manuscript.

Conflicts of Interest

TF and LD are executives and owners of Olera, Inc.

Multimedia Appendix 1: NIH peer review summary statement for the original proposal.

Multimedia Appendix 2: Number of visitors who found the Olera.care website through free search engine traffic, by June 1, 2024.

Multimedia Appendix 3: Number of pages indexed by Google by June 1, 2024.

Multimedia Appendix 4: Number of client requests by June 1, 2024.

  • 2024 Alzheimer's disease facts and figures. Alzheimers Dement. May 2024;20(5):3708-3821. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wolff JL, Spillman BC, Freedman VA, Kasper JD. A national profile of family and unpaid caregivers who assist older adults with health care activities. JAMA Intern Med. 2016;176(3):372-379. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Friedman EM, Shih RA, Langa KM, Hurd MD. US prevalence and predictors of informal caregiving for dementia. Health Aff (Millwood). Oct 2015;34(10):1637-1641. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kokorelias KM, Gignac MAM, Naglie G, Rittenberg N, MacKenzie J, D'Souza S, et al. A grounded theory study to identify caregiving phases and support needs across the Alzheimer's disease trajectory. Disabil Rehabil. Apr 2022;44(7):1050-1059. [ CrossRef ] [ Medline ]
  • Alspaugh ME, Stephens MA, Townsend AL, Zarit SH, Greene R. Longitudinal patterns of risk for depression in dementia caregivers: objective and subjective primary stress as predictors. Psychol Aging. Mar 1999;14(1):34-43. [ CrossRef ] [ Medline ]
  • Caregiving in the United States 2020. AARP and National Alliance for Caregiving. May 14, 2020. URL: https://doi.org/10.26419/ppi.00103.001 [accessed 2024-06-10]
  • Nandi A, Counts N, Bröker J, Malik S, Chen S, Han R, et al. Cost of care for Alzheimer's disease and related dementias in the United States: 2016 to 2060. NPJ Aging. Feb 08, 2024;10(1):13. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Fan Q, DuBose L, Ory MG, Lee S, Hoang M, Vennatt J, et al. Financial, legal, and functional challenges of providing care for people living with dementia and needs for a digital platform: interview study among family caregivers. JMIR Aging. Sep 05, 2023;6:e47577. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Brodaty H, Donkin M. Family caregivers of people with dementia. Dialogues Clin Neurosci. 2009;11(2):217-228. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Caregiving in the United States 2015. AARP and National Alliance for Caregiving. Jun 4, 2015. URL: https://aarp.org/ppi/info-2015/caregiving-in-the-united-states-2015 [accessed 2024-06-10]
  • George LK, Gwyther LP. Caregiver well-being: a multidimensional examination of family caregivers of demented adults. Gerontologist. Jun 1986;26(3):253-259. [ CrossRef ] [ Medline ]
  • Gurland BJ, Wilder DE, Lantigua R, Stern Y, Chen J, Killeffer EH, et al. Rates of dementia in three ethnoracial groups. Int J Geriatr Psychiatry. Jun 1999;14(6):481-493. [ Medline ]
  • Duran-Kiraç G, Uysal-Bozkir Ö, Uittenbroek R, van Hout H, Broese van Groenou MI. Accessibility of health care experienced by persons with dementia from ethnic minority groups and formal and informal caregivers: A scoping review of European literature. Dementia (London). Feb 2022;21(2):677-700. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Morgan E, Dyar C, Feinstein BA, Rose K. Sexual and gender minority differences in likelihood of being a caregiver and levels of caregiver strain in a sample of older adults. J Homosex. Aug 23, 2024;71(10):2287-2299. [ CrossRef ] [ Medline ]
  • Beinart N, Weinman J, Wade D, Brady R. Caregiver burden and psychoeducational interventions in Alzheimer's disease: a review. Dement Geriatr Cogn Dis Extra. Jan 2012;2(1):638-648. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • National organizations, programs, and other resources for caregivers. American Association of Retired Persons (AARP). Oct 28, 2019. URL: https://www.aarp.org/caregiving/local/info-2019/national-resources-for-caregivers.html [accessed 2024-06-09]
  • Bhargava Y, Baths V. Technology for dementia care: benefits, opportunities and concerns. J Global Health Rep. 2022;6:e2022056. [ CrossRef ]
  • Wójcik D, Szczechowiak K, Konopka P, Owczarek M, Kuzia A, Rydlewska-Liszkowska I, et al. Informal dementia caregivers: current technology use and acceptance of technology in care. Int J Environ Res Public Health. Mar 19, 2021;18(6):3167. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Astell AJ, Bouranis N, Hoey J, Lindauer A, Mihailidis A, Nugent C, et al. Technology and dementia: the future is now. Dement Geriatr Cogn Disord. 2019;47(3):131-139. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kinchin I, Edwards L, Adrion E, Chen Y, Ashour A, Leroi I, et al. Care partner needs of people with neurodegenerative disorders: What are the needs, and how well do the current assessment tools capture these needs? A systematic meta-review. Int J Geriatr Psychiatry. May 24, 2022;37(7). [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Guisado-Fernández E, Giunti G, Mackey LM, Blake C, Caulfield BM. Factors influencing the adoption of smart health technologies for people with dementia and their informal caregivers: scoping review and design framework. JMIR Aging. Apr 30, 2019;2(1):e12192. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gitlin LN, Marx K, Stanley IH, Hodgson N. Translating evidence-based dementia caregiving interventions into practice: state-of-the-science and next steps. Gerontologist. Apr 2015;55(2):210-226. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mittelman MS, Bartels SJ. Translating research into practice: case study of a community-based dementia caregiver intervention. Health Aff (Millwood). Apr 2014;33(4):587-595. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Foster MV, Sethares KA. Facilitators and barriers to the adoption of telehealth in older adults: an integrative review. Comput Inform Nurs. Nov 2014;32(11):523-535. [ CrossRef ] [ Medline ]
  • Napoles AM, Chadiha L, Eversley R, Moreno-John G. Reviews: developing culturally sensitive dementia caregiver interventions: are we there yet? Am J Alzheimers Dis Other Demen. Aug 2010;25(5):389-406. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bastoni S, Wrede C, da Silva MC, Sanderman R, Gaggioli A, Braakman-Jansen A, et al. Factors influencing implementation of eHealth technologies to support informal dementia care: umbrella review. JMIR Aging. Oct 08, 2021;4(4):e30841. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13(3):319-340. [ CrossRef ]
  • Milella F, Russo DD, Bandini S. AI-powered solutions to support informal caregivers in their decision-making: a systematic review of the literature. OBM Geriatr. 2023;7(4):1-11. [ CrossRef ]
  • Hird N, Osaki T, Ghosh S, Palaniappan SK, Maeda K. Enabling personalization for digital cognitive stimulation to support communication with people with dementia: pilot intervention study as a prelude to AI development. JMIR Form Res. Jan 16, 2024;8:e51732. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Garcia-Ptacek S, Dahlrup B, Edlund AK, Wijk H, Eriksdotter M. The caregiving phenomenon and caregiver participation in dementia. Scand J Caring Sci. Jun 2019;33(2):255-265. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ambegaonkar A, Ritchie C, de la Fuente Garcia S. The use of mobile applications as communication aids for people with dementia: opportunities and limitations. J Alzheimers Dis Rep. 2021;5(1):681-692. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Martínez-Alcalá CI, Pliego-Pastrana P, Rosales-Lagarde A, Lopez-Noguerola JS, Molina-Trinidad EM. Information and communication technologies in the care of the elderly: systematic review of applications aimed at patients with dementia and caregivers. JMIR Rehabil Assist Technol. May 02, 2016;3(1):e6. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Burdine JN, McLeroy K, Blakely C, Wendel ML, Felix MRJ. Community-based participatory research and community health development. J Prim Prev. Apr 2010;31(1-2):1-7. [ CrossRef ] [ Medline ]
  • Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based research: assessing partnership approaches to improve public health. Annu Rev Public Health. 1998;19:173-202. [ CrossRef ] [ Medline ]
  • Stoyanov SR, Hides L, Kavanagh DJ, Zelenko O, Tjondronegoro D, Mani M. Mobile app rating scale: a new tool for assessing the quality of health mobile apps. JMIR Mhealth Uhealth. Mar 11, 2015;3(1):e27. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rahimi B, Nadri H, Lotfnezhad Afshar H, Timpka T. A systematic review of the Technology Acceptance Model in Health informatics. Appl Clin Inform. Jul 2018;9(3):604-634. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Nadal C, Sas C, Doherty G. Technology acceptance in mobile health: scoping review of definitions, models, and measurement. J Med Internet Res. Jul 06, 2020;22(7):e17256. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13(3):319-340. [ CrossRef ]
  • Mendez KJW, Budhathoki C, Labrique AB, Sadak T, Tanner EK, Han HR. Factors associated with intention to adopt mHealth apps among dementia caregivers with a chronic condition: cross-sectional, correlational study. JMIR Mhealth Uhealth. Aug 31, 2021;9(8):e27926. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Blank S. The Four Steps to the Epiphany: Successful Strategies for Products that Win. New York, NY. John Wiley & Sons; 2020.
  • Python Software Foundation. Python. 2020. URL: https://www.python.org/downloads/release/python-390/ [accessed 2024-06-09]
  • Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. Sep 18, 2013;13:117. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mandracchia F, Llauradó E, Tarro L, Valls RM, Solà R. Mobile phone apps for food allergies or intolerances in app stores: systematic search and quality assessment using the Mobile App Rating Scale (MARS). JMIR Mhealth Uhealth. Sep 16, 2020;8(9):e18339. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Alzheimer's Disease 8 Dementia Screening Interview (AD8). Alzheimer's Association. URL: https://alz.org/media/documents/ad8-dementia-screening.pdf [accessed 2024-06-09]
  • Holden RJ, Karsh BT. The technology acceptance model: its past and its future in health care. J Biomed Inform. Feb 2010;43(1):159-172. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Cohen J. Statistical Power Analysis for the Behavioral Sciences. New York, NY. Academic Press; 2013.
  • Gallagher-Thompson D, Solano N, Coon D, Areán P. Recruitment and retention of latino dementia family caregivers in intervention research: issues to face, lessons to learn. Gerontologist. Feb 2003;43(1):45-51. [ CrossRef ] [ Medline ]
  • Shatenstein B, Kergoat M, Reid I. Issues in recruitment, retention, and data collection in a longitudinal nutrition study of community-dwelling older adults with early-stage Alzheimer's dementia. J Appl Gerontol. Mar 11, 2008;27(3):267-285. [ CrossRef ]
  • Ritchie J, Lewis J, Nicholls CM, Ormston R. Qualitative Research Practice: A Guide for Social Science Students and Researchers. Washington, DC. Sage Publications; 2013.
  • Fan Q, Hoang MN, DuBose L, Ory MG, Vennatt J, Salha D, et al. The Olera.care digital caregiving assistance platform for dementia caregivers: preliminary evaluation study. JMIR Aging. Apr 17, 2024;7:e55132. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mishra RK, Park C, Momin AS, Rafaei NE, Kunik M, York MK, et al. Care4AD: a technology-driven platform for care coordination and management: acceptability study in dementia. Gerontology. 2023;69(2):227-238. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gilson A, Gassman M, Dodds D, Lombardo R, Ford Ii JH, Potteiger M. Refining a digital therapeutic platform for home care agencies in dementia care to elicit stakeholder feedback: focus group study with stakeholders. JMIR Aging. Mar 02, 2022;5(1):e32516. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bults M, van Leersum CM, Olthuis TJJ, Siebrand E, Malik Z, Liu L, et al. Acceptance of a digital assistant (Anne4Care) for older adult immigrants living with dementia: qualitative descriptive study. JMIR Aging. Apr 19, 2024;7:e50219. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Boutilier JJ, Loganathar P, Linden A, Scheer E, Noejovich S, Elliott C, et al. A web-based platform (CareVirtue) to support caregivers of people living with Alzheimer disease and related dementias: mixed methods feasibility study. JMIR Aging. Aug 04, 2022;5(3):e36975. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gitlin LN, Bouranis N, Kern V, Koeuth S, Marx KA, McClure LA, et al. WeCareAdvisor, an online platform to help family caregivers manage dementia-related behavioral symptoms: an efficacy trial in the time of COVID-19. J Technol Behav Sci. 2022;7(1):33-44. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Fuller-Tyszkiewicz M, Richardson B, Little K, Teague S, Hartley-Clark L, Capic T, et al. Efficacy of a smartphone app intervention for reducing caregiver stress: randomized controlled trial. JMIR Ment Health. Jul 24, 2020;7(7):e17541. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rodriguez K, Fugard M, Amini S, Smith G, Marasco D, Shatzer J, et al. Caregiver response to an online dementia and caregiver wellness education platform. J Alzheimers Dis Rep. 2021;5(1):433-442. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Oostra DL, Vos WL, Olde Rikkert MGM, Nieuwboer MS, Perry M. Digital resilience monitoring of informal caregivers of persons with dementia for early detection of overburden: Development and pilot testing. Int J Geriatr Psychiatry. Jan 2023;38(1):e5869. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Guisado-Fernandez E, Caulfield B, Silva PA, Mackey L, Singleton D, Leahy D, et al. Development of a caregivers' support platform (Connected Health Sustaining Home Stay in Dementia): protocol for a longitudinal observational mixed methods study. JMIR Res Protoc. Aug 28, 2019;8(8):e13280. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Zafeiridi P, Paulson K, Dunn R, Wolverson E, White C, Thorpe JA, et al. A web-based platform for people with memory problems and their caregivers (CAREGIVERSPRO-MMD): mixed-methods evaluation of usability. JMIR Form Res. Mar 12, 2018;2(1):e4. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Werner NE, Brown JC, Loganathar P, Holden RJ. Quality of mobile apps for care partners of people with Alzheimer disease and related dementias: mobile app rating scale evaluation. JMIR Mhealth Uhealth. Mar 29, 2022;10(3):e33863. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Assfaw AD, Reinschmidt KM, Teasdale TA, Stephens L, Kleszynski KL, Dwyer K. Assessing culturally tailored dementia interventions to support informal caregivers of people living with dementia (PLWD): a scoping review. J Racial Ethn Health Disparities. 2024. [ CrossRef ] [ Medline ]
  • Lapid MI, Atherton PJ, Clark MM, Kung S, Sloan JA, Rummans TA. Cancer caregiver: perceived benefits of technology. Telemed J E Health. Nov 2015;21(11):893-902. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Efthymiou A, Middleton N, Charalambous A, Papastavrou E. The association of health literacy and electronic health literacy with self-efficacy, coping, and caregiving perceptions among carers of people with dementia: research protocol for a descriptive correlational study. JMIR Res Protoc. Nov 13, 2017;6(11):e221. [ FREE Full text ] [ CrossRef ] [ Medline ]

Abbreviations

AD: Alzheimer disease
AD/ADRD: Alzheimer disease and Alzheimer disease–related dementia
AI: artificial intelligence
DCPA: Dementia Care Personalization Algorithm
IRB: institutional review board
LLM: large language model
MARS: Mobile Application Rating Scale
TAM: Technology Acceptance Model

Edited by T Leung; The proposal for this study was peer reviewed by the Special Emphasis Panels of the Risk, Prevention and Health Behavior Integrated Review Group - Training and Education for Alzheimer's disease (AD) and AD-related dementias (ADRD) Caregivers on Financial Management and Legal Planning (National Institutes of Health, USA). See the Multimedia Appendix for the peer-review report; submitted 09.07.24; accepted 13.07.24; published 07.08.24.

©Logan DuBose, Qiping Fan, Louis Fisher, Minh-Nguyet Hoang, Diana Salha, Shinduk Lee, Marcia G Ory, Tokunbo Falohun. Originally published in JMIR Research Protocols (https://www.researchprotocols.org), 07.08.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on https://www.researchprotocols.org, as well as this copyright and license information must be included.

Proposal Development Services

NSF Education & Training Resources

The National Science Foundation has a variety of programs through which they support students. Below are some of the resources we have for the NSF NRT (graduate students), REU (undergraduate cohort), and S-STEM (individual undergraduate) programs.

National Science Foundation Research Traineeship Program (NSF NRT)

Supports interdisciplinary, evidence-based traineeships that advance ways for graduate students in research-based master's and doctoral degree programs to pursue a range of STEM careers.

About NSF's NRT program

NSF NRT Resources

  • 2024 NRT Program Overview Handout
  • 2024 NRT Infosession Recording
  • 2024 NRT Infosession Slides

Research Experiences for Undergraduates (REU)

Supports intensive research by undergraduate students in any NSF-funded area of research. REU Sites engage a cohort of students in research projects related to a theme. REU Supplements engage students in research related to a new or ongoing NSF research award.

About NSF's REU program

NSF REU Resources

  • NSF REU Checklist - Please contact us if you're interested in this funding opportunity and need an updated checklist

NSF Scholarships in Science, Technology, Engineering, and Mathematics Program (S-STEM)

Supports institutions of higher education to fund scholarships for academically talented low-income students and to study and implement a program of activities that support their recruitment, retention and graduation in STEM.

About NSF's S-STEM program


Summer Research Experiences for Undergraduates

Research, scholarship, + creative activities during the summer.

This research program, funded by Blugold Commitment Differential Tuition, is intended to facilitate undergraduate research, scholarship, and creative activities during the summer. Awards include funds for student stipends, faculty stipends, supplies, services, and research travel.

*If your project involves the study of your educational practice, consider applying for the Summer Scholarship of Teaching and Learning Scholarship.

The ideal project would involve a student in as many aspects of the scholarly process as possible. This will look different in each discipline, but might include:

  • Identification of a question, problem, or creative or scholarly goal
  • Development of a process or approach to answer, solve or achieve it
  • Carrying out the project

However, the project should be tailored appropriately by the mentor to match the developmental level of the student. The level of independent work performed by a first-year student will typically be quite different from that of a senior in their second year of work on a project.

Projects under this program should lead to the presentation of results at meetings of scholarly organizations and, where possible, provide baseline data for inclusion in proposals to extramural funding agencies. As a condition of the grant, students will be expected to present their results at the annual UW-Eau Claire Celebration of Excellence in Research and Creative Activity, Provost's Honor Symposium, or the UW System Symposium for Undergraduate Research and Creative Activity. Students are also encouraged to present findings at professional conferences or meetings in their disciplines or at the National Conference on Undergraduate Research (NCUR); travel funds for this purpose are available through the Student Travel for Presentation of Research Results program. In addition, it is not uncommon for a student to contribute to or co-author a manuscript for publication that results from their work. If the scholarly work will be ongoing, faculty are encouraged to use the results as baseline data in proposals to extramural funding agencies.

Related Programs That Support Undergraduate Research:

  • Diversity Mentoring Program
  • Student-Faculty Research Collaboration

Faculty collaborating with undergraduate researchers will be responsible for overseeing all aspects of the grant award. Funds for this program will not be available until July 1. All ORSP funds should be used by October 15 or earlier; please return any unspent funds to ORSP. The award consists of three primary categories:

Student Stipend: Summer grants are limited to the following student stipend amounts per proposal: up to $2,300 per student, with a maximum of $6,900 per project. The total stipend for any individual student working on multiple projects may be limited to $3,500 in the funding cycle, depending on the availability of funds. Faculty mentors can submit multiple projects requesting the full student stipend amount; awards will be determined based on the availability of funds.

Faculty Stipend:  Faculty serving as research mentors are eligible for a $2,300 stipend. Faculty involved in more than one project will only be eligible to receive a single stipend. Faculty applicants should check with their Department Chairs in the event that an overload needs to be requested if they are also teaching or being paid through the University for other work during the summer. Overload payments are not allowed if federal funds are involved in summer salary.

Supplies, Services, and Travel: In addition to the stipends, each collaborative project is eligible to receive up to $600 for supplies, services, and travel directly related to the project (not for travel to present at a conference; see Student Travel for the Presentation of Research Results).

Eligibility:

Faculty, academic staff, and undergraduate students engaged in research or other scholarly activities in all disciplines are encouraged to apply. Faculty and academic staff with 0.5 FTE or greater appointments for the next academic year, and UW-Eau Claire continuing undergraduate students planning to enroll at least half-time for the Fall semester, are eligible. A graduate student may also be involved in the project as a mentor to undergraduate students. The proposal may be submitted by any member of the collaborating team.

Deadline for Application: February 10.

*If the deadline falls on a weekend or holiday, the due date will be extended to the following Monday.

This deadline is when applications are due to chairs (or supervisors). Chairs/Supervisors are asked to ensure that proposals reach ORSP within one week of the posted deadline.

Application Process/Writing Guide:

The Summer Research Experiences for Undergraduates application (processed in BP Logix) may be initiated by the faculty mentor or a student.

Faculty mentors are encouraged to mentor students in proposal writing as appropriate to the situation. In particular, more senior students and students continuing on a project should be included in the proposal preparation process. Part of the mentoring process is to carefully review student-written proposals prior to submission. For students, the Center for Writing Excellence  can help at any stage of the writing process, from brainstorming and outlining to organizing arguments and polishing claims.

Go to the eform application for additional application information. 

Proposal Evaluation:

Primary evaluation will be based on the quality of the proposed student research experience. See the criteria used by project reviewers. Where projects are ranked equally, preference may be given to:

  • Tenure-track faculty, especially in the first three years
  • Projects that bring in new students
  • Ongoing projects in which student and project have progressed appropriately
  • Interdisciplinary projects
  • Projects from underrepresented disciplines
  • Projects that involve students in proposal-writing
  • Projects from faculty with a good track record in research mentoring
  • The first project from a faculty member over that member's second or third project in any proposal round
  • Projects that are developing promising groundwork for an extramural funding proposal

Recently Awarded Projects:

  • Summer Research Experience 2022

Office of Research and Sponsored Programs

Schofield Hall 17, 105 Garfield Avenue, Eau Claire, WI 54701, United States


Office of Strategic Research Development


Working together to work wonders by providing proposal development support to enhance UTMB's research enterprise.

Request our services, see announcements, and learn the process for receiving a letter of support.

June 6, 2024 • 12:36 p.m.

Research Administration proudly serves as a partner to our research faculty members, providing valuable support and resources from funding identification to project completion. This includes facilitating letters of support.

Please note that Natalia A. Glubisz, MHA, CRA, Associate Vice President for Research Administration, will be your first point of contact for any grant proposals that require a letter of support from the Provost’s Office. Documents are needed at least six weeks prior to the submission deadline (depending on the RFA) so that there is ample time for review by the numerous units involved. If any financial commitments are included in the letter, additional review time is needed.

To receive a letter of support, please send the following via email to Natalia:

  • Draft letter (please make sure there are no typos or formatting errors)
  • Full budget
  • Budget justification
  • Scope of Work

Once she reviews the documents, she will inform the Provost’s Office that the letter can be signed.

If you have any questions, please contact Natalia.



Moonshot Seed Grant Program

The CALS Research and Innovation Office (RIO) launched the inaugural Moonshot Seed Grant Program in the fall of 2023. The overall goal of the program is to mobilize college research to compete for various sources of large external grants and to accelerate the translation of basic discoveries into field application and commercialization in strategic areas.

Current applications requested

The CALS RIO and Cornell Agricultural Experiment Station are excited to jointly launch the 2024 CALS Moonshot Seed Grant Program and are requesting proposals aimed at harnessing Artificial Intelligence (AI) theory and technology to address critical challenges and drive impactful outcomes in the following strategic areas:

  • Controlled environmental agriculture (CEA)
  • Sustainable protein production (animal, plant, microbial and cellular proteins)
  • Improving health and preventing disease (soil, plant, animal and human)
  • Circular economy 
  • Empowering society to acquire and filter information

Proposals are due by 5 p.m. on September 30, 2024. Please be sure your submission aligns with all criteria outlined in the CALS 2024 Moonshot Seed Grant RFP. For inquiries or further information, please contact CALS RIO staff via email: cals_rio@cornell.edu.

Program Timeline 

  • RFP Release Date: August 20, 2024
  • Proposal Submission Deadline: September 30, 2024
  • Funding Notification Date: October 15, 2024
  • Funding Start and End Dates: November 1, 2024 – October 31, 2025

Proposal Application Categories

Four types of applications are solicited:

  • Proposal development award: $25,000-$50,000 each
  • Patent and (or) technology development award: $15,000 each
  • Product and (or) company development award: $25,000 each
  • Supporting community uses of AI data award: $10,000 - $20,000 each

Proposal development awards

The proposal development awards have two aims:

  • help investigators generate key preliminary data needed for submission of new and (or) renewal grant applications to major federal and nonfederal sponsors, and
  • enable a group of investigators (with the lead PI in CALS) to develop interdisciplinary, large grant applications.

Awardees are required to submit major external grant applications within one year.

Patent and (or) technology development awards

The patent and (or) technology development awards will support innovators in generating the necessary data to file a patent application and conduct essential testing of a prototype technology (e.g., seeds, software, procedures, etc.). These tests aim to attract potential licensees for the technology. Applicants for this grant must already have a developed prototype and/or have submitted an invention disclosure to the Cornell Center for Technology Licensing (CTL), which will assess the innovation's patentability. Successful applicants are expected to file a patent application and transfer (license) the technology within one year.

Product and (or) company development awards

The product and (or) company development awards will support the creation of startup companies by CALS faculty and new product development by these companies. Applicants must have recently established a startup company or will establish one within one year to commercialize Cornell-patented technology.

Community uses of AI Data awards

The community uses of AI Data awards will support the development and application of AI tools and methods to empower communities. These tools will help partner communities effectively manage the overwhelming flow of information from various sources. Specifically, the awards will fund projects that enable communities to sort, filter and evaluate information, allowing them to make informed, evidence-based decisions on economic, ecological, political and personal matters. Projects should include an evaluation of their success in achieving these goals. The results of this evaluation should then be used to apply for external grants that focus on community engagement, outreach or broader impact.  

Application instructions

Each application package should include the following items (font size 11 or greater, Arial or Times New Roman, single-spaced, with margins of at least 0.5 inches on all sides):

  • Cover Page (one page): Proposal title; PI and co-PI names, positions, and contact information; and the type of seed grant you are applying for.
  • Project Description (two pages): Include project background and rationale, specific aims, experimental design or work plan, research or technical approaches, and expected outcomes and their relevance to the target seed grant mission. Please elaborate on the key components or results that you intend to obtain from the proposed project to achieve your goal (e.g., submitting a large grant, acquiring a patent, licensing a technology, or creating a company). Explain how AI will be integrated into the proposed project and how interdisciplinary collaborations will enhance project outcomes.
  • Signed Budget and Justification (one page): This document must be signed by a Department Chair or Unit Director. Please list major project milestones, the timeline for each milestone, and the corresponding budget items (categories such as personnel, supplies, per diem, etc.). Briefly justify the budget.
  • Current Biosketch of PI and co-PI (NIH, NSF, or USDA format, one to three pages per person, highlighting relevant expertise and experience).
  • Please assemble all pages and submit your application as a PDF via this link.
  • Late submissions will not be accepted or considered.
  • Note: Projects selected to be funded jointly by Cornell AES and RIO will need to prepare and submit a Hatch project initiation form, which must be approved by USDA NIFA to access funding.

More Information

Eligibility of applicants

All CALS tenure-track and research, teaching, and extension faculty members are eligible to apply. Each applicant may serve as PI on one application and co-PI on another application (a maximum of 2 proposals per applicant).

Applications that bring together AI with other CALS research domains with extraordinary potential may also be considered.

Budget & project requirements

Award funds should be used for the proposed research and innovation activity, and their use is flexible at the PI’s discretion. Some restrictions apply; namely, the funds cannot be used for faculty (summer) salary, major equipment over $5,000, or lab renovations.

Awardees will be required to submit a semi-annual progress report and a final report to the RIO. Awardees are also expected to share their long-term or significant outcomes and accomplishments associated with the funded projects.

Evaluation criteria

Each application will be assessed based on: 

Innovation and integration of AI applications in the target research area Scientific merit, feasibility and clarity of project goals and approaches; probability of the project to attract large external grant support Potential of the project to innovate and commercialize new technology Potential of the project to make an immediate and significant community impact.


  22. 17 Research Proposal Examples (2024)

    A research proposal systematically and transparently outlines a proposed research project. The purpose of a research proposal is to demonstrate a project's viability and the researcher's preparedness to conduct an academic study. It serves as

  23. Research, service evaluation or audit?

    A well-used distinction is the following: Research is designed and conducted to generate new knowledge. Service evaluations are designed to answer the question "What standard does this service achieve?". Audits are designed to find out whether the quality of a service meets a defined standard. The HRA has devised a decision tool to help you ...

  24. How to Write a Research Proposal: (with Examples & Templates)

    Find out what a research proposal is and when you should write it. This article also has expert tips and advice on how to write one, along with research proposal examples.

  25. JMIR Research Protocols

    Development and Evaluation of a Web-Based Platform for Personalized Educational and Professional Assistance for Dementia Caregivers: Proposal for a Mixed Methods Study

  26. Large Language Model (LLM) Evaluation Research Grant

    Meta is pleased to invite university faculty to respond to this call for research proposals for LLM evaluations.

  27. NSF Education & Training Resources

    The National Science Foundation has a variety of programs through which they support students. Below are some of the resources we have for the NSF NRT (graduate students), REU (undergraduate cohort), and S-STEM (individual undergraduate) programs.

  28. Summer Research Experiences for Undergraduates

    This research program, funded by Blugold Commitment Differential Tuition, is intended to facilitate undergraduate research, scholarship, and creative activities during the summer. Awards include funds for student stipends, faculty stipends, supplies, services, and research travel.

  29. Process to Receive Letter of Support

    Natalia A. Glubisz, MHA, CRA, Associate Vice President for Research Administration, will be your first point of contact for any grant proposals that require a letter of support from the Provost's Office.

  30. Moonshot Seed Grant Proposals

    The CALS Research and Innovation Office (RIO) launched the inaugural moonshot seed grant program in the fall of 2023.