PUBH7036 Assessment 2: Health promotion intervention
evaluation plan
Due: 7 May 2021 @ 2pm via Turnitin
Weighting: 50%
Task:
This semester all three pieces of assessment will build upon each other, so it is suggested that you choose a
topic (health promotion area) that is of interest to you. You need to have your topic for the semester
approved by the course co-ordinator. For this semester, you will imagine that you are working as a Health
Promotion Officer in a not-for-profit agency, e.g. Cancer Council Queensland. You have been asked to
manage a public health issue, e.g. workplace mental health, or smoking cessation in vulnerable (pregnant
women) or hard-to-reach (regional/rural) populations. For this assessment you are required to develop a plan
for an impact or outcome evaluation of the intervention you proposed in assessment 1.
Elements:
Detail:
Purpose
To develop an evaluation plan for a health promotion intervention.
Text type
This assessment will be a report
Audience and your role
Your role: Health Promotion Officer
Audience: Health promotion team of the organisation where you work
Conditions
Word count: 2500 words (excluding diagrams and tables)
Font: 12 point, Times New Roman only
Margins: All margins at least 2.0cm
Line Spacing: 1.5 line spacing
Referencing style: APA style preferred. Include reference list at the end of
your assessment. Refer to UQ Library Style Guide
https://guides.library.uq.edu.au/referencing/apa7
(Use of Endnote for managing references is highly recommended)
Criteria
1. Critically appraise the evidence on the evaluation of health promotion
actions
2. Identify and apply the most relevant health promotion theories and
frameworks to guide health promotion implementation and evaluation
3. Understand evaluation frameworks and methodology for conducting a
health promotion evaluation
4. Formulate meaningful and realistic objectives to guide a comprehensive
evaluation
5. Construct comprehensive evaluation plans including formative, process,
impact and outcome evaluations
6. Formulate solutions to overcome challenges in health promotion
implementation and evaluation
7. Demonstrate effective oral and written communication skills
Submission
Via Turnitin submission link from Blackboard (‘Assessment’ –> ‘Assessment 2 – Evaluation Plan’)
You can submit more than once to check for originality. However, the most
recent submission before the due date and time will be the one that is
marked.
Late submissions without approval will incur a late penalty (see ECP)
Please ensure a receipt from Turnitin is sent to you via email – this is
confirmation of your submission.
https://my.uq.edu.au/information-and-services/manage-myprogram/exams-and-assessment/applying-extension
Assessments must be submitted as a Word document. Save the file with
the filename using this format: LastName_7036_Assessment2
Getting started:
• Reflection on feedback from assessment one (250 words)
You are required to demonstrate how you have reflected and incorporated the feedback you received on
assessment one.
• Description and rationale of health promotion intervention/action/program (500 words)
Briefly describe your Health Promotion intervention/action/program*, including a rationale for why it is
important to focus on your chosen topic. In describing your intervention/action/program plan, remember
to include information on your target group/s, community, modality etc as is relevant.
*choose which word most accurately reflects what you are writing the evaluation for – intervention, action,
program.
• Evaluation questions
Identify your key evaluation questions. You may find it useful to use a Program Logic Model, RE-AIM or
RE-AIM Quest, or some other framework as a guide here. There is no one right way of writing an
evaluation plan.
Below are some possible questions you may answer; this is not an exhaustive list, and there may be
other questions more specific to your proposal. You will need to consider what type of data you will need
to source in order to answer those questions.
• How much of the intervention/program will be delivered?
• How will the effects/impact of the intervention/program/actions be measured?
• What process will be used to ensure fidelity of the intervention/program/actions?
• How will the unanticipated positive impacts/outcomes be tracked and reported?
• What information will you need to collect to assess if levels of partnership and collaboration have increased?
• Evaluation Design (750 words)
In this section, you will need to demonstrate how you will go about answering the evaluation questions
you have outlined above, by describing the design of your evaluation. You may find it useful to have a
figure and/or tables to depict the design of the evaluation, including data sources and data collection
timeframes to help guide the reader.
Outline the following information to describe your evaluation design:
1. Methodology (qualitative, quantitative, mixed methods).
2. Methods (i.e. study design: interviews, focus-groups, RCTs, pre-post comparisons, observational).
The research methods are the procedures (or techniques) used to acquire knowledge/data.
3. Data sources: this involves consideration of the following about the data we collect and how it is
collected:
a. Data collection
b. Participants
c. Recruitment strategy
d. Procedures
e. Proposed analysis
f. Consideration of quality of data collection and analysis (e.g. consider the rigour and trustworthiness of the data and how this will be enhanced/managed).
• Resources to complete the evaluation (250 words)
You will need to consider the scope of your evaluation when deciding on resources for the evaluation.
Consider who will conduct the evaluation, how much time and money needs to be dedicated to the
evaluation you are proposing, what will be the outputs of the evaluation, what training might be required
(e.g. capacity building of community members), or what physical resources will be needed (computers,
online survey set up, etc).
• Timeline
Consider the time it will take to complete all the required tasks for your proposed evaluation plan. You
may represent this in a table or a diagram, such as a Gantt chart.
• Outcomes and Significance (750 words)
This final, but crucial, section brings together your learnings across all pieces of assessment throughout
the semester. It is intended to highlight the rationale for your intervention and the expected outcomes in
terms of their public health significance.
Briefly describe the importance of the problem to be researched (e.g. large numbers of young low-income
women smoke and there are no initiatives to help them stop smoking), the expected outcome of
your health promotion intervention/program/action when implemented (e.g. a reduction in smoking
among young low-income women), and the significance of the evaluation (e.g. a reduction in costs to
society, better health for young low-income women, community action to empower young women for a
particular health issue). Consider translation and dissemination, assuming your intervention is successful
in the real world. In other words, this section addresses: Why do we need this
intervention/program/action? What might we expect the evaluation to tell us? And why is that important
from a public health perspective?
PUBH7036 Assessment 2: Health promotion intervention evaluation plan (50%)
Marking Criteria & Standards
Grade bands (applied to each criterion below):
7 (≥85%): Demonstrated evidence of exceptional achievement of course learning outcomes.
6 (75-84%): Demonstrated evidence of advanced achievement of course learning outcomes.
5 (65-74%): Demonstrated evidence of proficient achievement of course learning outcomes.
4 (50-64%): Demonstrated evidence of functional achievement of course learning outcomes.
3 (45-49%): Demonstrated evidence of developing achievement of course learning outcomes.
2-1 (≤44%): Absence or minimal evidence of achievement of course learning outcomes.
Feedback (5%)
7: Excellent reflection on feedback and excellent description of how the feedback was incorporated.
6: Very good reflection on feedback and very good description of how the feedback was incorporated.
5: Good reflection on feedback and good description of how the feedback was incorporated.
4: Satisfactory reflection on feedback and satisfactory description of how the feedback was incorporated.
3: Some reflection on feedback and some description of how the feedback was incorporated.
2-1: Very limited reflection on feedback and very limited description of how the feedback was incorporated.
Description and Rationale (10%)
7: Excellent rationale and description for the health promotion intervention.
6: Very good rationale and description for the health promotion intervention.
5: Good rationale and description for the health promotion intervention.
4: Satisfactory rationale and description for the health promotion intervention.
3: Some rationale and description for the health promotion intervention.
2-1: Very limited rationale and description for the health promotion intervention.
Aims, Objectives and Evaluation Questions (10%)
7: Excellent, succinct aims, objectives and evaluation questions.
6: Very clear aims, objectives and evaluation questions.
5: Clear, relevant aims, objectives and evaluation questions.
4: Mostly clear and relevant aims, objectives and evaluation questions.
3: Poor or somewhat irrelevant aims, objectives and evaluation questions.
2-1: Very poor aims, objectives and evaluation questions.
Evaluation Design: Methodology, Methods, Data Sources (25%)
7: Excellent, clear, and concise evaluation design, methodology, methods. Excellent congruence between evaluation questions and design.
6: Very clear, relevant evaluation design, methodology, methods. Very clear congruence between evaluation questions and design.
5: Clear, relevant evaluation design, methodology, methods. Clear congruence between evaluation questions and design.
4: Mostly clear and relevant evaluation design, methodology, methods. Somewhat clear congruence between evaluation questions and design.
3: Poor or somewhat confused evaluation design, methodology, methods with some critical errors. Poor congruence between evaluation questions and design.
2-1: Very poor evaluation design, methodology, methods with critical errors. Little congruence between evaluation questions and design.
Resources and Timeline (15%)
7: Excellent and comprehensive consideration of resources and timeline.
6: Very clear and relevant consideration of resources and timeline.
5: Clear and relevant consideration of resources and timeline.
4: Mostly clear and relevant consideration of resources and timeline.
3: Poor or somewhat inadequate consideration of resources and timeline.
2-1: Very poor consideration of resources and timeline.
Outcomes and Significance (20%)
7: Excellent discussion of evaluation outcomes and public health significance. Excellent insight shown.
6: Very good discussion of evaluation outcomes and public health significance. Very good insight shown.
5: Good discussion of evaluation outcomes and public health significance; some insight shown.
4: Satisfactory discussion of evaluation outcomes and public health significance.
3: Some basic discussion of evaluation outcomes and public health significance.
2-1: Little or no discussion of evaluation outcomes and public health significance.
Adheres to conventions of written English (word choice, grammar, spelling and punctuation) (10%)
7: Error-free grammar, expression and spelling. Excellent paragraph/sentence structure and compelling use of vocabulary.
6: Error-free grammar, expression and spelling. Very good paragraph/sentence structure.
5: Mostly correct grammar, expression and spelling. Good paragraph/sentence structure.
4: Generally correct grammar, with some spelling or expression errors. Sound paragraph/sentence structure.
3: Some grammar, expression and spelling errors. Confused paragraph/sentence structure.
2-1: Many grammar, expression and spelling errors. Very difficult to read.
Referencing: Adheres to referencing conventions, in text and in reference list (5%)
7: Excellent selection of high-quality, relevant peer-reviewed articles. Meticulously correct referencing.
6: Good selection of relevant, high-quality peer-reviewed articles. Very minor errors in referencing.
5: Appropriate selection of peer-reviewed articles. Minor errors in referencing.
4: Acceptable selection of peer-reviewed articles. Minor errors in referencing.
3: Inappropriate selection of articles. Consistent errors in referencing.
2-1: Inappropriate selection of articles and minimal in-text referencing and/or many errors in referencing.
Evidence and Evaluation Guidance Series
Population and Public Health Division
Developing and Using
Program Logic:
A Guide
CENTRE FOR EPIDEMIOLOGY AND EVIDENCE
NSW Ministry of Health
Locked Mail Bag 961
North Sydney NSW 2059
Copyright © NSW Ministry of Health 2017
This work is copyright. It may be reproduced in whole or in
part for study and training purposes subject to the inclusion
of an acknowledgement of the source. It may not be
reproduced for commercial usage or sale. Reproduction for
purposes other than those indicated above requires written
permission from the NSW Ministry of Health.
SHPN (CEE) 170069
ISBN 978-1-76000-601-3 (Print)
ISBN 978-1-76000-602-0 (Online)
Contributors to the development of the guide:
Danielle Campbell, Barry Edwards, Beth Stickney,
Teresa Wozniak, Andrew Milat
Suggested citation:
Centre for Epidemiology and Evidence. Developing and Using
Program Logic: A Guide. Evidence and Evaluation Guidance
Series, Population and Public Health Division. Sydney: NSW
Ministry of Health, 2017.
Further copies of this document can be downloaded from
the NSW Health website www.health.nsw.gov.au
April 2017
Contents
1. Introduction
2. What is program logic?
3. Why develop a program logic model?
4. When to develop program logic
5. Developing program logic
5.1 Getting started
5.2 Developing a program logic model
5.3 Representing program logic
6. How can program logic be used?
6.1 Using program logic to plan a program evaluation
7. Conclusion
8. Useful resources
9. References
1. Introduction
NSW Health is committed to the development of
evidence based policies and programs and the ongoing
review and evaluation of existing programs. This guide
has been developed to support NSW Health staff in the
development of program logic and its use in informing
population health program planning, implementation
and evaluation.
This guide promotes a planned and structured approach to developing program logic and includes information on:
• the meaning and purpose of program logic
• when and how to develop program logic
• how program logic can be used, with a particular focus on planning an evaluation.
2. What is program logic?
A program logic model is a schematic
representation that describes how a program* is
intended to work by linking activities with outputs,
intermediate impacts and longer term outcomes.
Program logic aims to show the intended causal
links for a program.
Several different terms are used to describe program logic,
such as program theory, logic model, theory of change, results
chain and intervention logic.
* The NSW Government Program Evaluation Guidelines define a program as “a set of activities managed together over a sustained period of time that aim to achieve an
outcome for a client or client group” (p. 4). The Guidelines use ‘program’ to refer to policy, strategy, initiative, service or project. This guide also uses the term ‘intervention’
as an alternative to ‘program’.1
3. Why develop a program logic model?
Using a program logic approach to describe a program has many benefits. For example:
• Having an agreed program logic model supports a systematic and integrated approach to program planning, implementation and evaluation.2
• A program logic model tells the story of how the program is proposed to work. By clarifying activities and intended outcomes, a program logic model illustrates the change processes underlying a program.3,4
• Program logic makes program assumptions explicit and enables testing of how these assumptions are supported by evidence.4,5
• Program logic is a useful tool for engaging stakeholders in program planning and evaluation, and clearly communicating with stakeholder audiences about program concepts.2 A program logic model agreed with key stakeholders can facilitate common language about the program and build a shared understanding of how it will work.3,5
• Program logic provides a framework for evaluating a program by identifying areas where evaluation will be most important, and informing the development of meaningful evaluation questions.2-4
4. When to develop program logic
Ideally, program logic should be developed in the
program planning stage. This allows stakeholders to
articulate the desired program impacts and outcomes,
and clarify how the intervention will achieve these. Note
that program logic does not replace a program plan, but
rather informs it; a program plan generally has more
detailed steps and tasks.6
The program logic may be reviewed and refined at different
times, including during implementation and as part of
planning a program evaluation.
Program logic can also be developed for an existing program,
although this may be more difficult, particularly where the
program is complex or has multiple unrelated components.
Fitting a program logic model onto an existing program can
enable stakeholders to consider whether the outputs and
impacts identified through the program logic match what the
program is delivering, and amend the program implementation
accordingly.6
5. Developing program logic
5.1 Getting started
Developing program logic is a participatory and iterative exercise. The NSW Government Evaluation Toolkit describes the process as partly analytical and partly consultative.4
Analytically, it involves review of the program to identify aims, objectives, activities and intended impacts/outcomes, and refining and assembling these statements into a causal chain that shows how the activities are assumed to contribute to short-term and intermediate impacts and, ultimately, to longer term outcomes. During this stage it is important to closely examine and question the assumptions underlying the program components and causal chain so that any unintended or unforeseen consequences can be anticipated,7 and outcomes can be fairly attributed to the program.
Consultatively, the process of developing program logic should involve working with a range of stakeholders to draw on their understanding of the program and its impacts/outcomes. Engaging stakeholders also has the benefit of encouraging ownership of the final program logic model.4 Decisions about which stakeholders to involve and the nature of their involvement will depend on how the program logic will be used. For instance, if the program logic is intended to develop an understanding of what is needed to make a program work, it is important to involve program clients and partner agencies; if the program logic will be used to design an evaluation, program staff and management involvement is important.8
Program logic development is often undertaken in a workshop format to engage relevant stakeholders. Alternatively, or in addition to a workshop, structured interviews may be conducted with stakeholders to elicit their understanding of the problem being addressed by the program, its causes and consequences, and how the program will contribute to addressing the problem.8
5.2 Developing a program logic model
There are many ways to develop program logic. The BetterEvaluation website lists several approaches, including articulating ‘mental models’ by talking with key informants individually or in groups about how they understand an intervention works; SWOT analysis to assess the Strengths, Weaknesses, Opportunities and Threats of a program to determine how it might best be implemented; or ‘backcasting’.9
Backcasting is a useful approach that involves identifying the
long-term outcomes of a program and subsequently working
backwards to identify the necessary steps required to achieve
these outcomes. The benefit of backcasting, compared to
approaches involving forecasting, is that it allows stakeholders
to consider what is needed to create the future, rather than
thinking about what is currently happening and trying to
predict the future.6
Suggested steps for developing a program logic model using
the backcasting approach are outlined as follows.
Step 1: Develop an outcomes hierarchy
An outcomes hierarchy, sometimes referred to as an
outcomes chain, shows the assumed cause-and-effect
relationships between program outcomes, from immediate
and short-term impacts to long-term outcomes. The
outcomes hierarchy is the centrepiece of the program logic
as it provides a basis for thinking about how the program
needs to function to achieve the desired outcome.8
Funnell and Rogers suggest a five-step process for
developing an outcomes hierarchy:8
i. Prepare a list of possible outcomes. It may be helpful
to think of outcomes in relation to the problem that
the program aims to address, including the causes and
consequences of the problem. For example, if the problem
is consumption of unhealthy food among children, the
intended outcomes might include “increased fruit and
vegetable intake” and “reduced consumption of energy
dense food”. If one of the causes of unhealthy food
consumption is lack of availability of healthy food, then
another intended outcome might be “increased healthier
choices in food outlets”. If a known consequence of
unhealthy food consumption is overweight, the intended
outcomes might include reduction in overweight. All
important outcomes – including any unintended outcomes
that can be predicted – should be documented.
ii. Cluster the outcomes. Outcomes that are related
(e.g. all those referring to desired changes in program
participants’ knowledge) should be grouped and given
a simple working title (e.g. “children’s knowledge about
healthy food increases”).
iii. Arrange the outcomes in a chain of ‘if-then’ statements. The if-then approach is a useful way of ensuring logical thinking when ordering outcomes in a chain from short-term impacts to long-term outcomes (e.g. “if there are healthier choices in food outlets, then children will be more likely to purchase healthy food”); a brief illustrative sketch of this ordering follows this list. It may be
helpful to identify the highest and lowest levels of outcome
first to anchor the remainder of the chain. The lowest
level in the outcomes chain is the first point at which the
program has some sort of effect on the target group, such
as ensuring participation in the program. The highest levels
are the ultimate health outcomes desired, which often
relate to reducing a problem and its consequences.
iv. Identify any feedback loops, i.e. where a higher level
outcome affects (or ‘feeds back’ into) a lower level one.
This can be important when participation in a program
(a lower level outcome) is affected by the success of the
program in achieving anticipated impacts and outcomes
(higher level outcomes) (e.g. a program that improves the
health of clients may consequently attract more clients).
v. Validate the outcomes hierarchy with key
stakeholders.
An example of an outcomes hierarchy is presented at Figure 1.
Note that the number of steps in outcomes hierarchies can vary.
Ultimately, the outcomes should correspond to program aims
and objectives.
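To make the if-then ordering in step (iii) concrete, the short Python sketch below (not part of the original guide) encodes the unhealthy-food example as an ordered list of outcomes and prints each link in the chain so its plausibility can be checked; the specific outcome wording and ordering are illustrative assumptions only.

# A minimal sketch of step (iii): ordering clustered outcomes from lowest to
# highest level and reading them back as 'if-then' statements. The outcome
# wording follows the guide's unhealthy-food example; the list itself is a
# hypothetical illustration.
outcomes_chain = [
    "children participate in the program",                 # lowest level
    "children's knowledge about healthy food increases",
    "there are healthier choices in food outlets",
    "children purchase and eat more healthy food",
    "consumption of energy dense food is reduced",
    "rates of overweight among children are reduced",      # highest level
]

# Read the chain back as if-then statements to check that each link is plausible.
for lower, higher in zip(outcomes_chain, outcomes_chain[1:]):
    print(f"If {lower}, then {higher}.")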
Figure 1. Example of an outcomes hierarchy for the NSW Implementation of the Healthy Workers Initiative
Outcomes hierarchy, from highest to lowest level:
• Reduction in the risk of chronic disease in adults in NSW
• Individual behavioural change in adults in paid employment in NSW: changes in weight status; changes in dietary (fruit and vegetable), smoking, hazardous alcohol and physical activity behaviours
• Changes in organisational policies and practices in workplaces to be supportive of healthy behaviours
• Use of Get Healthy@Work Service; use of Get Healthy Coaching Service
• Changes in attitudes, knowledge, commitment to workplace health promotion
• Awareness of Services (Communication & Marketing Strategy) (Get Healthy Workforce Strategy); awareness of targeted communication messages (Communication & Marketing Strategy)
Reproduced from St George and King (2011)10
Step 2: Identify the deliverables
The next step in developing a program logic model is to
consider the program deliverables. These include:
i. Outputs – the products, goods or services that need to be provided to program participants in order for the short-term outcomes (impacts) to be achieved.
ii. Activities – the essential actions required to produce
program outputs.
Step 3: Identify assumptions and review the
program logic
Once the model is complete, the logic underlying the
activities, outcomes and causal links should be reviewed,
and any assumptions identified. Assumptions include
beliefs about the program, how it will work, and program
participants (e.g. how they learn, how they behave, their
motivations).5
Figure 2 summarises the steps in constructing a program logic
using the backcasting approach.
Figure 2. Steps in constructing a program logic using the backcasting approach
STEP 1: Develop an outcomes hierarchy
• Long-term outcomes – What are the long-term outcomes of the program? (e.g. reduced rates of overweight and obesity)
• Intermediate impacts – What intermediate impacts need to occur before the long-term outcomes are reached? (e.g. reduced consumption of energy dense food)
• Short-term impacts – What short-term impacts are required in order to achieve the intermediate impacts? (e.g. increased healthier choices in food outlets)
STEP 2: Identify the deliverables
• Outputs – What products and services need to be delivered to achieve the short-term impacts? (e.g. fact sheets distributed, food outlet managers attend training, dietitian provides proactive and reactive support)
• Activities – What activities need to be undertaken to deliver the outputs? (e.g. develop fact sheets, develop and promote training for food outlet managers, enlist dietitian support)
• Inputs – What resources are needed to conduct the activities? (e.g. staff, funding, partnerships)
STEP 3: Identify assumptions
• What assumptions are made about the link between intermediate impacts and long-term outcomes? Are these assumptions supported by evidence?
• What assumptions are made about the link between short-term impacts and intermediate impacts? Are these assumptions supported by evidence?
• What assumptions are made about the link between outputs and short-term impacts? Are these assumptions supported by evidence?
• What assumptions are made about the link between activities and outputs? Are these assumptions supported by evidence?
• What assumptions are made about the link between inputs and activities? Are these assumptions supported by evidence?
Adapted from the Evaluation Toolbox.6 Examples adapted from Wiggers et al. (2013)11
5.3 Representing program logic
An effective program logic model should:8
• Present a coherent causal model that explains how the program contributes to the impacts and outcomes;
• Be logical, so that the direction of expected change is clearly depicted and the sequential progression is plausible;
• Communicate clearly by focusing on key elements, using design features (e.g. symmetry, alignment) and ensuring readability.
Program logic can be represented in several ways including logframe (a matrix that maps program aims, objectives, activities and outputs against relevant indicators, data sources and assumptions); realist matrix (a table describing program resources, how they interact with the ‘object’ being changed, contextual variables, and anticipated outcomes); and the pipeline model.9
The pipeline model (also known as a ‘results chain’) is commonly used for health programs. It depicts a program logic as a linear process with inputs and activities at the front (left) and outcomes at the end (right) (Figure 3).8
Figure 3. Pipeline program logic model
Inputs → Activities → Outputs → Impacts → Outcomes
A simple example of a pipeline logic model is presented at Figure 4. Two examples that link specific activities to specific outputs and impacts are presented at Figures 5 and 6.
Figure 4. Example of a pipeline program logic model (i)
Program aim: To reduce the prevalence of smoking among Local Health District clients
• Inputs: new policy; funding over 2 years; staff
• Activities: staff training package developed; training sessions delivered to staff; client resources developed
• Outputs: smoking cessation intervention delivered to clients; clients provided with resources; clients interested in quitting referred to cessation support services
• Impacts: increased awareness of cessation support services; increased use of cessation support services; quit attempts initiated; quit attempts successful
• Outcomes: reduced smoking rate; improved health
Figure 5. Example of a pipeline program logic model (ii)
Program aim: To reduce rates of smoking among adolescents
• Inputs: new policy; funding over 5 years; community partnerships
• Activities: social marketing campaign developed; school-based smoking education program developed; policy and regulatory action
• Outputs: adolescents are exposed to anti-smoking messages; adolescents participate in smoking education program; restrictions on tobacco sales to minors are enforced
• Short-term impacts: adolescents have increased knowledge of the harmful effects of smoking; adolescents have decreased access to tobacco
• Intermediate impacts: de-normalisation of smoking
• Outcomes: lower rates of smoking initiation; reduced smoking rate; increased quit rates; improved health
Figure 6. Example of a pipeline program logic model (iii)
Program aim: To decrease rates of vaccine-preventable diseases among adolescents
• Inputs: updated immunisation schedule; funding for purchase of vaccines and service provision; partnerships with education sector
• Activities: information developed and distributed through school newsletters; parent information kits and consent forms developed and distributed; operational protocols developed; regular meetings held with education sector stakeholders; vaccination transport and administration equipment purchased; Authorised Nurse Immunisers employed
• Outputs: adolescents and parents are exposed to information about vaccination and school vaccination clinics; school vaccination clinics scheduled; vaccines administered to adolescents in accordance with operational protocols; quality assurance audits conducted
• Short-term impacts: increased knowledge of vaccine-preventable diseases and the benefits of vaccination; parents provide signed consent for vaccination in school clinics; school-based vaccination program endorsed by education sector; improved monitoring of vaccine safety
• Intermediate impacts: increased vaccination coverage rates among adolescents
• Outcomes: reduced rates of vaccine-preventable diseases; improved health
Adapted from Meijer and Campbell-Lloyd (2014)12
Other examples of program logic models can be viewed in
published reports11,13,14 and online (e.g. at the Community Tool
Box).
6. How can program logic be used?
There are several ways in which program logic can support action throughout the program planning, implementation and evaluation cycle. For example:5
• Used as a planning tool, program logic can clarify the path to get from where a program is to where stakeholders want it to be, including how investments are linked to activities to achieve the desired results.
• Program logic provides a simple, clear graphic representation that helps communicate the intent of a program to stakeholders, including program staff and funders.
• Program logic can inform the development of a detailed management plan to guide program implementation and to help monitor operations, processes and functions.
• A program logic model can facilitate effective evaluation by helping to establish what to evaluate, determine key evaluation questions, and identify relevant information to address those questions.8
6.1 Using program logic to plan a
program evaluation
Program logic can support planning for an evaluation in
the following ways:
Determining what to evaluate
Program logic can help to identify the most important aspects
of a program to be evaluated.5 This includes not only the
intended impacts or outcomes of a program, but also aspects
of program implementation that are critical to the achievement
of these as described in the links between components of the
logic model.8
Identifying key evaluation questions
Evaluation questions serve to focus an evaluation and
provide direction for the collection and analysis of data.15 Key
evaluation questions should ‘fall out’ of the main components
of a program logic model and the assumptions underpinning
the components.16 This helps to convert very broad questions
of interest (e.g. “Was the program effective?”) into more
specific questions relating to particular elements of the causal
pathway (e.g. “How effective was the program in helping
smokers to initiate quit attempts?”).8 Table 1 lists some
examples of evaluation questions drawn from program logic
components.
Table 1. Examples of evaluation questions drawn from program logic components
• Program logic component – Program activity: Training sessions delivered to staff
  Evaluation questions (Process): How many training sessions were delivered? How many staff members participated in the training? How satisfied were participants with the training?
• Program logic component – Anticipated impact: Quit attempts initiated
  Evaluation questions (Impact): What proportion of clients initiated quit attempts? Were there differences in quit attempts across client groups?
• Program logic component – Anticipated outcome: Reduced smoking rate
  Evaluation questions (Outcome): How have smoking rates changed over time? In which population groups have smoking rates changed?
In cases where program stakeholders have diverse and
disparate information needs (or ‘shopping lists’), program
logic is particularly useful for formulating sensible sets of
hierarchical questions.8
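As an illustration only (not part of the guide), the short Python sketch below records Table 1's example mapping as a simple data structure, so that each evaluation question stays tied to the program logic component and question type it tests; the structure itself is a hypothetical way of keeping such a set of hierarchical questions organised.

# A minimal sketch: Table 1's logic-component-to-question mapping as a dict.
logic_to_questions = {
    ("Program activity", "Training sessions delivered to staff"): {
        "type": "Process",
        "questions": [
            "How many training sessions were delivered?",
            "How many staff members participated in the training?",
            "How satisfied were participants with the training?",
        ],
    },
    ("Anticipated impact", "Quit attempts initiated"): {
        "type": "Impact",
        "questions": [
            "What proportion of clients initiated quit attempts?",
            "Were there differences in quit attempts across client groups?",
        ],
    },
    ("Anticipated outcome", "Reduced smoking rate"): {
        "type": "Outcome",
        "questions": [
            "How have smoking rates changed over time?",
            "In which population groups have smoking rates changed?",
        ],
    },
}

# Print each component with its evaluation question type and questions.
for (component_type, component), detail in logic_to_questions.items():
    print(f"{component_type}: {component} -> {detail['type']} evaluation")
    for q in detail["questions"]:
        print(f"  - {q}")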
Identifying information needed to answer evaluation
questions
Program logic can help in identifying the most appropriate
information to answer evaluation questions by assisting
in defining what constitutes program ‘success’. Success
may be described in terms of the attributes of a program
activity, output or impact/outcome (e.g. quality, quantity,
reach, timeliness, cost), and/or how it compares with agreed
standards or targets.8
For example, an anticipated program outcome of “workers
make sustained healthy lifestyle behaviour change” may
be described in relation to attributes of the workers (e.g.
age, gender), the specific changes in behaviour desired (e.g.
physical activity, diet), and how ‘sustained’ is defined (e.g.
number of months post-intervention). In this example success
might be interpreted as “increased vegetable intake at
12-months post-intervention compared with baseline among
male workers aged 30 to 45 years”.
The success criteria will determine what information is relevant
and useful for answering the evaluation questions, which
in turn will inform the selection of data collection sources,
methods and instruments.5
Helping to decide when to collect data
Because a program logic model depicts the expected sequence
of activities, outputs, impacts and outcomes, it can inform
decisions about the appropriate time to assess processes,
shorter-term impacts and longer-term outcomes. For example,
program logic can help to identify critical preconditions for
achieving outcomes, so that an outcome evaluation is not
undertaken until it is clear that the preconditions have been
met.8
Providing a mechanism for ensuring acceptability among
stakeholders
A program logic model can help to ensure program
stakeholders’ views concerning specific issues are kept in
perspective in an evaluation by clarifying how these views
relate to the overall program.8
7. Conclusion
This guide aims to support the development of program
logic models and their use in planning program
evaluations.
There are many ways to develop program logic models. This
guide suggests an analytical and consultative approach that
may be helpful for NSW Health staff involved in population
health program planning, implementation and evaluation.
Ideally program logic should be developed in the program
planning stage, although it can also be developed for an
existing program.
A program logic model can support action throughout the
program planning, implementation and evaluation cycle.
Program logic can be particularly useful for facilitating a
shared understanding of how a program is proposed to
work and what it is expected to achieve, and for helping to
focus a program evaluation by identifying what to evaluate,
key evaluation questions, and information to address those
questions.
8. Useful resources
• NSW Government Evaluation Toolkit
http://www.dpc.nsw.gov.au/programs_and_services/policy_makers_toolkit/evaluation_toolkit
• NSW Agency for Clinical Innovation Understanding Program Evaluation: An ACI Framework
www.aci.health.nsw.gov.au/__data/assets/pdf_file/0008/192437/Framework-Program-Evaluation.pdf
• BetterEvaluation: Develop Programme Theory
http://betterevaluation.org/en/rainbow_framework/define/develop_programme_theory
• Victorian Department of Human Services: Understanding program logic
http://docs.health.vic.gov.au/docs/doc/Understanding-program-logic
9. References
1. NSW Department of Premier and Cabinet. NSW Government Program Evaluation Guidelines. Sydney: NSW Department of Premier and Cabinet; 2016.
2. WK Kellogg Foundation. Logic Model Development Guide. Battle Creek, Michigan: WK Kellogg Foundation; 2004.
3. NSW Agency for Clinical Innovation. Understanding Program Evaluation: An ACI Framework. Sydney: Agency for Clinical Innovation; 2013.
4. NSW Department of Premier and Cabinet. Evaluation Toolkit. Available online: https://www.dpc.nsw.gov.au/tools-and-resources/evaluation-toolkit/
5. Holt L. Understanding program logic. Victorian Government Department of Human Services; 2009. Available online: http://docs.health.vic.gov.au/docs/doc/Understanding-program-logic
6. National Centre for Sustainability. Evaluation Toolbox: Program Logic. Swinburne University of Technology. Available online: www.evaluationtoolbox.net.au
7. Parker R, Lamont A. Evaluating Programs (CAFCA Resource Sheet). Australian Institute of Family Studies; 2010. Available online: https://www3.aifs.gov.au/cfca/publications/evaluating-programs
8. Funnell SC, Rogers PJ. Purposeful Program Theory: Effective Use of Theories of Change and Logic Models. Hoboken: Wiley; 2011.
9. BetterEvaluation. Develop Programme Theory/Logic Model. Available online: http://betterevaluation.org/plan/define/develop_logic_model
10. St George A, King L. Evaluation Framework for NSW Implementation of Healthy Workers Initiative (Revised June 2011). Sydney: Physical Activity Nutrition and Obesity Research Group, University of Sydney; 2011.
11. Wiggers J, Wolfenden L, Campbell E, Gillham K, Bell C, Sutherland R, et al. Good for Kids, Good for Life, 2006-2010: Evaluation Report. Sydney: NSW Ministry of Health; 2013.
12. Meijer D, Campbell-Lloyd S. NSW School Vaccination Program. Health Protection Report, NSW: Communicable Diseases. December 2014. Available online: http://www.health.nsw.gov.au/hpr/Pages/201412.aspx
13. Centre for Oral Health Strategy NSW. Smoking Cessation Brief Intervention at the Chairside: The Role of Public Oral Health Services. Sydney: NSW Ministry of Health; 2013.
14. Cultural and Indigenous Research Centre Australia. Culture Health Communities Activity Challenge Evaluation. Sydney: NSW Ministry of Health; 2014.
15. Owen JM. Program evaluation: forms and approaches. 3rd edition. New York: The Guilford Press; 2007.
16. Roughley A. Developing and Using Program Logic in Natural Resource Management: User Guide. Canberra: Commonwealth of Australia; 2009.
PUBH7036
Impact and Outcome Evaluation for an Online Anxiety Prevention Program for
Adolescents in South East Queensland Schools
Description and rationale for the health promotion program
The health promotion research team from the University of Queensland (UQ) Public Health Unit
have developed a digital school-based program in partnership with headspace to target anxiety in
adolescents aged 13-17 years. The program was informed by a literature review which
investigated existing online youth anxiety programs in Australia. The program will be piloted
across South East Queensland (SEQ) in 2019-20, with prospects to replicate in other regions.
Students are required to complete one 30 minute session/week during class for six weeks, plus
two booster sessions at one and three months. The self-administered modules utilise principles of
cognitive behavioural therapy, along with a suite of online resources that can also be accessed
outside of school hours through the website or app. At commencement of the program, students
select the program stream they most closely identify with: 13-15 year olds, 16-17 year olds,
LGBTIQ+ individuals and Aboriginal and Torres Strait Islander adolescents. This ensures the
content is both age and culturally appropriate. Outreach teams from the local headspace centre
will conduct two workshops on-site for real-life applications of program modules.
Whilst the evidence individually supports digital CBT, school-based programs, digital mental
health programs, and engagement of external health providers, there is limited research on the
combined effect of these program components (Calear et al., 2016; Calear, Christensen,
Mackinnon, Griffiths, & O’Kearney, 2009; Manicavasagar et al., 2014; Spence, Donovan,
March, Kenardy, & Hearn, 2017). With increasing rates of anxiety disorders and anxiety-related
deaths by suicide in Australian youth, there is a growing need to provide highly accessible and
efficacious supports to prevent escalation of anxiety symptoms into severe mental health
conditions (Australian Bureau of Statistics, 2018; Australian Institute of Health and Welfare
(AIHW), 2016).
Evaluation aim
Determine if a school-based digital mental health program can effectively reduce anxiety in
adolescents aged 13-17 years in South East Queensland.
Proximal evaluation objectives
• Determine if an online mental health program can effectively reduce anxiety symptoms in adolescents through a school setting
• Identify barriers and enablers to the success of such a program
• Develop recommendations for future implementation and replication in other headspace catchments
• Generate a short report for the youth mental health sector on the implications for practice
Distal evaluation objectives
• Make mental health supports more accessible for adolescents in Australia
• Prevent the escalation of anxiety symptoms for adolescents at risk of mental health conditions
• Reduce the severity and frequency of anxiety symptoms for adolescents experiencing anxiety
• Challenge damaging assumptions and practices in mental health to allow for more innovative approaches to support youth
Evaluation questions
The impact and outcomes identified in the program logic (Figure 1) were used to aid the
development of the evaluation questions.
1. Did the program increase access to evidence-based online mental health support for
adolescents?
2. Did the program lead to early intervention for adolescents at risk of developing anxiety?
3. Did the program improve mental health outcomes for students with existing anxiety
symptoms?
4. Did the partnership with headspace add value to the program?
5. Did outcomes differ across the four streams?
6. Is the program model able to be replicated in other regions across Australia?
Figure 1: Program logic for the digital school-based anxiety prevention program
• Inputs: funding; headspace partnership; website and app development; UQ research team; online automated data collection; ICT and admin support; robust governance mechanisms; information package for participating schools; evaluation partner
• Activities: 6 x 30 min sessions + 2 boosters; eligibility (13-17 years, high school student, SEQ); escalation protocol for children identified at risk; evaluation activities (data collection, analysis, reporting); governance activities; consent and enrolment
• Outputs: # schools agreeing to participate; # students enrolled; % of students completing all sessions; % of students referred to counsellor; # of headspace centre partners; # of quality assurance and governance activities; quarterly reports; short sector report; 12 month evaluation report
• Impacts: improved self-management of mental health; increased resilience during stressful life events; improved mental wellbeing; reduced severity and frequency of anxiety symptoms
• Outcomes: contribute to the evidence base for digital youth mental health programs; make mental health support more accessible for adolescents; prevent escalation of mild symptoms into severe anxiety disorders; reduced need for more intensive clinical services
Evaluation design
Qualitative data will be collected at 12 months to supplement the triangulation and analysis of
quantitative data. The findings will be reported against the respective evaluation questions.
Methods
A mixed methods approach will be used, incorporating quantitative and qualitative research. The
online program will also function as a data collection platform to track how students are
accessing the program and administer the anxiety outcomes measure. This information will be
used for quarterly reporting to the funder and participating schools.
Individual face-to-face semi-structured interviews with past participants will be conducted to
gain a deeper understanding about their experiences around accessibility of mental health support
and how it impacted on their mental wellbeing.
All students will have the opportunity to provide input through the online survey, which offers an
anonymous and discreet option for students to provide feedback. The survey aims to fill
information gaps that are not covered by other data collection methods, through a mix of
multiple choice and open-ended questions.
Focus groups will be conducted with teachers and school counsellors to explore how the program
supported student mental health and identify barriers to implementation.
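To illustrate one option only, since the plan above does not prescribe a specific anxiety measure or statistical test, the short Python sketch below applies a paired pre-post comparison to hypothetical anxiety scale scores of the kind the online platform could export; the scores, the scale and the choice of a paired t-test are assumptions for illustration.

# A minimal sketch of a pre/post comparison of anxiety scores collected by the
# online platform. All numbers are hypothetical; the evaluation plan does not
# name the anxiety measure or the analysis.
from scipy import stats

baseline = [22, 18, 25, 30, 17, 21, 28, 19, 24, 26]   # hypothetical scores at enrolment
followup = [18, 17, 20, 27, 15, 20, 25, 16, 22, 21]   # same students at 12 months

t_stat, p_value = stats.ttest_rel(baseline, followup)  # paired test: same students, two time points
mean_change = sum(b - f for b, f in zip(baseline, followup)) / len(baseline)

print(f"Mean reduction in anxiety score: {mean_change:.1f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")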
Methodology
The evaluation will use a mixed methods approach, more specifically a sequential explanatory
design (Bishop, 2015; Ivankova, Creswell, & Stick, 2006). Qualitative data will be collected
from a small sample of students and staff to provide a deeper explanation of the quantitative data
(Bishop, 2015; Ivankova et al., 2006).
Qualitative data integrity degradation is a common issue associated with sequential explanatory
design (Bishop, 2015). Interview responses may be interpreted as direct recounts without
consideration of individual sociocultural context or subjective understandings of mental
PUBH7036
wellbeing (Bishop, 2015). Furthermore, it is acknowledged that the positivist epistemologies
associated with traditional quantitative methods fundamentally differ from the constructivist
epistemologies of qualitative research (Bishop, 2015). For these reasons, pragmatism is the
chosen epistemology for this evaluation. Pragmatism positions reality to include physical
phenomena as well as socially-constructed realities and subjective experiences (Gert, 2010).
Pragmatism therefore functions as a conduit between the two opposing philosophical approaches,
whilst also allowing the research paradigms to retain their original integrity (Bishop, 2015).
Participants
A preliminary service mapping exercise identified 30 secondary state schools in SEQ that fall
within a headspace centre catchment (Education Queensland, 2019). This provides the sampling
frame for this evaluation. The headspace centres that have confirmed their engagement with the
program include: Southport, Meadowbrook, Inala, Capalaba, Woolloongabba, Taringa, Ipswich,
and Nundah (headspace, 2019).
To be eligible, the institution must be a state school within the South East Queensland catchment
providing secondary education to students (13-17 years), and the school must also agree to
participate in evaluation activities as a requisite for participation.
Given resource constraints, capacity of researchers, travel time and scheduling around student
timetables, a maximum of 30 semi-structured interviews will be conducted. Pending expression
of interest, a mix of students from each of the four streams will be selected. However, the sample
size for interviews will ultimately be determined at the point of thematic saturation (Morse,
2000, 2015). Thematic saturation is defined by Morse (2000, 2015) as the comprehensiveness of
data and the repetition of ideas. One focus group will be conducted in each participating school,
with a maximum of 10 staff, including school teachers and counsellors (Guest, Namey, &
McKenna, 2017; Xerri, 2018). The survey will be open to all students in participating schools,
including those that opted not to participate and those that dropped out.
Recruitment
The partnership with headspace includes an intellectual property (IP) license. headspace’s
reputation in youth mental health will be leveraged to recruit schools into the program. The
schools will be approached initially by the local headspace centre. If they express interest, an
information package and consent forms will be emailed to the school executives, with two weeks
to consider participation. If the school does not respond within two weeks, the research team will
make one follow-up phone call.
The teachers will be instructed to provide an overview of the program to their class, outlining
what participation entails. Students will be required to complete the consent form to enrol in the
program, with one week to consider participation. Although the students are considered minors,
parental consent is not deemed necessary for the program. This is based on the premise that
youth do not need parental consent to access headspace services and the program aligns with the
2019 Supporting Students’ Mental Health and Wellbeing policy for state schools (Queensland
Government, 2019). Students who opt not to participate will still be invited to the headspace
workshops for their own benefit.
In consenting to participate in the program, students agree to automated data collection through
the online program. The semi-structured interviews will be promoted through school emails for
two weeks and students can nominate themselves to participate. The researchers will then use
maximum variation purposive sampling to select students from each of the four streams, to
ensure experiences across all four groups are captured (Palinkas et al., 2015). All secondary
students in participating schools will have the opportunity to complete the online survey.
Procedures
The semi-structured interviews will be held in a private room on school premises after school
hours for discretion. They will be scheduled for 30–45 minutes to allow the student and interviewer to
build rapport, as this can be challenging when discussing sensitive topics around mental health
(Leech, 2002). Students who complete the interview will receive a gift voucher for the school
cafeteria valued at $5, to incentivise participation in evaluation activities with minimal risk of
coercion (National Health and Medical Research Council, 2018).
The focus groups will also be conducted at school, for 1.5 hours, at a time that is most convenient
for staff to minimise disruption to classes (Xerri, 2018).
Survey responses can be submitted anonymously through a website link that will be sent to the
students’ school email. It will be open for four weeks and can be completed on a computer or
Android device. Considering the large sample size, the survey data will be used if the response
rate is greater than 15% (Manfreda, Bosnjak, Berzelak, Haas, & Vehovar, 2008).
The researchers are trained in facilitating focus groups and conducting semi-structured
interviews, with extensive experience working with youth. They are also trained in de-escalating
situations if a participant becomes distressed.
The facilitators will write field notes immediately after all interviews and focus groups about the
general tone of the discussion, setting, any disturbances, and other important initial impressions
(Dicicco-Bloom & Crabtree, 2006). The field notes will complement the transcriptions
throughout the thematic analysis to provide context around the conversations.
With consent from all participants involved, the focus groups and interviews will be audio
recorded to ensure all information is captured. If consent is not provided for audio recording, a
research assistant will attend the session to scribe.
Proposed Analysis
The qualitative data from the focus groups, interviews and open-ended survey questions will be
analysed using deductive thematic analysis (Fereday & Muir-Cochrane, 2006). The key
evaluation questions will be used as the framework to guide the grouping of codes into
respective themes (Fereday & Muir-Cochrane, 2006).
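To illustrate how the deductive framework could be operationalised, the sketch below (written in Python purely for illustration) shows a codebook keyed to the evaluation questions; the theme labels and example codes are hypothetical and would in practice be developed by the research team from the transcripts.

# Illustrative sketch only: a deductive codebook keyed to the key evaluation
# questions. The theme labels and codes below are hypothetical examples.
CODEBOOK = {
    "Access to online mental health support": [
        "ease of logging in",
        "privacy of the online format",
        "comparison with face-to-face services",
    ],
    "Mental health outcomes for students with existing anxiety": [
        "use of coping strategies",
        "change in frequency of worries",
    ],
    "Value added by the headspace partnership": [
        "referrals to the local headspace centre",
        "usefulness of the workshops",
    ],
}

def theme_for(code: str) -> str:
    """Return the evaluation-question theme a code is grouped under, if any."""
    for theme, codes in CODEBOOK.items():
        if code in codes:
            return theme
    return "unassigned (review against the evaluation questions)"

print(theme_for("usefulness of the workshops"))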
All quantitative data management and analysis will be conducted in Stata. The analysis will focus
on trends in mental health outcomes and in engagement with the program modules over time.
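As a rough illustration of the intended analysis logic (the actual analysis will be performed in Stata), the following Python sketch computes quarterly averages and the change in self-rated anxiety between the 1st and 4th quarters; the export file name and column names (student_id, quarter, anxiety_score, modules_completed) are hypothetical.

# Illustrative sketch only; the plan specifies Stata for the actual analysis.
# The export file and column names are hypothetical; quarters are assumed to be coded 1-4.
import pandas as pd

df = pd.read_csv("online_platform_export.csv")

# Average anxiety score and module completion for each quarter
quarterly_means = df.groupby("quarter")[["anxiety_score", "modules_completed"]].mean()
print(quarterly_means)

# Per-student change in self-rated anxiety from the 1st to the 4th quarter
scores = df.pivot(index="student_id", columns="quarter", values="anxiety_score")
scores["change_q1_to_q4"] = scores[4] - scores[1]
print(scores["change_q1_to_q4"].describe())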
Considering the mixed methods approach, the quantitative data will be triangulated with the
qualitative data to facilitate validation of the findings and construct more comprehensive multidimensional answers to the evaluation questions (Nowell, Norris, White, & Moules, 2017).
Quality of data and analysis
To optimise completion rates of outcomes data, students will be sent two reminder emails to fill
out the quarterly outcomes measures. Teachers will also be prompted to allocate 15 minutes of class
time for students to complete the measures online.
To maximise accuracy of qualitative data, audio recordings will be transcribed within two days
of collection by the same researcher who facilitated the session (MacLean, Meyer, & Estable,
2004). An assistant researcher will also conduct a secondary review of the transcription
(MacLean et al., 2004).
Thematic analysis is an iterative process that will require a number of revisions by each of the
researchers to ensure the content is themed appropriately and that the themes reflect the raw data
within the scope of the evaluation domains (Fereday & Muir-Cochrane, 2006).
Whilst deductive thematic analysis is more efficient than inductive approaches, it lacks
flexibility in interpretation and can often introduce bias (Fereday & Muir-Cochrane, 2006). In
order to address this limitation, students and staff who participated in qualitative data collection
will be invited to provide input on the write-up through an online feedback platform (Qualtrics,
2019). The findings will be presented in lay language so that individuals can provide honest
feedback on the selection, interpretation, and distribution of quotes (Richards & Hemphill,
2018). The feedback will then be incorporated before the evaluation report is disseminated
(Richards & Hemphill, 2018).
Data collection
Figure 2: Data collection plan for the 12 month evaluation
(Each row lists: indicator | tool | method of collection | timing)

Did the project increase access to evidence-based online mental health support for adolescents?
• Crude number of modules completed | N/A | Online quantitative data collection | Completion of program (1st quarter)
• Percentage of students who completed all modules | N/A | Online quantitative data collection | Completion of program (1st quarter)
• Number of website and app views | N/A | Online quantitative data collection | Throughout program
• Duration of time spent on each module | N/A | Online quantitative data collection | Throughout program
• Students’ experiences of accessing the program | N/A | Interviews with past participants | 12 months
• Rating of accessibility vs. other mental health supports | N/A | Online survey | 12 months

Did the program improve mental health outcomes for students with existing anxiety?
• Change in self-rated mental health outcomes over the 12 months | Modified Youth Worries and Fears Questionnaire‡ | Online quantitative data collection | 1st–4th quarter
• Proportion of students being escalated to more intensive mental health support† | N/A | Focus groups with teachers and school counsellors | 12 months
• Proportion of students meeting the clinical cut-off for anxiety at the 1st quarter vs. 4th quarter | Modified Youth Worries and Fears Questionnaire | Online quantitative data collection | 1st–4th quarter
• Students’ mental health and wellbeing as a consequence of the program (including unintended outcomes) | N/A | Interviews with past participants | 12 months

Did the program lead to early intervention for adolescents at risk of developing anxiety?
• Proportion of students with anxiety symptoms at the 1st quarter who had mild or no symptoms at the 4th quarter | Modified Youth Worries and Fears Questionnaire | Online quantitative data collection | 1st–4th quarter
• Number of students meeting the threshold for referral to the counsellor | Modified Youth Worries and Fears Questionnaire | Online quantitative data collection | 1st–4th quarter
• Benchmark prevalence of anxiety symptoms against state and national levels | N/A | Desktop review of state and national levels | 12 months

Did the partnership with headspace add value to the program?
• Collaboration between school staff and headspace (e.g. referrals from the counsellor to the local headspace centre) | N/A | Focus groups with teachers and school counsellors | 12 months
• Rating of the usefulness of the workshops | N/A | Online survey | 12 months

Did outcomes differ across the four streams?
• Average number of modules completed across the streams | N/A | Online quantitative data collection | Throughout program
• Average change in self-rated mental health across the streams | Modified Youth Worries and Fears Questionnaire | Online quantitative data collection | 1st–4th quarter

Is the program model able to be replicated in other regions across Australia?
• Key barriers, enablers and opportunities for the successful delivery of the program | N/A | Focus groups with teachers and school counsellors | 12 months
• Benchmark program with similar models in other regions | N/A | Desktop review | 12 months

† The school counsellor will automatically be alerted through the online system if a student reaches the referral threshold (three or more questions marked ‘Very often’). Students who opt to receive counselling through the school will continue to be part of the program; however, the data may be excluded from certain analyses.
‡ Refer to Appendix A for the Modified Youth Worries and Fears Questionnaire.
Resourcing
The 12 month impact and outcome evaluation will be conducted alongside the 12 month pilot
program. Three months have been allocated to complete the qualitative data collection and three
months for the final write-up, including feedback from participants. The research team will
consist of two senior researchers and two research assistants who will be based at the UQ Public
Health Research Unit. A health and social services consultancy will be engaged as an external
evaluation partner to add rigour and provide expertise throughout the data collection process.
The research assistants will be responsible for supporting the logistics of evaluation activities and
reviewing data analysis. With support from the external evaluation partner, the senior researchers
will facilitate the interviews and focus groups. The research team will then work together to
complete the synthesis and analysis of the data.
Budget
headspace partnerships: $30,000
Web developer: $105,000
External evaluation partner: $50,000
Travel expenses, incentives and administrative costs: $15,000
Total: $200,000
Activities in scope of the evaluation
• Liaising with schools and headspace centres regarding the program and evaluation activities
• Developing question guides, organising logistics and facilitating interviews and focus groups
• Developing question content, hosting and analysing data for the online survey
• Data extraction, cleansing and analysis of quantitative data from the online data collection system
• Collation and synthesis of qualitative data from interviews and focus groups
• Preparation of internal documents, including quarterly reports
• Preparation of outward-facing documents for dissemination, including the final report and infographics
Activities out of scope of the evaluation
• Unreasonable pursuit of students or stakeholders for participation in data collection
• Overseeing the mental health and wellbeing of students beyond the life of the project
• Developing new relationships with stakeholders beyond the scope of engagement outlined in this document
• Extensive data collection and analysis outside the agreed time and resource constraints
Outputs
The quarterly reports will be disseminated to all participating schools, the funding body and
partnering headspace centres. The final report will be published through the UQ Public Health
Unit and the headspace website. Infographics will be used to report back to the school
communities in a meaningful way. A short report will also be produced for the youth mental
health sector which summarises the findings from the evaluation and highlights the implications
for practice.
The schedule for the project is represented in Figure 3.
Figure 3: The projected timeline for the pilot program and 12 month evaluation (April 2019 to January 2021). Key milestones: preliminary literature review; program proposal; secure funding; finalise evaluation plan; commence pilot program and evaluation; quarterly report 1; complete pilot program (6 weeks + 1 month + 3 month boosters); quarterly reports 2, 3 and 4; conduct qualitative data collection; complete evaluation and final report.
Outcomes and significance
The rising burden of mental ill-health and suicide falls disproportionately on Australian youth
(ABS, 2018; AIHW, 2016). Whilst a range of clinical services are available, prevention still lags
significantly in mental health. With half of all mental health problems manifesting
during adolescence, the 13-17 year age group provides a critical window of opportunity to
intervene and reduce the risk of mental illness later in life (AIHW, 2016). In line with the shift
from reactive to proactive mental health promotion, this program offers a highly accessible
platform that is evidence-based, engaging and culturally appropriate. Implementing the program
in a school setting helps to normalise help-seeking behaviours and reinforce positive mental
health practices. Adolescents can be challenging to reach due to stigma, financial constraints, lack
of trust and social isolation (O’Connor, Martin, Weeks, & Ong, 2014; Paus, Keshavan, & Giedd,
2008). Thus, the partnership with headspace centres provides an important gateway for mental
health service providers to access hard-to-reach population groups.
This evaluation offers comprehensive insights into the impact and outcomes of an online mental
health program targeting anxiety in adolescents in SEQ. Unlike previous research in this field,
the 12 month follow-up period will enable researchers to investigate whether the effects of the program
are sustained longer-term. The findings from the evaluation have the potential to inform the
design and implementation of other online mental health programs targeting youth.
The evaluation will also reinforce the importance of prevention as a significant investment in the
mental health and wellbeing of young people. Equipping adolescents with practical coping
strategies builds their sense of agency to overcome challenging life events that would otherwise
be overwhelming. In this way, young people can reduce the risk of distressing situations
escalating into more serious mental health issues.
The success stories that emerge from this evaluation will resonate with a much larger community
of young Australians who have difficulty managing their mental health amongst other
competing priorities. Sharing positive narratives from students and staff about the program can
create greater awareness about the importance of early intervention. It will also open dialogue
around the crucial role of mental health and wellbeing within the curriculum.
Reference List
Australian Bureau of Statistics. (2018). Causes of Death, Australia, 2017. Retrieved from
https://www.abs.gov.au/ausstats/abs@.nsf/Lookup/by%20Subject/3303.0~2017~Main%2
0Features~Intentional%20self-harm,%20key%20characteristics~3
Australian Institute of Health and Welfare (AIHW). (2016). Mental health of Australia’s young
people and adolescents. Retrieved from https://www.aihw.gov.au/getmedia/42e2f292-4ebb-4e8d-944c-32c014ad2796/ah16-5-5-mental-health-australias-young-people-adolescents.pdf.aspx
Bishop, F. L. (2015). Using mixed methods research designs in health psychology: An illustrated
discussion from a pragmatist perspective. British Journal of Health Psychology, 20(1), 5-20. Retrieved from https://onlinelibrary.wiley.com/doi/abs/10.1111/bjhp.12122.
doi:10.1111/bjhp.12122
Calear, A. L., Batterham, P. J., Poyser, C. T., Mackinnon, A. J., Griffiths, K. M., & Christensen,
H. (2016). Cluster randomised controlled trial of the e-couch Anxiety and Worry program
in schools. Journal of Affective Disorders, 196, 210-217. Retrieved from
https://www.ncbi.nlm.nih.gov/pubmed/26926660. doi:10.1016/j.jad.2016.02.049
Calear, A. L., Christensen, H., Mackinnon, A., Griffiths, K. M., & O’Kearney, R. (2009). The
YouthMood Project: a cluster randomized controlled trial of an online cognitive
behavioral program with adolescents. J Consult Clin Psychol, 77(6), 1021-1032.
Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/19968379. doi:10.1037/a0017391
Dicicco-Bloom, B., & Crabtree, B. F. (2006). The qualitative research interview. Med Educ,
40(4), 314-321. doi:10.1111/j.1365-2929.2006.02418.x
Education Queensland. (2019). Schools Directory. Retrieved from
https://schoolsdirectory.eq.edu.au/
Fereday, J., & Muir-Cochrane, E. (2006). Demonstrating Rigor Using Thematic Analysis: A
Hybrid Approach of Inductive and Deductive Coding and Theme Development.
International Journal of Qualitative Methods, 5(1), 80-92. Retrieved from
https://journals.sagepub.com/doi/abs/10.1177/160940690600500107.
doi:10.1177/160940690600500107
Gert, B. (2010). Pragmatism and the Philosophical Foundations of Mixed Methods Research. In
SAGE Handbook of Mixed Methods in Social & Behavioral Research (2 ed.). Retrieved
from https://methods.sagepub.com/book/sage-handbook-of-mixed-methods-social-behavioral-research-2e doi:10.4135/9781506335193
Guest, G., Namey, E., & McKenna, K. (2017). How Many Focus Groups Are Enough? Building
an Evidence Base for Nonprobability Sample Sizes. Field Methods, 29(1), 3-22.
Retrieved from https://journals.sagepub.com/doi/abs/10.1177/1525822X16639015.
doi:10.1177/1525822X16639015
headspace. (2019). Our Centres. Retrieved from https://headspace.org.au/headspace-centres/
Ivankova, N. V., Creswell, J. W., & Stick, S. L. (2006). Using Mixed-Methods Sequential
Explanatory Design: From Theory to Practice. Field Methods, 18(1), 3-20. Retrieved
from https://journals.sagepub.com/doi/abs/10.1177/1525822X05282260.
doi:10.1177/1525822X05282260
Leech, B. L. (2002). Asking Questions: Techniques for Semistructured Interviews. PS: Political
Science & Politics, 35(4), 665-668. Retrieved from
https://www.cambridge.org/core/article/asking-questions-techniques-for-semistructuredinterviews/E1CF8B87E87F36611AEC4D4A20468DE5.
doi:10.1017/S1049096502001129
MacLean, L. M., Meyer, M., & Estable, A. (2004). Improving accuracy of transcripts in
qualitative research. Qual Health Res, 14(1), 113-123. doi:10.1177/1049732303259804
Manfreda, K. L., Bosnjak, M., Berzelak, J., Haas, I., & Vehovar, V. (2008). Web Surveys versus
other Survey Modes: A Meta-Analysis Comparing Response Rates. International Journal
of Market Research, 50(1), 79-104. Retrieved from
https://journals.sagepub.com/doi/abs/10.1177/147078530805000107.
doi:10.1177/147078530805000107
Manicavasagar, V., Horswood, D., Burckhardt, R., Lum, A., Hadzi-Pavlovic, D., & Parker, G.
(2014). Feasibility and effectiveness of a web-based positive psychology program for
youth mental health: randomized controlled trial. J Med Internet Res, 16(6), e140.
Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/24901900. doi:10.2196/jmir.3176
Morse, J. M. (2000). Determining Sample Size. Qualitative Health Research, 10(1), 3-5.
Retrieved from https://journals.sagepub.com/doi/abs/10.1177/104973200129118183.
doi:10.1177/104973200129118183
Morse, J. M. (2015). Data were saturated. Qualitative Health Research, 25(5), 587-588.
Retrieved from https://journals.sagepub.com/doi/abs/10.1177/1049732315576699.
doi:10.1177/1049732315576699
Muris, P., Simon, E., Lijphart, H., Bos, A., Hale, W., 3rd, Schmeitz, K., . . . Adolescent Anxiety
Assessment Expert Group. (2017). The Youth Anxiety Measure for DSM-5 (YAM-5):
Development and First Psychometric Evidence of a New Scale for Assessing Anxiety
Disorders Symptoms of Children and Adolescents. Child psychiatry and human
development, 48(1), 1-17. Retrieved from
https://www.ncbi.nlm.nih.gov/pubmed/27179521
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5243875/. doi:10.1007/s10578-016-0648-1
National Health and Medical Research Council. (2018). Payment of participants in research:
information for researchers, HRECs and other ethics review bodies. Retrieved from
https://learn.uq.edu.au/bbcswebdav/pid-4485036-dt-content-rid18607967_1/courses/PUBH7000S_6920_25087/payment_of_participants_in_research_in
fo_for_researchers_hrecs_and_other_ethics_review_bodies.pdf
Nowell, L. S., Norris, J. M., White, D. E., & Moules, N. J. (2017). Thematic Analysis: Striving to
Meet the Trustworthiness Criteria. International Journal of Qualitative Methods, 16(1),
1609406917733847. Retrieved from
https://journals.sagepub.com/doi/abs/10.1177/1609406917733847.
doi:10.1177/1609406917733847
O’Connor, P. J., Martin, B., Weeks, C. S., & Ong, L. (2014). Factors that influence young
people’s mental health help-seeking behaviour: a study based on the Health Belief Model.
Journal of Advanced Nursing, 70(11), 2577-2587. doi:10.1111/jan.12423
Palinkas, L. A., Horwitz, S. M., Green, C. A., Wisdom, J. P., Duan, N., & Hoagwood, K. (2015).
Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method
Implementation Research. Administration and Policy in Mental Health and Mental
Health Services Research, 42(5), 533-544. Retrieved from
https://doi.org/10.1007/s10488-013-0528-y. doi:10.1007/s10488-013-0528-y
Paus, T., Keshavan, M., & Giedd, J. N. (2008). Why do many psychiatric disorders emerge
during adolescence? Nature reviews. Neuroscience, 9(12), 947-957. doi:10.1038/nrn2513
Qualtrics. (2019). Qualtrics Experience Management. Retrieved from
https://www.qualtrics.com/au/
Queensland Government. (2019). Supporting students’ mental health and wellbeing. Retrieved
from
http://ppr.det.qld.gov.au/education/learning/Procedure%20Attachments/Supporting%20St
udents%20Mental%20Health%20and%20Wellbeing/Supporting%20students%20mental
%20health%20and%20wellbeing.pdf
Richards, K. A. R., & Hemphill, M. A. (2018). A Practical Guide to Collaborative Qualitative
Data Analysis. Journal of Teaching in Physical Education, 37(2), 225-231. Retrieved
from https://journals.humankinetics.com/doi/abs/10.1123/jtpe.2017-0084.
doi:10.1123/jtpe.2017-0084
Spence, S. H., Donovan, C. L., March, S., Kenardy, J. A., & Hearn, C. S. (2017). Generic versus
disorder specific cognitive behavior therapy for social anxiety disorder in youth: A
randomized controlled trial using internet delivery. Behav Res Ther, 90, 41-57.
doi:10.1016/j.brat.2016.12.003
Xerri, D. (2018). The Use of Interviews and Focus Groups in Teacher Research. The Clearing
House: A Journal of Educational Strategies, Issues and Ideas, 91(3), 140-146. Retrieved
from https://doi.org/10.1080/00098655.2018.1436820.
doi:10.1080/00098655.2018.1436820
Appendices
Appendix A: Modified Youth Worries and Fears Questionnaire (Self-Administered)
Scoring:
Participants put a check mark in one of the boxes numbered 1, 2, or 3 that best describes
how they usually feel (Muris et al., 2017). The tool has been modified to remove the last
box “Clearly more than other young people at this age”, as this is a parent-rated measure
(Muris et al., 2017).
If three or more items are scored 3, the student is referred to the counsellor. Whilst the Youth
Worries and Fears Questionnaire is not a diagnostic tool, the clinical cut-off is considered
six or more checks for ‘Very often’ (Muris et al., 2017).
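For clarity, the scoring and referral rules above can be expressed as a simple decision rule. The sketch below (Python, illustrative only) assumes each item is stored as a score of 1, 2 or 3, with a score of 3 corresponding to a ‘Very often’ response.

# Minimal sketch of the scoring rules described above (assumption: a score of 3
# on an item corresponds to a 'Very often' response).
from typing import List

REFERRAL_THRESHOLD = 3  # three or more items scored 'Very often' -> refer to counsellor
CLINICAL_CUTOFF = 6     # six or more items scored 'Very often' -> clinical cut-off

def screen(item_scores: List[int]) -> dict:
    very_often = sum(1 for score in item_scores if score == 3)
    return {
        "refer_to_counsellor": very_often >= REFERRAL_THRESHOLD,
        "meets_clinical_cutoff": very_often >= CLINICAL_CUTOFF,
    }

# Example: three items marked 'Very often' triggers a referral but not the clinical cut-off
print(screen([3, 3, 3, 1, 2, 1, 2, 1]))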
Process and impact evaluation plan for the Healthy Relationships Education
in Schools (ALLIES) program in Queensland
Description and rationale
The heALthy reLationshIps Education in Schools (ALLIES) program targets the primary
prevention of relationship and dating violence among adolescents, with the long-term goal of
reducing domestic and sexual violence in adulthood. Domestic and sexual violence is a serious
and prevalent problem in Australia with substantial health, economic and social impacts
(Australian Institute of Health and Welfare, 2018). Moreover, there is evidence that young
Australians, aged 16 to 24 years, continue to hold attitudes that support violence against women
(Harris, Honey, Webster, Diemer, & Politoff, 2015; Politoff et al., 2019). Attitudes and
behaviours regarding interpersonal relationships are shaped in adolescence and healthy
relationships education in schools is a valuable opportunity to prevent current and future harm
by changing sexist attitudes, improving communication skills, developing conflict management
skills and generating a safe and inclusive culture.
The aim of ALLIES is to reduce relationship and dating violence among adolescents in
Queensland. The primary objective is to improve knowledge, attitudes and skills regarding
relationship and dating violence among Queensland high school students. Secondary objectives
include decreased violence perpetration and victimisation; school culture change; improved
student mental health and increased school attendance.
ALLIES is a 3-year program that combines elements from two interventions whose efficacy has
been demonstrated in randomised controlled trials in the United States (Coker et al., 2017;
Levesque, Johnson, Welch, Prochaska, & Paiva, 2016). The target group are adolescents (male
and female, aged 12-18 years), and the setting is high schools in Queensland. Phase 1 (Year 1)
involves 3 x 30 minute computer-based, individually-tailored healthy relationships education
sessions for all students (Levesque et al., 2016). Phase 2 (Years 2-3) involves an annual 5-hour
bystander training session for a subset of student opinion leaders, directed at proactive
intervention against peer violence (Coker et al., 2017). The modalities are online multimedia
programs for Phase 1, and face-to-face group training by staff from community domestic
violence organisations (e.g., Domestic and Family Violence Support Services, Relationships
Australia) for Phase 2. The study design is quasi-experimental and control schools will
undertake the standard relationships education curriculum. Follow-up assessments will occur at
12 months, 24 months and 36 months. A program logic model for ALLIES is shown in Figure 1.
Figure 1. ALLIES program logic model
Evaluation questions
ALLIES has not been previously implemented in Queensland so a process and impact
evaluation, using elements of the RE-AIM framework (Glasgow, Vogt, & Boles, 1999), will
address the following questions:
Q1. What are the number, proportion and representativeness of high schools that consented
to and implemented the ALLIES program? (adoption)
Q2. What are the number, proportion and representativeness of students that consented to
and completed the ALLIES program? (reach)
Q3. Was ALLIES implemented as intended? (implementation)
Q4. Were ALLIES’ primary and secondary objectives achieved at the end of the program?
(effectiveness)
Evaluation design
1. Methodology
The purpose of the evaluation is to assess the ALLIES program under real-world conditions.
Therefore a pragmatic methodology and mixed methods approach will be employed (Bauman &
Nutbeam, 2014). Quantitative and qualitative data will be collected in parallel to provide a
comprehensive appraisal of the ALLIES program.
2. Methods
Surveys – online surveys will be used to collect quantitative data from all program participants,
or as many participants as possible. The surveys will be written in plain, everyday language and
contain pre-coded questions or a Likert scale response format with options for additional
comments.
Focus groups and interviews – focus groups and semi-structured interviews will be performed
with a subset of program participants. Questions will be open-ended to encourage a range of
responses and provide opportunity to collect unanticipated information (Bauman & Nutbeam,
2014). The resulting qualitative data will be used to inform and explain the quantitative results.
Survey and focus group/interview content will be guided by the four evaluation questions (Q1–Q4) outlined above, and will be pre-tested in a pilot study with a small number of students,
parents, teachers, program facilitators and school principals to ensure that it is comprehensible,
acceptable and relevant.
3. Data sources:
a. Data collection
Adoption (Q1) – all high schools in Queensland will be identified through the Queensland
Department of Education. Adoption data will include the:
• Proportion of high schools invited to participate in the program, plus basic characteristics such as total number of students, location (e.g., rural, urban), type (e.g., co-educational, single-sex, government, catholic) and dropout rates,
• Proportion and basic characteristics of high schools that agree to participate,
• Proportion of high schools that stay in the program, and
• Reasons for agreeing to participate or exiting the program before completion.
Data on school characteristics will be used to assess the representativeness of participating high
schools and therefore the generalisability of the findings to other schools. It will also be used to
identify common characteristics among high schools that do not agree to participate or drop out
of the program, providing an evidence base for strategies to improve adoption. Data will be
collected by program coordinators from audits of high school consent and participation, surveys,
focus groups and interviews.
Reach (Q2) – all students from participating high schools will be identified from school records
and included in the ALLIES program unless they (or their parents) elect to opt out. Reach data
will include the:
• Proportion of students that agree to participate in the program, plus demographics such as age, sex, ethnicity, socioeconomic position and school attendance pattern,
• Proportion and demographics of students that opt out of the program,
• Proportion of students that stay in the program, and
• Reasons for exiting the program before completion.
Demographic data will be used to assess the representativeness of the student sample and to
determine whether there is equity in program reach. A previous study has shown that students at risk of relationship and dating violence (e.g., from a low socioeconomic position or racial
minority) are more likely to be truant/drop out and be missed by school-based prevention
programs (Levesque et al., 2016). If this is the case for ALLIES, changes may be necessary to
increase reach to particular student subgroups. Data will be collected by program coordinators
from audits of student consent and program attendance, surveys and focus groups.
Implementation (Q3) – data will include the:
• Proportion of staff from community domestic violence organisations that were recruited, completed facilitator training and facilitated bystander training in high schools
• Proportion of students that attended healthy relationships education sessions (Phase 1) and/or bystander training (Phase 2)
• Proportion of program elements that were delivered as intended
  o Phase 1: computer-based monitoring of student log in and completion of sessions
  o Phase 2: facilitators will complete a checklist of bystander training delivery
• Changes made to bystander training (Phase 2) in response to local circumstances, including number, type and reasons for the changes (recorded by facilitators on the implementation checklist)
• Adequacy of the training and resources for program implementation (assessed by program facilitators and high schools)
• Facilitators and barriers to implementation (all participants)
• Level of engagement between local community domestic violence organisations and high schools (assessed by program facilitators and high schools)
• Privacy or confidentiality concerns (all participants)
• Experience of and satisfaction with the ALLIES program (all participants)
Evaluation of implementation, including fidelity, provides quality assurance and an evidence
base for program improvement. Data will be collected by program coordinators from audits of
training or program participation/completion, surveys, focus groups and interviews.
Effectiveness (Q4) – data will include:
• Students’ understanding (e.g., concepts, causes) and attitudes on relationship and dating violence, gender equality and respectful relationships,
• Students’ confidence to negotiate interpersonal relationships,
• Students’ knowledge of how and where to access counselling and support services,
• Proportion of students that have increased confidence to discuss, or have had discussions about, relationship and dating violence with peers or other people (e.g., parent, teacher),
• Proportion of students that have increased confidence in taking, or have taken, pro-social bystander action,
• Proportion of students that have experienced violence perpetration and/or victimisation,
• Proportion of students that have initiated contact with counsellors in school,
• Proportion of students that have accessed external counselling and support services,
• Information on unexpected benefits or harms, including number, type and timing (all participants),
• Program delivery costs (as assessed by facilitators and high schools).
To determine whether the primary and secondary objectives of the program have been met, data
will be compared between high schools that implemented the ALLIES program and high schools
that implemented the standard relationships education curriculum (control group). Data will be
collected by program coordinators from surveys, focus groups and interviews.
b. Participants
Primary program stakeholders will be students, parents, teachers and high schools. Secondary
stakeholders, directly involved in implementation, will be local community domestic violence
organisations (e.g., Domestic and Family Violence Support Services, Relationships Australia)
and their staff. Key stakeholders will be the Queensland Department of Education, Catholic
School Authorities and Queensland Secondary Principals’ Association.
c. Recruitment strategy
Ethics approval for program evaluation will be obtained prior to the commencement of any
recruitment or data collection. High schools will be engaged through the Queensland
Department of Education and Catholic School Authorities using an expression of interest
process. Local community domestic violence organisations will be engaged by the program
manager and staff will be sought to facilitate the bystander training in Phase 2. Written consent
will be obtained for program participation, including all surveys, focus groups and interviews,
except for students. Instead, an ‘opt out’ consent process will be used for students to reduce the
implementation burden on schools and teachers. Parents of students will be sent a plain language
information sheet explaining the program and a withdrawal form that can be signed and returned
if they do not wish their child to participate in the program. Students will also be reminded at the
start of each session that their participation is voluntary and they can withdraw at any time.
d. Procedures
Table 1 summarises the data collection process. Data collection tools will be modified from
those used by Kearney, Gleeson, Leung, Ollis, and Joyce (2016). Surveys will be administered
online using SurveyMonkey (https://www.surveymonkey.com). Program coordinators will
facilitate 1-hour focus group sessions at each school or conduct 30 minute semi-structured
interviews by phone. Focus groups and interviews will be audio-recorded. The number of focus
groups and interviews is not yet known — data will be collected until saturation (i.e., no new
information is being obtained). All data will be de-identified to ensure privacy and
confidentiality.
Table 1. Summary of the tools, target populations and timing of data collection for ALLIES
(Time points: baseline, post-training, during delivery, 12 months, 24 months and 36 months)
• Student survey – all students
• Parent survey – all parents
• School staff & leadership team survey – all school staff (e.g., teachers, principals, counsellors)
• Facilitator survey – all facilitators (bystander training)
• Implementation checklist – all facilitators (bystander training)
• Student focus groups – subset of students
• Parent focus groups – subset of parents
• School staff & leadership team focus groups or interviews – subset of school staff
• Facilitators & local community organisations focus groups or interviews – facilitators & local community domestic violence organisations
e. Proposed analysis
School characteristics and student demographics will be summarised using descriptive statistics.
Most of the outcome variables will be categorical (i.e., proportions) and Chi-square tests will be
used for comparisons between groups. The Chi-square test assumption of independence is
satisfied by the two group (quasi-experimental) study design. Statistical modelling (e.g., logistic
regression) will be used for multivariate analysis. A P-value < 0.05 will be considered statistically significant. Qualitative data will be analysed using inductive thematic analysis, which involves transcribing the data verbatim, systematic coding of features, and identifying and reviewing themes (Braun & Clarke, 2006).
f. Data quality
Surveys – data entry errors will be avoided by the use of online surveys. Survey responses will be checked for consistency between related questions.
Implementation checklist – program coordinators will observe a number of sessions to check the consistency of program delivery with implementation checklists. Some sessions may also be recorded.
Focus groups and interviews – all transcripts will be compared against original audio-recordings and member checking of transcripts and findings will be used to ensure accurate representation.
Resources to complete the evaluation
Time – a total of 4 months in addition to the 3 year ALLIES implementation period.
Staff – a manager to oversee the program and two coordinators who will execute the evaluation.
Funds – program facilitator payments, production of training and evaluation materials (including online surveys), office space and equipment hire.
Locations – office space for evaluation staff and facilitator training, access to meeting rooms in high schools for focus groups and interviews.
Resources – general office equipment (e.g., computers, phones), audio-recording equipment for focus groups and interviews, specialist software for data analysis.
Timeline
3 months before ALLIES implementation
• Staff recruitment (program manager & coordinators)
• Infrastructure set up and preparation of program materials
• Recruitment of stakeholders, including facilitators
• Audits of consent
• Training of facilitators
• Post-training survey, focus groups, interviews
ALLIES implementation (Years 1-3)
• Data collection
• Audits of program implementation and participation
• Surveys
• Focus groups
• Interviews
• Initiation of data analysis
1 month after ALLIES implementation
• Completion of data analysis
• Dissemination of findings to all stakeholders
Outcomes and Significance
Physical and sexual violence or emotional abuse by a current or previous partner, particularly against women, is a widespread problem in Australia that has serious health, social and economic consequences for individuals and the community (Australian Institute of Health and Welfare, 2018). Many of the attitudes and behaviours regarding gender equality and interpersonal relationships are formed during adolescence, and national surveys of young Australians have highlighted the troubling persistence of views that support violence against women (Harris et al., 2015; Politoff et al., 2019). The goal of ALLIES is to reduce the risk of current and future harm through the primary prevention of relationship and dating violence among adolescents in Queensland. Multiple, complex factors contribute to domestic and sexual violence, including at the personal, situational, social and cultural level (Heise, 1998). ALLIES targets many of these factors to change knowledge, attitudes and behaviour; however, program evaluation is necessary to determine the extent to which this has been achieved and identify areas for further improvement. In the short term, program outcomes have the potential to influence the Australian curriculum, and how respectful relationships education is taught in high schools. In the long term, ALLIES could contribute to significant enhancements in women’s health and well-being, with cumulative downstream benefits for society.
References
Australian Institute of Health and Welfare. (2018). Family, domestic and sexual violence in Australia. (Cat. no. FDV 2). Canberra: AIHW. Retrieved from https://www.aihw.gov.au/getmedia/d1a8d479-a39a-48c1-bbe2-4b27c7a321e0/aihw-fdv02.pdf.aspx?inline=true
Bauman, A., & Nutbeam, D. (2014). Evaluation in a nutshell: a practical guide to the evaluation of health promotion programs (2nd ed.). North Ryde, NSW: McGraw-Hill Education (Australia).
Braun, V., & Clarke, V. (2006). Using thematic analysis in psy…

  