Understanding "Harm" and "Discomfort" in Research and Evaluation

The fundamental differentiator in ethical risk pathways.
29 September 2025 by Gerard Atkinson

In the realms of program and policy evaluation, social research, and market research, it is crucial to distinguish between "harm" and "discomfort." These terms, while sometimes used interchangeably, represent different levels of risk and impact on participants. Understanding these differences is essential for practitioners to assess and mitigate risks effectively.

Importantly, the difference between “harm” and “discomfort” represents a key decision point for different pathways for ethical review. A foreseeable (sometimes referred to as “non-negligible”) risk of harm to stakeholders within a project requires that the project is reviewed by a full Human Research Ethics Committee. Projects where the highest foreseeable risk is discomfort are eligible for alternate pathways of review, most commonly a Lower Risk review. 

This resource is designed to help you consider your projects in terms of risks of discomfort and harm, characterise risks and likelihoods, and in doing so help you to identify which pathway is most appropriate for your project. While each project will be different in terms of the stakeholders and the risks, this approach can help clarify thinking and enable a systematic approach.

Preface

Differentiating between harm and discomfort is not straightforward. The important things to remember when developing projects and activities are:

  • Discomfort and harm sit on a spectrum.
  • The actual thresholds on that spectrum are very hard to pin down; they can be informed by:
    • context; 
    • culture; and, 
    • individual participant preferences and ways of being
  • What is important is that researchers and evaluators think about how to mitigate risks around both discomfort and harm.
  • Research or evaluation that carries a risk of harm is not inherently unethical. In practice it is about: 
    • identifying potential risks of discomfort or harm;
    • mitigating risks wherever possible; and, 
    • justifying where harm may occur in terms of the likely benefits, especially when balancing individual harm against a broader social benefit.

This resource cannot cover all the possible risks, discomforts and harms that may arise during projects, nor can it provide an easy way to differentiate between them. But it does give some pointers on how to mitigate risks, and to consider whether something may move from one end of the spectrum to the other.

Definitions and Differences

Chapter 2.1 of the National Statement on Ethical Conduct in Human Research sets out that risk is continuous in nature. However, in practical terms, it also sets out that HRECs can differentiate between higher-risk projects, where there is a foreseeable risk of harm, and lower-risk projects, where the highest foreseeable risk is one of discomfort.

Harm refers to any negative impact that affects the well-being of participants. This can be physical, psychological, social, or economic in nature, as well as take form in other ways. Harm is often considered more severe and long-lasting compared to discomfort. Examples of harm include physical injury, emotional trauma, social stigmatisation, or financial loss.

Discomfort, on the other hand, refers to temporary and less severe negative experiences that participants might encounter during a study or evaluation. Discomfort can include feelings of unease, minor stress, or inconvenience. While discomfort is generally short-lived and less impactful, it is still important to acknowledge and address it to maintain ethical standards.

There is a range of harms and discomforts that can arise during a project. The following sets out, for each type of harm, an illustrative example, a corresponding example of a discomfort (not a harm), and general strategies to consider to minimise risk in both cases:

Physical Harm
  • Example of a harm: A participant experiences injury during a physical activity in a community-based intervention study.
  • Example of a discomfort (not harm): Feeling tired after participating in a long interview or focus group.
  • Risk minimisation strategies: Conduct risk assessments; ensure safe environments; provide breaks; obtain informed consent.

Psychological Harm
  • Example of a harm: A participant becomes distressed after discussing past trauma in a qualitative interview.
  • Example of a discomfort (not harm): Feeling slightly anxious when answering questions about personal beliefs or values.
  • Risk minimisation strategies: Use trauma-informed approaches; offer support resources; allow withdrawal at any time.

Devaluation of Personal Worth
  • Example of a harm: A participant feels humiliated due to leading questions that imply negative stereotypes.
  • Example of a discomfort (not harm): Feeling mildly uncomfortable when asked to reflect on personal failures or challenges.
  • Risk minimisation strategies: Design respectful instruments; train researchers in cultural sensitivity and ethical conduct.

Cultural Harm
  • Example of a harm: A research project misrepresents Indigenous cultural practices in its findings.
  • Example of a discomfort (not harm): Feeling uneasy when cultural practices are discussed in a way that lacks nuance but is not offensive.
  • Risk minimisation strategies: Engage with cultural advisors; co-design with communities; respect cultural protocols.

Social Harm
  • Example of a harm: Disclosure of sensitive information leads to stigma or damaged relationships within a community.
  • Example of a discomfort (not harm): Feeling awkward when discussing sensitive topics in a group setting.
  • Risk minimisation strategies: Ensure confidentiality; use anonymised data; avoid group settings for sensitive topics.

Economic Harm
  • Example of a harm: Participants incur costs (e.g., travel, lost wages) to attend research sessions without reimbursement.
  • Example of a discomfort (not harm): Minor inconvenience from taking time off work to participate, with no significant financial impact.
  • Risk minimisation strategies: Provide reimbursements or compensation; offer flexible participation options.

Legal Harm
  • Example of a harm: A participant inadvertently discloses illegal activity during an interview, leading to legal consequences.
  • Example of a discomfort (not harm): Feeling nervous about discussing borderline legal behaviour, but no action is taken or required.
  • Risk minimisation strategies: Clarify limits of confidentiality; avoid probing into illegal activity unless ethically justified.

It is worth noting that the risk (both in terms of likelihood and magnitude of impact) can vary from stakeholder to stakeholder and can include non-participants such as family or broader community members. A holistic approach to assessment of risk in proposed activities is therefore necessary as part of good project planning.

What is a foreseeable risk? 

Unfortunately, the National Statement is silent on a working definition of “foreseeable”. However, as the National Statement primarily draws its ethical basis from professional norms, philosophical principles (especially utilitarianism), and common law principles, we can infer a working definition from these sources.

Australian tort law outlines a concept of “foreseeable harm”: whether the specific harm could reasonably be anticipated by a person of normal competence undertaking the activities in question.

A legislated example of this can be found in the duty of care provisions of the Civil Liability Act 2003 (Qld) at section 9:

(1) A person does not breach a duty to take precautions against a risk of harm unless:
    (a) the risk was foreseeable (that is, it is a risk of which the person knew or ought reasonably to have known); and
    (b) the risk was not insignificant; and
    (c) in the circumstances, a reasonable person in the position of the person would have taken the precautions.
(2) In deciding whether a reasonable person would have taken precautions against a risk of harm, the court is to consider the following (among other relevant things):
    (a) the probability that the harm would occur if care were not taken;
    (b) the likely seriousness of the harm;
    (c) the burden of taking precautions to avoid the risk of harm;
    (d) the social utility of the activity that creates the risk of harm.

This definition and “test” of the courts is helpful, as it gives us a framework for the identification and management of risks in an ethical context. Both project managers and reviewers can consider potential risks arising for a proposed project, identify their significance, and whether precautions are reasonable in the context of the project.
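To make the s9 factors concrete, the quoted test can be captured as a structured checklist during project planning. The following is a minimal illustrative sketch, not a legal tool; the `RiskAssessment` class, its field names, and the `requires_precautions` logic are assumptions introduced here for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch: the s9 factors recorded as a structured checklist.
# Field names and decision logic are assumptions, not a legal test.

@dataclass
class RiskAssessment:
    description: str
    foreseeable: bool        # s9(1)(a): known or ought reasonably to be known
    not_insignificant: bool  # s9(1)(b): more than a negligible risk
    probability: str         # s9(2)(a): probability the harm occurs without care
    seriousness: str         # s9(2)(b): likely seriousness of the harm
    precaution_burden: str   # s9(2)(c): cost/effort of taking precautions
    social_utility: str      # s9(2)(d): benefit of the activity creating the risk

    def requires_precautions(self) -> bool:
        # Under s9(1), precautions are only in question when the risk is
        # both foreseeable and not insignificant.
        return self.foreseeable and self.not_insignificant

risk = RiskAssessment(
    description="Distress when discussing past trauma in interviews",
    foreseeable=True,
    not_insignificant=True,
    probability="medium",
    seriousness="moderate",
    precaution_burden="low (trauma-informed protocol, support referrals)",
    social_utility="high (informs service design)",
)
print(risk.requires_precautions())  # True: plan and document mitigations
```

Recording the qualitative s9(2) factors alongside the yes/no s9(1) gate keeps the reasoning visible to both the project team and the reviewing committee.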

Given the above considerations, the best approach to data management in a project is not necessarily obvious. The following sections look at each of the relevant clauses and give some examples of where they might or might not apply in practice.

3.1.43: How are we approaching collaborative agreements?

When multiple researchers are collaborating on collection, storage and/or analysis of data or information, they should agree to the arrangements for custodianship, storage, retention and destruction of those materials, as well as to rights of access, rights to analyse/use and re-use the data or information and the right to produce research outputs based upon them. Researchers should consider whether any intellectual property will be generated by the project and agree on the ownership of any intellectual property created. Agreements on such arrangements and ownership need not necessarily be in the form of a contractual document, but should facilitate a clear resolution of these issues.

As a practical example, if a local council commissions a social impact evaluation involving external consultants and internal analysts, all parties should agree on who owns the data, who can access it, how long it is retained, and whether it can be reused in future projects. This is typically documented in the contract for services and the project plan, but may also form part of a Memorandum of Understanding and/or a shared project charter. 

3.1.44: How are we planning for data management?

For all research, researchers should develop a data management plan that addresses their intentions related to generation, collection, access, use, analysis, disclosure, storage, retention, disposal, sharing and re-use of data and information, the risks associated with these activities and any strategies for minimising those risks.
The plan should include:
  • Security measures (e.g. encrypted servers, password protection)
  • Policies and procedures (e.g. internal data handling protocols)
  • Contractual and confidentiality agreements
  • Training for team members
  • Storage format (e.g. CSV, transcripts, audio files)
  • Intended uses and disclosures
  • Access conditions
  • Information to be communicated to participants
  • Consent strategy (extended, unspecified, or waived)

In practice, all projects should have a data management plan that outlines the details of how all data will be managed during and after the project. For example, a market research firm conducting a survey on consumer behaviour should document how survey data will be stored (e.g. encrypted cloud storage), who will access it (e.g. analysts only), and whether it will be reused for future trend analysis. Participants should be informed of these intentions during the consent process.
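As an illustration, a data management plan can be held as structured data and checked for completeness against the checklist above. This is a sketch only; the field names and the `REQUIRED_FIELDS` set are assumptions introduced here, not a standard schema.

```python
# Illustrative sketch: a data management plan captured as a plain dictionary
# so it can be reviewed, versioned, and checked for completeness.
# Field names mirror the checklist in the text and are assumptions.

REQUIRED_FIELDS = {
    "security_measures", "policies_and_procedures", "agreements",
    "training", "storage_format", "intended_uses", "access_conditions",
    "participant_information", "consent_strategy",
}

dmp = {
    "security_measures": "Encrypted cloud storage; password-protected files",
    "policies_and_procedures": "Internal data handling protocol",
    "agreements": "Confidentiality clauses in the contract for services",
    "training": "Annual privacy training for all analysts",
    "storage_format": "CSV survey exports; de-identified transcripts",
    "intended_uses": "Consumer behaviour analysis; possible future trend analysis",
    "access_conditions": "Analysts only, via role-based access",
    "participant_information": "Uses and retention described in consent form",
    "consent_strategy": "Extended consent for re-use in related research",
}

# Flag any checklist items the plan has not yet addressed
missing = REQUIRED_FIELDS - dmp.keys()
print("Plan complete" if not missing else f"Missing: {sorted(missing)}")
```

A simple completeness check like this can be run whenever the plan is updated, so gaps surface before data collection begins rather than at review.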

Assessing Risks in Projects 

Risk identification and management should be a standard part of all projects. It is a vital tool in ensuring good governance in all aspects of a project and provides checkpoints for reflection and monitoring by project teams and governance groups.

In the context of ethical review, practitioners can assess risks by following a systematic approach:

  1. Identify Potential Risks: Begin by identifying all possible risks associated with the project. This includes risks of both harm and discomfort. Consider the nature of the project, the methods used, and the characteristics of the participants.
  2. Evaluate Severity and Likelihood: Assess the severity and likelihood of each identified risk. Severity refers to the potential impact on participants, while likelihood refers to the probability of the risk occurring. This helps in prioritising risks that need more attention.
  3. Implement Mitigation Strategies: Develop strategies to mitigate identified risks. This can include modifying study procedures, providing additional support to participants, or obtaining informed consent that clearly outlines potential risks.
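The three steps above can be sketched as a simple risk register that scores each risk and ranks it for mitigation planning. This is an illustrative sketch under stated assumptions: the 1–3 severity and likelihood scales, the example risks, and the multiplicative scoring are choices made here for illustration, not a prescribed method.

```python
# Step 1: identify potential risks (examples drawn from earlier in the text).
# Step 2: evaluate severity x likelihood to prioritise.
# Step 3: work through mitigations starting with the top-ranked risk.
# The ordinal scales below are assumptions for illustration.

SEVERITY = {"discomfort": 1, "moderate harm": 2, "serious harm": 3}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}

risks = [
    ("Fatigue after a long focus group", "discomfort", "likely"),
    ("Distress when recalling past trauma", "moderate harm", "possible"),
    ("Stigma from disclosure of sensitive information", "serious harm", "rare"),
]

# Score each risk and sort so the highest-priority items come first
scored = [
    (desc, SEVERITY[sev] * LIKELIHOOD[lik], sev, lik)
    for desc, sev, lik in risks
]
scored.sort(key=lambda r: r[1], reverse=True)

for desc, score, sev, lik in scored:
    print(f"{score}: {desc} ({sev}, {lik})")
```

Note that a low-likelihood, high-severity risk can outrank a near-certain discomfort, which is why both dimensions need to be scored rather than relying on gut feel.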

A useful standard for this is that set out in ISO 31000 (noting that this is not a legal framework). While this international standard for risk management is typically applied in the context of project and organisational governance, it does provide comprehensive guidance on the identification process, along with a classification of mitigation strategies for risks.

It is worth noting that even where there is a foreseeable risk of harm or discomfort, it may be justified where the likely benefits of the project outweigh these risks, and where reasonable steps are taken to mitigate the risks (corresponding to s9(2)(c) and s9(2)(d) of the above Act). This is where the input of the ethics committee can play a useful role. The committee can help identify likely benefits and weigh these against risks, and provide practical advice on steps that can be taken to mitigate the risks. It is important that this is done in concert with the applicant so that the advice is reasonable in the context of the project and does not in itself introduce other risks or diminish the likely benefits of the project.

It is also worth noting, as the National Statement does, that:

“such assessment inevitably involves the exercise of judgement”

Differentiating between “harm” and “discomfort”

"Harm" is generally understood as a more severe and longer-lasting impact than "discomfort". A participant might experience significant discomfort without suffering any injury or lasting negative consequence. Harm, by contrast, implies an impact that crosses the threshold into injury or lasting distress.

Unfortunately, there is little information within the National Statement or in common law to reliably distinguish the two concepts beyond their permanence and the magnitude of their impact.

However, philosophical ethics provides a possible further pathway to differentiate harm from discomfort. There is a concept, articulated by Michel Foucault and subsequently developed by scholars such as Shoshana Felman, called an “Ethic of Discomfort”. In this concept, discomfort is further differentiated from harm by the potential for discomfort to have a beneficial, transformative effect on the person experiencing it. Notably, this discomfort is not brought about at the cost of safety, and the researcher/evaluator has a responsibility to ensure that safety measures are in place.

Therefore we can describe a test to help differentiate a harm from a discomfort:

1. Is the impact temporary in nature?

2. Is the impact proportionate to the objective of the activity?

3. Is the impact necessary to enable the positive and likely benefits of the activity for the impacted person? (e.g. processing an experience, reflection, and self-awareness)

If the answer to all three questions is yes, the impact is more likely to be a discomfort than a harm.

This test is by no means exhaustive, and there are other factors we must consider, such as a person's own belief systems, culture, and ways of being.
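The three-question test can be sketched as a simple decision aid, framing each question so that a "yes" on temporariness, proportionality, and benefit points toward discomfort. The function name and boolean framing are illustrative assumptions introduced here; the test is an aid to judgement, not a substitute for it.

```python
# Illustrative decision aid only: a "yes" to all three questions suggests
# discomfort rather than harm. Real assessments also weigh context, culture,
# and individual participant preferences, which this sketch does not capture.

def likely_discomfort(temporary: bool,
                      proportionate: bool,
                      enables_benefit: bool) -> bool:
    """Return True when the impact looks like discomfort rather than harm."""
    return temporary and proportionate and enables_benefit

# Feeling tired after a long interview: passes all three questions
print(likely_discomfort(True, True, True))   # True -> discomfort

# Lasting distress after discussing trauma: fails the temporariness question
print(likely_discomfort(False, True, False)) # False -> treat as potential harm
```

Any "no" answer pushes the impact toward the harm end of the spectrum, which in turn points the project toward full HREC review rather than a Lower Risk pathway.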

Finally, it is important to note that just because there is a risk of harm doesn’t mean that a project is unjustified. There are some occasions where a risk of harm is justified given the likely benefits that the activity will yield. However, from an ethical review standpoint, such situations need the higher degree of scrutiny afforded by an HREC to ensure that these risks are appropriately mitigated and monitored, and to ensure that harms are minimised.

Summary

Distinguishing between harm and discomfort is essential for ethical research practice, particularly in ensuring participant welfare. While harm denotes more lasting and severe effects, discomfort can sometimes be both temporary and constructive if managed safely and responsibly. By applying a thoughtful framework that considers the nature, proportionality, and necessity of the impact, researchers and evaluators can better assess potential risks and benefits. Ultimately, this approach helps guide appropriate ethical review, safeguards participants, and empowers high quality research and evaluation.