This piece was inspired by Dr Eugene Aidman's presentation at the Australasian Ethics Network 2025 Conference, which highlighted a significant gap in how organisations approach internal surveys.
QA is not exempt from ethical oversight
Most organisations don't consider their HR surveys, pulse surveys, or employee climate assessments to require ethical review. This is a mistake rooted in a common misconception that quality assurance (QA) activities are exempt from ethical oversight.
They're not.
The National Statement on Ethical Conduct in Human Research makes clear that QA activities are exempt from review only when specific conditions are met. The NHMRC's Ethical Considerations in Quality Assurance and Evaluation Activities (2014) sets out these conditions explicitly.
Most employee surveys fail to meet these conditions.
Consider some of the standard triggers in QA processes for ethical review:
- Collecting and analysing data linked to individuals
- Activities that potentially infringe on privacy or professional reputation
- Gathering information beyond routine collection
- Targeted analysis of minority or vulnerable groups separate from the overall analysis
- Non-standard* or innovative protocols
- Comparisons of cohorts
Now consider what happens in a typical organisational survey: employees are asked about their managers, their mental health, their workplace relationships, and their career aspirations. This data is often disaggregated by team, analysed longitudinally, and sometimes even used for strategic or performance management decisions. That's not routine data collection. That's research-adjacent activity with foreseeable risks of discomfort and even harm.
Chapter 4.4 and Power Dynamics
Beyond determining the right review pathway, the National Statement's Chapter 4.4 on "People in dependent or unequal relationships" should be front of mind for anyone designing employee surveys.** The chapter explicitly addresses research and evaluation with people in relationships involving "unequal status or a power imbalance where the researcher has or has had a position of influence or authority over research participants."
Examples given include:
- Employers or supervisors and employees
- Government authorities and service recipients
- Carers and people in residential care
This describes virtually every employee survey scenario, and most customer surveys delivered directly by government agencies and companies.
The guidance is unambiguous. Such relationships:
"...may compromise recruitment and the voluntary nature of participant decisions to provide consent" and "may also lead to an increased risk of exploitation or other harm to participants."
The risks here are not theoretical. They include:
Reidentifiability concerns:
Even with supposedly anonymous data, small team sizes, demographic filters, or verbatim comments can make individuals identifiable. This creates risks of:
- Retribution for negative feedback
- Stigmatisation based on reported mental health status
- Career impacts from perceived disloyalty
Coercion dynamics:
Making surveys "mandatory" (even when this is impractical to enforce) creates an investigatory burden for HR and fundamentally undermines consent. Making them voluntary but tracked creates pressure to participate. The National Statement is explicit at Subsection 2.2.9: "No person should be subject to coercion or pressure in deciding whether to participate."
Disclosure risks:
Honest responses about workplace culture, management failures, or personal wellbeing may expose individuals to harm. But withholding honest responses out of reidentifiability fears is a form of self-censorship that undermines data quality and representativeness, defeating the survey's purpose.
Undue incentivisation:
Prize draws, published participation metrics (whether within an organisation or, in the case of government-wide surveys, in public), or management pressure to achieve response rates can all constitute inappropriate inducements that compromise voluntary participation.
These aren't rare cases. They're risks inherent to the employee survey model.
Best Practice: What Ethical Employee Surveys Should Look Like
Ethical oversight of employee surveys doesn't mean bureaucratic obstruction. It means thinking through risks systematically and implementing appropriate safeguards. Here's what good practice looks like:
1. Functional Separation
Ideally, engage an independent third party to manage the technical infrastructure and survey delivery. This creates distance between the organisation and the data, reducing both actual and perceived risks of reidentification. Having a third party handle the distribution and analysis of data means the organisation never sees individual responses, only aggregated reports that meet minimum threshold requirements for data reporting (such as suppressing results where there are fewer than a certain number of responses).
2. Risk Management Protocols
Establish clear procedures for handling "adverse event" verbatims, such as comments that identify reportable conduct (harassment, discrimination, safety violations). This requires:
- Defined thresholds for what triggers reporting
- Pathways that preserve participant anonymity where possible
- Appropriate management of respondent welfare (such as links to resources or EAP services)
- Clear communication to participants about the limits of confidentiality
3. Data Management Rigour
Minimising risk in these scenarios means having clear and controlled approaches to data management. Survey designers need to consider:
- Anonymisation: What identifiers are collected? Are they necessary? Can they be removed before analysis?
- Linkage: Will responses be linked to HR records, performance data, or previous surveys? What are the reidentification risks and management strategies for this?
- Longitudinal analysis: Does tracking individuals over time serve the organisational purpose or just create privacy risks?
- Data custodianship: Who has access? For how long? What happens when staff leave?
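As a minimal illustration of the anonymisation point above, a de-identification pass can strip direct identifiers before responses ever reach analysts. The field names here ("employee_id", "email", and so on) are hypothetical, not drawn from any particular survey platform:

```python
# Minimal sketch: remove direct identifiers from survey responses
# before analysis. Field names are illustrative assumptions only.

DIRECT_IDENTIFIERS = {"employee_id", "email", "name", "ip_address"}

def deidentify(responses):
    """Return copies of responses with direct identifiers removed."""
    return [
        {k: v for k, v in r.items() if k not in DIRECT_IDENTIFIERS}
        for r in responses
    ]

raw = [
    {"employee_id": "E123", "email": "a@example.com",
     "team": "Finance", "wellbeing_score": 4},
]
clean = deidentify(raw)
print(clean)  # identifiers removed; analytic fields retained
```

Note that removing direct identifiers is only the first step: quasi-identifiers such as team, tenure, or role can still reidentify individuals in small groups, which is why aggregation thresholds are also needed.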
4. Aggregation and Monitoring Standards
Set and enforce minimum cell sizes for disaggregated reporting. A common standard is to suppress a group's data where the group has five or fewer respondents. There's also the question of monitoring response rates: consider the purpose of gathering this information and how it is then used. For example, dashboards showing real-time response rates by department can cross the line from QA into surveillance, especially where teams or groups with low response rates are singled out and pressured into participation.
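The suppression rule above can be sketched in a few lines. This is an illustrative implementation of the "five or fewer" threshold, not a prescription; the threshold and grouping field are assumptions a survey designer would set for their own context:

```python
# Sketch: aggregate respondent counts per group, suppressing small cells
# so that small teams cannot be singled out in reporting.
from collections import Counter

SUPPRESSION_THRESHOLD = 5  # groups with five or fewer respondents are suppressed

def aggregate_with_suppression(responses, group_key):
    """Count respondents per group, reporting None for suppressed cells."""
    counts = Counter(r[group_key] for r in responses)
    return {
        group: (n if n > SUPPRESSION_THRESHOLD else None)
        for group, n in counts.items()
    }

responses = [{"team": "Finance"}] * 8 + [{"team": "Legal"}] * 3
print(aggregate_with_suppression(responses, "team"))
# Finance (8 respondents) is reported; Legal (3 respondents) is suppressed
```

The same logic extends to suppressing means or proportions, and a stricter design would also suppress complementary cells so a suppressed group's value cannot be inferred by subtraction from the total.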
5. Distress Management
Asking about workplace culture, mental health (even with validated instruments like the K10 or PWI), or job satisfaction can potentially trigger distress. This is especially the case where there has been significant upheaval in an organisation such as a restructure. Survey designers must build in:
- Prominent reminders of EAP and wellbeing services
- Anonymous reporting channels separate from the survey itself
- Third-party contact request mechanisms that don't link the request to survey responses
6. Methodological Alternatives
Whole-of-organisation surveys may not be the best way of checking the pulse of an organisation. Consider less intrusive methods for some metrics. One example is satisfaction buttons in an unsurveilled part of the office, such as workplace bathrooms. These can provide rapid insights with greater anonymity, at the cost of lower precision. This approach is not suitable for everything, but by thinking about the research goal, you can identify the most appropriate method that also protects respondents.
7. Cultural Alignment
Thinking about the research goal also includes thinking about organisational strategy and culture. One approach to delivering high quality research on organisational culture is to link survey questions explicitly to organisational values. For example, we can create a question using Iris Ethics' value of Innovation:
My organisation provides opportunities for me to innovate in how I do my work. [Strongly Disagree; Disagree; Agree; Strongly Agree; Don't Know; Prefer not to answer]
This approach provides legitimacy for the inquiry and context for interpreting results. But designers need to be honest about biases and how they influence responses. For example, self-perception differs from perception of others (sometimes it's appropriate to try to measure both), and most surveys capture stated attitudes rather than behaviour.
Government Surveys: Additional Considerations
Government employee surveys such as the Australian Public Service State of the Service report and associated census carry additional complexity:
- Results are often published, creating heightened reidentification risks for small agencies or specialised roles.
- Results may be used in parliamentary scrutiny, media reporting, or machinery-of-government debates. This amplifies consequences of participation.
- Comparative reporting between agencies creates pressure on agency heads to achieve favourable results, potentially translating into pressure on staff.
These factors don't preclude such surveys, but their presence demands more rigorous ethical review and stronger safeguards.
Our Approach: Proportionate, Ongoing Review
At Iris Ethics, we treat employee surveys, even those conducted regularly, as requiring ethical review. But we've designed a process that's proportionate to risk, operationally practical, and minimises cost.
For surveys administered on a regular cadence (such as monthly pulse surveys or annual climate surveys), we can provide approval on an ongoing basis through our Lower Risk pathway. We enable this through three key activities:
- Initial review: We provide a comprehensive assessment of methodology, consent processes, data management, and risk mitigation strategies to ensure risks of discomfort and harm are effectively mitigated, before providing standing approval for multiple rounds of survey delivery.
- Minor amendments: Over time, surveys may need modification as new strategic questions arise. Instead of requiring a new review process, we treat updates to survey instruments (such as additional question sets and modified scales) as minor amendments with streamlined review and no additional cost.
- Annual updates: To comply with NHMRC requirements, researchers provide brief annual reports confirming ongoing compliance with approved protocols.
This approach recognises that organisational surveys provide strategic inputs, but balances the need to both accurately reflect reality and avoid negatively impacting culture through the survey process itself. That requires ethical oversight, but not obstruction.
Summary
Employee surveys sit at the intersection of quality assurance and research. They use research methods, generate data about people in unequal power relationships, and can carry foreseeable risks of psychological and social harm.
The fact that many organisations wouldn't think of these as needing ethical review reflects a blind spot, not a justified practice. The National Statement and the associated "Ethical considerations in quality assurance and evaluation activities" document provide clear guidance: activities with the characteristics described here almost always require some form of ethical review.
For evaluators, social researchers, and market researchers conducting this work: you have both professional and ethical obligations to ensure appropriate oversight. That means applying the principles of the National Statement to protect the people your surveys seek to understand.
*Unfortunately there is no agreed definition of "standard" protocols; however, we interpret this to mean validated instruments such as the Personal Wellbeing Index, or protocols/questions that are well established in relevant industries with standardised language, such as the Net Promoter Score in market research. In these cases, the language used is consistent between uses, and ethical risks have either been characterised and mitigated in the design of the instrument, or the instrument's language is such that risks of discomfort or harm are not foreseeable.
**We'll be preparing a separate deep dive on this chapter of the National Statement in the near future.
AI Disclosure: Initial drafts of the content for this article were prepared using Large Language Models with input from Iris Ethics staff who guided the scope and design. Subsequent revisions and final versions were developed and approved by Iris Ethics staff.