Our AI policy
At Iris Ethics, we want to ensure that we are capable of responding to changes and trends in the evaluation, social research, and market research sectors. We also want to ensure that our services are delivered ethically as well as efficiently to support stakeholders. The growth of artificial intelligence (AI) tools is one such change in the sector, and we want to be proactive in supporting our clients and their stakeholders to use such tools ethically and responsibly. This also means setting an example in how we assess and apply such tools in our own practice. This policy sets out how we do this, and is designed to reflect national and international standards.
This policy will be refined and enhanced as new approaches and policies are implemented. It is also designed to be "tool agnostic": the policies and procedures set out here should apply to any AI tool or approach.
What is AI?
Artificial intelligence (AI) tools are software applications that have been trained on data to produce outputs based on new inputs (text, numbers, audio, images etc.). Some of these tools are designed to generate natural language responses to a given prompt or question. Large language models (LLMs), such as those underlying ChatGPT, Claude and Microsoft Copilot, can interpret natural language and context, which allows them to generate new outputs of their own. Large reasoning models (LRMs), such as DeepSeek-R1, extend LLMs with feedback loops that mimic logical thought processes. Small language models (SLMs) are smaller, compressed versions of LLMs and LRMs that can run locally on a desktop computer or handheld device. Agents are applications of LLMs, SLMs, and LRMs that can take autonomous actions in response to inputs, such as running code or posting information to a website.
However, these are only four of the many types of AI in common use. Other AI tools include:
- image and video generators/editors
- automated transcription, translation and captioning
- recommendation algorithms
- data matching and cleaning systems
- automated compliance and classification systems
Iris Ethics' AI Position
To guide our use of AI within Iris Ethics, we have developed a position statement that aligns with our vision, mission, and principles:
"We assess and where appropriate apply AI in supporting human-led delivery of robust and ethical oversight for evaluators, social researchers, and market researchers."
This position reinforces that our ethical services are human-led, and that any use of AI must be considered in light of the value it adds for our clients and their stakeholders, and applied responsibly and ethically.
Our AI Principles
To support this position, we have overarching principles that guide the assessment and application of AI in our practices:
- We are transparent in our responsible usage of AI. That includes publishing, updating, and communicating this policy to stakeholders.
- We uphold the rights to privacy and confidentiality of client and project information provided as part of applications and amendments. Specifically:
  - We do not use client-confidential information (including the contents of applications or submissions) with tools that use information to train AI models or that retain or store any data outside of Australia (regardless of encryption).
- The use of any AI tool must be approved by the Managing Director of Iris Ethics, in accordance with this policy and the related procedures for assessing, auditing, and monitoring tool usage.
- We do not use AI tools as a substitute for human decision-making in ethical review processes. This position is consistent with the NHMRC's Policy on Use of Generative Artificial Intelligence in Grant Applications and Peer Review 2023, which forbids the use of generative AI (including but not limited to LLMs, LRMs, and SLMs) to assist peer reviewers in the assessment of applications.
Alignment with National and International Standards
Our AI policy has been designed with reference to the Australian Government’s Voluntary AI Safety Standard, ISO/IEC 42001:2023, and the Australian Privacy Principles. It also ensures that actions are consistent with relevant professional codes, including the Australian Evaluation Society Code of Ethical Conduct, the Research Society Code of Professional Practice, and the NHMRC National Statement on Ethical Conduct in Human Research.
Specifically, our AI policy is built around compliance with the 10 guardrails of the Voluntary AI Safety Standard:
| Guardrail | How we comply with the guardrail |
| --- | --- |
| Establish, implement, and publish an accountability process including governance, internal capability and a strategy for regulatory compliance. | This policy is the publication of our accountability process in accordance with the standard. Governance of the policy comes from the Managing Director, Secretariat and Chair, advised by members of the HREC. Our internal capability includes the Secretariat's expertise on AI policy, supported by the development of training for HREC members on the responsible and ethical use of AI in research and evaluation. |
| Establish and implement a risk management process to identify and mitigate risks. | Risks relating to the adoption and application of AI tools within the firm are incorporated into our whole-of-company risk register, which also sets out the process for identifying and mitigating risks and for tracking risk events and responses. This ensures that AI risks are not overlooked when considering broader organisational risks. |
| Protect AI systems, and implement data governance measures to manage data quality and provenance. | Our assessment and adoption of any AI tool are documented and governed as part of our IT and data systems governance processes. All IT systems require multi-factor authentication for access, and access to information on applications is strictly limited to the Secretariat and the members of the panel for that application. Our algorithmic systems for data quality management are applied only to administrative information (not the content of applications) and require human validation before actions such as deduplication or deletion are taken. |
| Test AI models and systems to evaluate model performance and monitor the system once deployed. | All AI applications are subject to a process of testing and validation against human-only procedures to confirm performance and accuracy. Deployed systems are subject to ongoing audit to ensure accuracy and completeness. |
| Enable human control or intervention in an AI system to achieve meaningful human oversight. | All processes implementing AI include a human oversight role to verify and validate outputs from AI systems before those outputs are integrated. |
| Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content. | This policy is designed to inform all end-users and stakeholders of Iris Ethics' position and activities relating to the use of AI within the company. Interactions between stakeholders and AI, and any AI-generated content provided by Iris Ethics, are clearly labelled as such, along with warnings that content may not be correct and that human validation should be applied to any outputs. |
| Establish processes for people impacted by AI systems to challenge use or outcomes. | Our complaints policy makes specific reference to the impacts of AI systems and incorporates mechanisms for stakeholders to challenge the use or outcomes of AI, both within Iris Ethics and within activities approved by its HREC. |
| Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks. | This policy outlines our use of AI and provides information to stakeholders to inform their own risk management in relation to AI in their supply chain. We also stay informed about the development and deployment of AI tools by the companies in our supply chain to assess implications for our operations. |
| Keep and maintain records to allow third parties to assess compliance with guardrails. | We maintain an audit trail for all AI systems, recording the systems used along with their inputs, outputs, and instructions, so that compliance with the guardrails can be effectively assessed. |
| Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness. | Iris Ethics undertakes proactive engagement with stakeholders on AI usage and transparency, and is developing strategic guidance on the application of AI in evaluation, social research, and market research contexts. |
AI Policy FAQ
We've provided a few answers to questions you may have about our use of AI within Iris Ethics or by applicants. If there's a question that isn't answered, please feel free to contact us.
What is your AI policy?
This document sets out our policy for the use of AI. In short, we do not use AI as a substitute for human decision-making in ethical review processes. Where we do use AI, there is a process for testing and approving use cases, and record keeping to ensure that outputs are correct and verified by a human. Moreover, any use of AI must be of benefit to stakeholders as well as the company.
How do you protect the privacy and confidentiality of our information?
All our systems are designed to ensure privacy and confidentiality of data. This includes:
- encryption of data at rest and in transit
- multi-factor authentication for all systems
- limiting access to data to the minimum number of people required
- only using AI systems that do not use user data to train models
- using AI systems that can be integrated and/or hosted securely on our existing software platforms (Odoo and Microsoft)
Do you use AI to make decisions about applications?
No. Regardless of the AI used, our policy is to have human verification of outputs to ensure accuracy and quality. Moreover, the use of generative AI to make decisions on the ethical acceptability of applications is explicitly forbidden.
What are the ethical risks of using AI?
While AI has the potential to deliver increases in efficiency and productivity, it raises significant ethical risks around:
- Accuracy and repeatability of outputs of AI systems (e.g. "hallucination" and representation of falsehoods as factual information)
- Transparency in the decision-making process of AI systems (e.g. understanding the process by which an automated decision has been reached or an output has been developed)
- Misuse of confidential and/or copyrighted information and intellectual property in the training and output of AI systems (e.g. training AI on copyrighted materials, or use of personal information to train new AI systems)
- Inherent biases in the underlying training data that may negatively impact marginalised communities or produce discriminatory outputs (e.g. gender biases, culturally insensitive practices)
These ethical risks must be understood and managed effectively both within Iris Ethics and within the projects it reviews.
How does your HREC consider the use of AI in projects?
Our HREC includes members who have direct experience in the assessment, deployment and management of AI systems in evaluation and research contexts. This enables them to make an informed contribution to the review process around the risks presented by the use of AI in a project. Each project is different, and the ethical use of AI in one context does not mean that it is suitable or ethical in a different context. The role of our HREC is to help you navigate that context and identify whether the use of AI in your project is ethical.
How do you use AI in your operations?
There is a range of use cases for AI in our operations. These include:
- Transcription of interviews and meetings (both live transcription and of pre-recorded audio)
- Provision of closed-captioning in meetings and workshops
- Automation of non-review business processes
- Development of draft content for resources or communications
- Notetaking, summarisation and action item generation from meeting transcripts
In all cases, Iris Ethics employs human verification of the outputs of AI tools to ensure that they accurately represent the information provided. Moreover, tools used are limited to those approved as being secure and suitable for the task.
How might applicants use AI?
Applicants may also use AI in other ways (in addition to those in which Iris Ethics uses it), depending on the project:
- Logic model, stakeholder/journey mapping or theory of change generation
- Risk assessment and analysis for proposed activities
- Review and summarisation of research literature
- Identification and generation of relevant content for draft research and evaluation plans
- Content generation and/or summarisation for draft project outputs (data collection tools, consent forms, reports)
- Synthetic data generation
- AI-led interviewing (e.g. using chatbots)
- Quantitative analysis programming code generation (e.g. in R, Python or SPSS)
- Qualitative data analysis (thematic coding of text)
- Reviewing and proofing of deliverables
This list is not exhaustive and will differ depending on the circumstances. In all cases, Iris Ethics considers the use of these tools in context and against the NHMRC National Statement on Ethical Conduct in Human Research to ensure that their use does not present unacceptable risks to stakeholders and is consistent with the principles of research merit and integrity, justice, beneficence, and respect. We also, where required, consider applicants' own AI policies to ensure these requirements are met.
What should we do if we have concerns about the use of AI?
Our complaints policy outlines the process for contacting us if you have concerns about the use of AI in a project or about how we use AI as a company.
Contact Us
For any inquiries, requests, or complaints regarding this AI Policy, please contact us at:
Email: info@irisethics.com.au
Updates to this Policy
We may update this AI Policy from time to time to reflect changes in our practices or regulatory requirements, as well as in response to feedback from stakeholders. The latest version will be available on our website. We encourage individuals to review this policy periodically to stay informed about how we apply AI in our practice.
Policy last updated: 24 June 2025