Tool: AI Transparency Checklist

Incorporating AI ethically into your evaluation and research projects.

The promise of artificial intelligence in research and evaluation is undeniable. AI tools can transcribe interviews in minutes, code thousands of open-ended responses overnight, and generate compelling visualisations from complex datasets. Yet this efficiency comes with a fundamental responsibility: ensuring that participants, clients, and other stakeholders understand when, where, and how AI is being used throughout the research process.

In our work reviewing research applications and supporting evaluators, we've observed a concerning gap. While many are thoughtfully integrating AI into their practice, the mechanisms for documenting and disclosing this use sometimes lag behind, creating issues in the review process. Consent forms may be silent on AI transcription. Survey participants may encounter chatbots without clear identification. Analysis sections in reports may not distinguish between human-led and AI-assisted insights. This isn't necessarily intentional; more often, researchers and evaluators lack a systematic framework for thinking through AI transparency at each stage of the research lifecycle.

That's why we've developed a practical tool to address this gap: the AI Transparency Checklist for Research and Evaluation. This resource supports researchers and evaluators in documenting AI use and ensuring appropriate disclosure to all stakeholders, from research design through to dissemination.

Introduction

Transparency in AI use isn't just good practice—it's an ethical imperative grounded in the core values of the National Statement on Ethical Conduct in Human Research. The principle of respect requires that participation be based on sufficient information and adequate understanding of both the proposed research and the implications of participation in it. When AI is woven into data collection, analysis, or reporting, that information must include clear disclosure of AI's role.

Yet determining what constitutes "sufficient information" about AI use is not straightforward. The landscape is complex and evolving. Consider just a few scenarios that researchers and evaluators now routinely encounter:

  • An AI transcription service processes recorded interviews. Is this simply a more efficient version of human transcription, or does it require specific consent because the audio data is processed by algorithms that might retain or learn from it? What if the service is hosted overseas? To what extent do non-verbal elements of an interview, such as emotional tone, silences, hesitation markers ("umm") and emphasis on certain words, get captured or removed?
  • A chatbot conducts initial screening questions for a survey. Participants may assume they're interacting with a person. At what point must this be disclosed? What happens if the chatbot generates an inappropriate response that causes distress?
  • Large language models assist in coding qualitative data. The AI identifies themes across hundreds of interview transcripts. But how do you explain this to participants in consent materials? How do you ensure the AI hasn't hallucinated patterns that don't exist in the data?
  • Generative AI creates data visualisations for a final report. The images are compelling and accurately represent the data. But if they're AI-generated, do they need to be labelled as such? What if the client's intellectual property was used in the prompts?

These aren't hypothetical edge cases. They represent everyday applications of AI in contemporary research and evaluation practice, and active topics of research that members of our organisation have presented on at international conferences. Each raises questions about transparency, consent, data security, bias, and quality control. There are also no simple right or wrong answers to these questions. The challenge for researchers is navigating these questions systematically, ensuring nothing falls through the cracks.

The challenge of AI transparency in practice

The difficulty of ensuring AI transparency in research and evaluation stems from several factors:

AI is pervasive but often invisible. 

Many researchers and evaluators already use AI tools without fully recognising them as such. Automated transcription feels like a utility. Sentiment analysis seems like a straightforward computational task. Data mining and predictive algorithms are long-established methods for quantitative analysis. But these are all AI applications that process participant data, and they carry specific risks and limitations that should be disclosed.

The research and evaluation lifecycle is complex. 

AI might be used at any stage, from developing survey questions to analysing data to generating report visuals. Each stage has different transparency requirements, different stakeholders who need to be informed, and different ethical considerations. Keeping track of all this across multi-method, multi-stakeholder projects becomes increasingly challenging.

Standards are still emerging. 

While the National Statement provides enduring principles, specific guidance on AI transparency is still developing. The Research Society's AI Guidelines, released in 2025, offer detailed recommendations for the market and social research sectors. The Australian Government's Voluntary AI Safety Standard provides a framework for responsible AI deployment in organisations. But translating these high-level principles into concrete practices for individual projects requires judgment and intentionality.

There's a tension between transparency and accessibility. 

Researchers and evaluators must provide sufficient information without overwhelming participants. A consent form that includes exhaustive technical details about every AI tool risks becoming incomprehensible, and may deter participation by stakeholders who are otherwise willing to take part. But omitting this information entirely fails to respect participants' right to understand what they're consenting to. Finding the right balance is challenging.

These challenges have ethical implications. 

When researchers fail to disclose AI use appropriately, they risk violating the principle of informed consent. Participants cannot make truly voluntary decisions about participation if they don't understand how their data will be handled. This is particularly concerning for populations who may face heightened risks from AI systems that weren't designed with their needs, circumstances, and cultural values in mind. 

Insufficient transparency also undermines research integrity. 

If the role of AI in analysis isn't clearly documented, how can peers evaluate whether conclusions are justified by the data? How can clients distinguish between insights generated from their specific data versus generic patterns the AI learned from its training data? How do we maintain accountability when AI operates as a potentially invisible intermediary between raw data and reported findings?

A systematic approach to AI transparency

The AI Transparency Checklist provides a structured framework for ensuring appropriate disclosure and documentation throughout the research lifecycle. It's organised around six key phases, each with specific considerations for AI use.

Research/evaluation design and planning phase

Transparency begins before any data is collected. At the design stage, researchers and evaluators should identify all intended uses of AI, document the specific tools to be used, and assess the risks associated with each application. This includes disclosing AI use to ethical review bodies and ensuring compliance with institutional or organisational AI policies.

The checklist prompts researchers and evaluators to consider whether there are alternatives to AI use, and to define the extent of human oversight for each AI application. This is particularly important given that the National Statement requires researchers and evaluators to design studies that minimise risks to participants. If AI introduces new risks, whether through potential bias, hallucination, or data security vulnerabilities, these must be identified and mitigated from the outset.

This phase also requires establishing governance arrangements. If client data will be processed by AI tools, contractual arrangements should address data ownership, processing locations, and consent requirements. If the research involves populations who may be at greater risk due to the use of AI, additional safeguards may need to be documented and approved. 

Participant information and consent

Perhaps the most critical aspect of AI transparency is ensuring participants understand how AI will be used before they consent to participate. The checklist provides detailed guidance on what should be disclosed in consent materials.

At a minimum, participants should know where AI will be deployed (e.g., "AI will transcribe your interview and assist in identifying themes across all interviews"). The disclosure should use plain language and explain AI's specific role, not just that "AI may be used." Importantly, it should also acknowledge limitations—that AI transcription might contain errors, that sentiment analysis is probabilistic rather than definitive, that visualisations may be AI-generated. Where possible, the disclosure should also describe the human oversight in place to manage these limitations.

Where personally identifiable information will be processed by AI, explicit consent should be obtained. This is particularly important for audio and video recordings, which contain inherently identifiable information. Participants have a right to know whether this data will be processed overseas, whether it might be used for secondary purposes like AI model training, and how long it will be retained.

The checklist also emphasises participant rights. Participants should understand their right to opt out of AI-mediated interactions, to escalate concerns to human oversight, and to withdraw their data (noting that withdrawal may not always be possible once AI processing has occurred). Providing a link to the organisation's AI policy in consent materials offers transparency about how AI is governed more broadly.

Data collection phase

When AI is used during data collection, whether through chatbots, AI-driven sampling, or AI-generated survey questions, additional transparency measures are required. Participants should be informed at the point of interaction when they're engaging with AI, not only in consent materials provided in advance.

For AI-mediated interactions like chatbots, this means clear identification ("I am an AI assistant") and readily available mechanisms to escalate to human support. The Research Society's AI Guidelines emphasise that participants must never be misled into believing they're engaging with a human when AI is involved. This isn't just about disclosure; it's about respect for participants' autonomy and their ability to make informed decisions about how they engage with research and evaluation.

The checklist also addresses data security considerations that become particularly salient during collection. When AI tools process participant data in real-time, researchers and evaluators must ensure appropriate encryption, access controls, and breach response protocols are in place. If third-party AI platforms are used, they should be vetted for data security compliance before any participant data is collected.

For AI-generated survey content, validation is essential. AI can be verbose and may generate culturally inappropriate or factually inaccurate questions. The checklist prompts researchers to verify that AI-generated content has been reviewed by qualified researchers for appropriateness, accuracy, and cultural sensitivity.

Data analysis phase

The analysis phase presents particularly complex transparency challenges. AI might be used for transcription, thematic coding, sentiment analysis, data visualisation, or generation of synthetic data. Each application has specific risks and requires different transparency and quality control measures.

A fundamental principle is that all AI tools used in analysis must be documented, along with their role and limitations. This documentation should create an audit trail linking AI-generated insights to source data. For example, if an AI claims that participants expressed frustration about a particular issue, researchers must be able to verify this finding in the actual data rather than accepting it as AI-generated truth.
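
To make this concrete, below is a minimal sketch of what such an audit-trail record could look like in Python. The structure, field names, and tool details are illustrative assumptions rather than part of the checklist itself; teams working in spreadsheets or qualitative analysis software can capture the same information there.

```python
# Illustrative sketch only: one way to record an audit trail linking an
# AI-generated insight back to the source material and the human check.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIInsightRecord:
    insight: str                  # the claim as it will appear in reporting
    tool_name: str                # the AI tool that surfaced the claim (placeholder)
    tool_version: str             # version or release date of the tool
    source_references: list[str]  # transcript IDs / line ranges that support the claim
    verified_by: str              # the researcher who checked the claim against the data
    verified_on: date

record = AIInsightRecord(
    insight="Participants described long delays in accessing support services",
    tool_name="LLM-assisted thematic coding tool",   # hypothetical tool
    tool_version="2025-03 release",
    source_references=["Interview_04, lines 112-130", "Interview_11, lines 45-60"],
    verified_by="Lead analyst",
    verified_on=date(2025, 6, 2),
)
print(record.insight, "->", record.source_references)
```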

The checklist emphasises human validation as essential. AI-generated summaries should be checked against source data. Sentiment analysis outputs should be reviewed, especially for sensitive topics where context matters and generic AI models may misinterpret idioms, irony, or cultural expressions. Thematic coding by AI should be validated by human coders who can assess whether the identified themes genuinely reflect the data. Fortunately, existing methods of validation such as inter-rater reliability analysis can be readily extended to these scenarios.
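
As one hedged illustration of how familiar validation methods carry over, the sketch below computes Cohen's kappa between AI-assigned and human-assigned theme codes using scikit-learn; the codes and excerpts are invented for the example, and the appropriate agreement threshold is a matter of judgment for each project.

```python
# Illustrative sketch: inter-rater reliability between AI-assigned and
# human-assigned theme codes, using Cohen's kappa from scikit-learn.
from sklearn.metrics import cohen_kappa_score

# Theme assigned to each of seven interview excerpts (invented data)
ai_codes    = ["access", "cost", "cost", "trust", "access", "trust", "cost"]
human_codes = ["access", "cost", "trust", "trust", "access", "trust", "cost"]

kappa = cohen_kappa_score(ai_codes, human_codes)
print(f"AI vs human coder agreement (Cohen's kappa): {kappa:.2f}")
# Low agreement would prompt closer review of the AI-generated themes
# before they inform any reported findings.
```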

Bias mitigation is another critical consideration in analysis. AI tools should, where possible, be assessed for potential bias in their training data, with known biases documented and addressed. This is particularly important in the Australian context, where AI models trained primarily on North American or European data may not appropriately handle multicultural communities, regional distinctions, or Indigenous perspectives.

The checklist also addresses a specific risk: AI hallucination. Large language models are known to occasionally fabricate information, presenting it confidently as fact. When AI is used to summarise or analyse data, processes must be in place to detect fabricated quotes, invented patterns, or inferred content that doesn't exist in the source data. This requires human oversight by people who are familiar with the data and can identify when AI outputs don't align with reality.
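
One practical check, sketched below under the assumption that transcripts are available as plain text, is to confirm that any quote attributed to a participant can actually be located in a source transcript. The fuzzy-matching threshold is an arbitrary illustration, and a check like this supplements rather than replaces review by people who know the data.

```python
# Illustrative sketch: flag attributed quotes that cannot be found in the
# source transcripts, as one guard against AI-fabricated quotations.
from difflib import SequenceMatcher

def quote_in_transcript(quote: str, transcript: str, threshold: float = 0.9) -> bool:
    """Return True if some span of the transcript closely matches the quote."""
    words = transcript.split()
    span = len(quote.split())
    for i in range(max(1, len(words) - span + 1)):
        window = " ".join(words[i:i + span])
        if SequenceMatcher(None, quote.lower(), window.lower()).ratio() >= threshold:
            return True
    return False

transcript = "We waited months for an appointment and no one told us why it took so long."
print(quote_in_transcript("waited months for an appointment", transcript))  # True
print(quote_in_transcript("the service was excellent", transcript))         # False
```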

Reporting and dissemination phase

Transparency doesn't end when analysis is complete. Reports and other dissemination materials should clearly disclose which aspects were AI-assisted or AI-generated. This is important for research integrity; readers need to understand the provenance of insights to evaluate their credibility.

For AI-generated imagery or visualisations, clear labelling is essential. While these may be compelling and accurate, readers should know they're viewing an AI-generated representation rather than, for example, a photograph or human-created illustration. If visualisations involve any extrapolation or inference beyond what the data directly supports, appropriate disclaimers should be included.
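
Where charts are produced programmatically, one option is to stamp the label onto the figure itself so it travels with the image wherever it is reused. The sketch below uses matplotlib with invented data; the wording of the provenance note is an assumption to be adapted to organisational style.

```python
# Illustrative sketch: stamping a visible provenance note onto a chart.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(["Agree", "Neutral", "Disagree"], [42, 11, 7])   # invented data
ax.set_title("Illustrative survey responses")
fig.text(0.99, 0.01,
         "Chart produced with AI assistance; reviewed by the evaluation team",
         ha="right", va="bottom", fontsize=7, style="italic")
fig.savefig("responses.png", bbox_inches="tight")
```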

The checklist also addresses a subtle but important distinction: reports should differentiate between insights that come from the research data versus insights that may come from the AI's training data. Large language models are trained on vast amounts of text and can answer questions generically from this training rather than from the specific dataset provided. When AI assists in analysis, researchers must ensure that reported findings genuinely emerge from their research rather than being generic AI-generated content.

Authorship and attribution present complex questions when AI is involved. While Australian copyright law generally requires significant human contribution for copyright protection, the extent of human versus AI contribution should be documented. The Iris Ethics AI Policy, like many organisational policies, requires clear acknowledgment of AI assistance in developing content. This isn't about legal requirements alone; it's about intellectual honesty.

Ongoing monitoring and review

AI transparency is not a one-time exercise but an ongoing responsibility throughout the research lifecycle and beyond. The checklist's final section addresses documentation, audit, stakeholder communication, and compliance monitoring.

Records of AI tool usage should be maintained, including, where feasible, logs of model versions and configurations. This enables reproducibility and accountability; if questions arise about findings, researchers should be able to trace exactly what AI tools were used and how they were configured.
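
As a rough illustration of what such a record might contain, the snippet below appends one entry per AI interaction to a simple JSON Lines file; the file name, fields, and tool details are placeholders rather than a prescribed format.

```python
# Illustrative sketch: an append-only log of AI tool usage on a project.
import json
from datetime import datetime, timezone

def log_ai_usage(tool: str, version: str, purpose: str, settings: dict,
                 log_path: str = "ai_usage_log.jsonl") -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "version": version,
        "purpose": purpose,
        "settings": settings,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_usage(
    tool="transcription-service",          # placeholder name
    version="v4.2",                        # placeholder version
    purpose="Transcription of participant interviews 01-12",
    settings={"language": "en-AU", "speaker_diarisation": True},
)
```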

Stakeholder communication mechanisms should ensure that participants can access information about how AI was used in the research they participated in. If concerns or complaints about AI use arise, there should be clear processes for tracking and addressing them. Ethics review bodies should be notified of any significant AI-related issues, particularly if risks materialise that weren't fully anticipated during initial review.

The checklist itself is designed as a living document. As projects evolve, new uses of AI may be identified or risks may emerge that require additional transparency measures. Regular updates to the checklist ensure it remains aligned with the actual conduct of the research.

Adapting the checklist to context

The AI Transparency Checklist is designed to be comprehensive, but not every item will apply to every project. A small qualitative study using AI transcription has different transparency requirements than a large-scale mixed-methods evaluation deploying chatbots for data collection and large language models for analysis.

Researchers and evaluators should use professional judgment in determining which checklist items are relevant to their specific context. Items may be marked as "Not Applicable" when they don't align with the research design. However, this determination should be deliberate and documented, not simply overlooked.

The scale and risk of AI use should inform the level of detail in documentation and disclosure. For low-risk applications, like using AI to create a simple bar chart from quantitative data, minimal disclosure may suffice. For high-risk applications such as using AI to conduct interviews with vulnerable participants, extensive documentation, pilot testing, real-time monitoring, and detailed consent processes are warranted.

The checklist should be viewed as a starting point that can be adapted and extended. Organisations may wish to add items specific to their sector or regulatory context. Individual projects may identify additional transparency measures appropriate to their circumstances. The goal is not rigid compliance with a prescribed list, but thoughtful, systematic attention to transparency throughout the research lifecycle. There are no easy answers and each project will present a different set of considerations.

Integration with existing practice

The AI Transparency Checklist is designed to complement, not replace, existing research and evaluation tools and processes. It sits naturally alongside instruments like our Data Collection Matrix in supporting intentional, ethical research design.

For researchers and evaluators already using structured approaches to research design, the checklist can be integrated into existing workflows. AI transparency considerations can be incorporated into ethics applications, research protocols, and evaluation frameworks. The checklist essentially extends good research and evaluation practice to explicitly address AI use. 

For ethics review bodies, the checklist provides a framework for assessing whether researchers and evaluators have adequately considered AI transparency. Rather than ethics committees needing to independently identify every instance where AI might be used, the checklist enables researchers to systematically document this information, making review more efficient and effective.

For organisations developing their own AI policies, the checklist offers a practical implementation tool. Policies typically articulate principles and high-level requirements; the checklist translates these into concrete actions that researchers can take at each stage of a project.

Looking forward

AI in research and evaluation is not going away. If anything, it will become more pervasive and more sophisticated. Large language models will continue to improve. New AI applications for research and evaluation will emerge. The boundary between AI-assisted and human-led work will become increasingly blurred.

This makes systematic approaches to AI transparency more important, not less. As AI becomes ubiquitous, the temptation will grow to treat it as unremarkable: just another tool in the toolkit that doesn't require special attention. But this would be a mistake.

AI introduces distinctive risks and ethical considerations that demand ongoing vigilance. Transparency isn't just about disclosure for its own sake. It's the mechanism by which we maintain accountability, enable informed consent, preserve research integrity, and ultimately ensure that the efficiency gains AI offers don't come at the cost of ethical standards.

The AI Transparency Checklist is offered in this spirit. It is a tool to support the research and evaluation community in navigating this complex and evolving landscape. We recognise that transparency requirements will continue to develop as AI capabilities expand and as regulatory and professional standards mature. The checklist itself will need to evolve.

We welcome feedback from researchers, evaluators, ethics review bodies, and participants on how the checklist works in practice and how it might be improved. Like all good research tools, it should be subject to iterative refinement based on real-world application.

Getting started

The AI Transparency Checklist is available for download here. 

Conclusion

Artificial intelligence offers tremendous potential to enhance research and evaluation, making it more efficient, more scalable, and in some cases more insightful. But this potential can only be realised ethically when AI use is transparent to all stakeholders.

The AI Transparency Checklist supports this transparency by providing a systematic framework for documenting and disclosing AI use throughout the research lifecycle. From initial design through to dissemination, the checklist prompts consideration of what information needs to be communicated, to whom, and how. AI transparency isn't an additional burden imposed by regulation, but a fundamental expression of respect for research participants and commitment to research integrity.

In an era where participants are rightly demanding that their data be used responsibly, and where trust in institutions depends on transparency and accountability, systematic attention to AI disclosure isn't just good practice. It's an ethical imperative.

 

AI Disclosure:  Initial drafts of the content for this article and tool were prepared using Large Language Models with input from Iris Ethics staff who guided the scope and design. Subsequent revisions and final versions were developed and approved by Iris Ethics staff.