Artificial Intelligence (AI) tools are increasingly being used to enhance research capabilities and streamline processes. This page is designed to help you learn more about AI in the research context and give you some confidence when exploring these tools.
Artificial intelligence (AI) refers to computer systems that are able to perform tasks that usually require human intelligence.
Artificial narrow intelligence (ANI)
ANI refers to systems that are programmed to perform a single task, using machine learning techniques such as supervised learning and reinforcement learning.
Examples include image recognition in self-driving cars, recommendation systems when shopping online, or text prediction apps.
Artificial general intelligence (AGI)
AGI is a theoretical type of AI that is able to perform any intellectual task in any situation, much like a human.
Machine learning refers to a range of techniques to build systems that have the ability to learn and improve from experience, without being explicitly programmed to do so.
Two common examples of machine learning are supervised learning and reinforcement learning.
Supervised learning
This is where a program is given data that has been labelled. From this, the program starts to recognise patterns, allowing it to predict or classify new information. Examples include image recognition, where a program trained on labelled photos learns to identify, say, cats.
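The idea can be illustrated with a deliberately simple sketch: a one-nearest-neighbour classifier that, given labelled examples, assigns a new item the label of its closest neighbour. The data points and labels here are invented purely for illustration.

```python
# A minimal sketch of supervised learning: a 1-nearest-neighbour classifier.
# The program is given labelled examples and uses them to classify new data.

def predict(labelled_data, new_point):
    """Return the label of the labelled example closest to new_point."""
    def distance(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(labelled_data, key=lambda item: distance(item[0], new_point))
    return closest[1]

# Labelled training data: (features, label) pairs.
training = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.0), "dog"),
    ((5.2, 4.8), "dog"),
]

print(predict(training, (1.1, 1.0)))  # → cat
print(predict(training, (4.9, 5.1)))  # → dog
```

Real image-recognition systems work with millions of labelled examples and far more sophisticated models, but the principle is the same: labelled data in, learned patterns out.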
Reinforcement learning
This involves giving the program feedback (rewards or penalties) each time it performs a task, so that it gradually learns which actions lead to better outcomes.
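A minimal sketch of this feedback loop, with invented reward probabilities for illustration: the program repeatedly chooses between two actions, receives a reward or nothing, and updates its estimate of each action's value until it settles on the better one.

```python
import random

# A minimal sketch of reinforcement learning: an agent chooses between two
# actions and updates its value estimates from reward feedback.
# The reward probabilities below are invented for illustration.

random.seed(0)
rewards = {"A": 0.3, "B": 0.8}   # hidden chance that each action pays off
values = {"A": 0.0, "B": 0.0}    # the agent's learned estimates
counts = {"A": 0, "B": 0}

for step in range(2000):
    # Mostly exploit the best-known action, occasionally explore at random.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < rewards[action] else 0.0
    counts[action] += 1
    # Incremental average: each piece of feedback nudges the estimate
    # toward the action's true payoff rate.
    values[action] += (reward - values[action]) / counts[action]

print(max(values, key=values.get))  # the agent learns that B pays off more
```

This "explore a little, exploit a lot" pattern is the same trade-off that drives far larger reinforcement-learning systems.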
Generative AI is a type of artificial narrow intelligence (ANI) where the program creates new content – either text or images – based on the data that it has been trained on.
A well-known example of generative AI is ChatGPT. ChatGPT has been trained on a massive amount of text from digital resources to recognise patterns in words and sentences. It provides responses to prompts by predicting the next likely word in a given context.
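The core idea of next-word prediction can be shown with a drastically simplified sketch: count which word most often follows each word in a tiny invented training text, then predict accordingly. Real language models use vastly larger training sets and learn context across whole passages, not just the previous word.

```python
from collections import Counter, defaultdict

# A minimal sketch of next-word prediction (a "bigram" model): record which
# word most often follows each word, then predict from those counts.
# The tiny training corpus below is invented for illustration.

corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # → cat  ("cat" follows "the" most often)
print(predict_next("sat"))  # → on
```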
AI tools are increasingly used in research to assist with tasks such as idea development, literature search, data analysis and text editing. While they offer exciting possibilities, it is important to critically assess their purpose, accuracy, and relevance to ensure responsible and effective use.
Look for information provided by the developers about the tool you want to use.
Look for information about how others have used the tool.
A useful expression in computer science is “garbage in, garbage out”. Tools are only as good as the data used to build them. If programs are trained on incomplete, inaccurate or biased data, then the output will also be incomplete, inaccurate or biased. The output is also influenced by the quality, accuracy, and thoroughness of the training that the program receives.
Look for information provided by the developers about the tool you want to use.
It can take a long time to train AI programs and a long time for new datasets to be incorporated into their training.
Does the program indicate when it was last updated and how current its dataset is? Does this impact the relevance of the tool for your requirements?
You may also consider using the ROBOT Test (Reliability, Objective, Bias, Ownership, Type), originally developed by Hervieux and Wheatley of The LibrAIry blog, when evaluating the legitimacy of an AI tool.
Although this is not a comprehensive list, some AI tools with applications in the research context include:
For a list of use cases of AI in the research context, you may be interested in consulting the Use of Artificial Intelligence (AI) tools in research document developed by Universities Australia.
The "CLEAR" framework to construct prompts:
Concise
Remove superfluous elements and prompt with brevity and conciseness. For example: ‘How does cellular respiration work and why is it important’ rather than ‘Can you please provide me with an in-depth breakdown of the process of cellular respiration and a comprehensive elaboration on its significance?’
Logical
Maintain a coherent structure and sequence of ideas within a prompt. For example: ‘List the required steps in conducting a systematic review, beginning with formulating a research question and ending with a write-up of the findings.’
Explicit
Clearly define the desired output when prompting an AI model. For example, rather than prompting ‘What are some AI tools?’ you could instead be more explicit and ask ‘Identify five well-regarded AI tools and explain what their unique strengths are.’
Adaptive
Be open to experimenting with new strategies based on how the AI model responds to the initial prompt. For example, if a prompt such as ‘Describe the history of social media’ provides too much information, consider instead a more specific and adaptive prompt such as ‘Explain the development of social media sites from the late 90s to the early 2010s.’
Reflective
Regularly review and enhance your prompts so that the results are aligned with your goals and requirements.
Source: Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4).
Learn more about developing effective prompts from the following resources:
The use of AI tools in research raises important questions around ethics, research integrity, copyright, and privacy.
AI tools may be trained on sources that have racial, gender and ableist biases. Researchers who incorporate AI-generated content into their research without critical assessment risk perpetuating these underlying biases. Additionally, AI tools may be trained on data created by or about Indigenous people without their permission. AI tools can also generate outputs that imitate Indigenous art styles, which may infringe Indigenous Cultural and Intellectual Property and fail to maintain appropriate cultural sensitivity. Researchers should consult relevant frameworks such as the AIATSIS Code of Ethics and seek guidance when working with Indigenous data or cultural expressions.
While AI tools can be used to generate ideas for research topics, identify relevant sources for research and improve the grammar and readability of existing text, researchers should be wary of using GenAI to autonomously generate significant portions of any academic work. AI tools are trained on vast swathes of data, and the sources they are trained on are not necessarily acknowledged within the tools themselves. Researchers should therefore critically evaluate all AI outputs with the same level of discernment they would apply to any other source. Additionally, as a researcher, you should be able to differentiate between your own contribution and that of AI, and acknowledge where AI has contributed to your work.
To ensure copyright compliance and maintain data privacy, researchers should avoid uploading sensitive and copyright-protected materials to generative AI platforms that use uploaded content for model training.
Uploading copyrighted content without proper clearance may breach copyright law and violate terms of use. Copyright clearance may need to be obtained for each item before use with AI platforms that use uploaded content for model training, including text, documents, datasets, and multimedia. This may involve confirming that the licensing permits the work to be reproduced digitally or obtaining permission from the rights holder. Materials from library databases, course content, and other third-party sources are generally unsuitable for use with AI platforms that claim rights over user-submitted content.
If you're uncertain about the copyright implications of using AI tools in your research, we strongly encourage you to get in touch with the Library. Copyright in the context of AI remains a complex and evolving area, and guidance is best provided on a case-by-case basis.
AI systems can also pose privacy risks. The complex nature of many AI systems means that it is not necessarily transparent how data uploaded into the systems is used, and even the developers of these systems may not be able to articulate how all aspects of the system work. There are also risks of data breaches, with attackers attempting to gain unauthorised access to the datasets AI models are trained on. Before using any tool, we encourage you to take a moment to look over its privacy policy.
In comparison to many GenAI tools freely available online, the University-endorsed Microsoft Copilot with Enterprise Data Protection (Copilot Chat) has a higher level of data security. Furthermore, Copilot Chat ensures that user data is not used for model training. For further details on Copilot Chat, please consult Copilot Chat or review this blog post.
Publisher policies on the use of generative AI in scholarly writing vary and may be subject to change. Before incorporating GenAI tools into their work, researchers should consult their target journal's current editorial policies to verify whether such use is permitted and understand any requirements regarding how use of GenAI should be disclosed.
Authors should also carefully read publishing contracts to see if publishers have agreements with tech companies for using works accepted for publication to train large language models.
Selection of editorial policies on use of AI:
Some grant funders have stipulations regarding the use of GenAI in the development and peer review of grant applications. You are encouraged to familiarise yourself with their requirements before making use of GenAI tools. Be mindful of data privacy and copyright concerns before uploading content into GenAI tools in the grants context.
Both the Australian Research Council (ARC) and the National Health and Medical Research Council (NHMRC) have released policies providing guidance for researchers in relation to the use of AI tools:
ARC: Policy on Use of Generative Artificial Intelligence in the ARC’s grants programs
NHMRC: Policy on Use of Generative Artificial Intelligence in Grant Applications and Peer Review
Please consult the following documents for official Flinders policies on AI in research:
Contact the Library Research Engagement team for support with AI and your research.
Sturt Rd, Bedford Park
South Australia 5042
Ph: 1300 354 633 (Select 3)
Email: library@flinders.edu.au