Why you shouldn’t use ChatGPT for your research

ChatGPT is a state-of-the-art large language model that generates human-like text from given prompts. With its vast training data and broad language coverage, ChatGPT has become a valuable tool for many applications; however, its capabilities fall short when it is used for research and academic writing. Researchers should therefore exercise caution before relying on ChatGPT’s outputs in their academic work and papers. In this article we discuss the risks of using ChatGPT in your research.

On the surface, ChatGPT appears well-suited for a wide variety of research applications due to its ability to summarise, explain concepts, answer queries, and generate original content. However, there are risks associated with relying on ChatGPT for research and academic writing.

Risk 1: Lack of domain-specific expertise

While ChatGPT has been trained on a huge corpus of text data, it lacks genuine subject matter expertise. This becomes evident when prompting it in specialised domains outside its core competencies: information can be oversimplified, important nuances overlooked, and key gaps in knowledge left unaddressed. Each project may present unique considerations that demand careful evaluation when using ChatGPT.

Risk 2: Incomplete and potentially inaccurate information

Occasionally, ChatGPT will 'hallucinate' responses that seem plausible but are factually incorrect or unsupported. Without the capability to truly comprehend context and facts, outputs cannot always be taken at face value.

Risk 3: Limited context understanding

Despite advanced natural language capabilities, ChatGPT does not genuinely grasp contextual meaning or the wider implications of the information it provides. Subtle cues and intended semantics are often missed.

Risk 4: Variation based on framing

How queries and prompts are framed can significantly affect the responses ChatGPT generates. Leading questions or insufficient context risk producing less useful information.

Risk 5: Potential for biases

The training data and past human interactions used to develop ChatGPT inevitably carry biases around race, gender, culture, and other attributes. While mitigation steps are taken, these biases may still surface in edge-case responses.

Best Practices for Researchers When Using ChatGPT

1. Verify Information Independently

Researchers should adopt a critical approach and independently verify information generated by ChatGPT. Cross-referencing with reliable sources ensures the accuracy and reliability of the data used in their research.

2. Follow Submission Guidelines Related to AI

As the use of generative AI grows, institutions and journals are increasingly adding AI-related guidelines to their manuscript submission requirements. Researchers are advised to follow the specific formatting and ethical requirements that apply when AI tools are used in a study; this supports transparency, reproducibility, and the ethical use of AI methodologies. Staying informed about these evolving guidelines is therefore crucial for scholars who want to contribute responsibly to academic discourse.

3. Exercise Caution in Interpretation

Interpreting responses from ChatGPT requires caution. Researchers should be aware of potential limitations in the model's understanding and not solely rely on its outputs without careful interpretation.

4. Avoid Over-Reliance on ChatGPT-Generated Content

Researchers should not rely solely on content generated by ChatGPT. Thorough review and integration with information from other sources are essential for robust, comprehensive research.

5. Beware of Bias

Researchers must be proactive in identifying and addressing biases in the outputs of ChatGPT. Incorporating measures to mitigate bias is crucial to ensuring the integrity of research outcomes.

Looking ahead, generative AI will continue to evolve and may address many of these challenges. For now, researchers should apply due diligence: verify and critically evaluate any responses received, rather than citing them directly or basing analysis on them without confirmation from reliable sources.

By leveraging AI tools responsibly and in conjunction with other resources, researchers can make informed decisions, enhancing the quality and reliability of their research projects.
