Source: https://www.graduateacademy.uni-heidelberg.de/promovieren/ai.html
Use of Generative AI in Doctoral Dissertations - Embracing Innovation with Integrity
The rapid development of generative AI tools, such as large language models (LLMs), presents new opportunities for scientific research worldwide. At the same time, the integration of such technologies entails significant risks, particularly concerning the accuracy and reliability of generated content.
Heidelberg University recognizes the transformative potential of generative AI tools in advancing research, for example in data analysis, programming, and academic writing, and therefore encourages their thoughtful, ethical, and legally compliant integration into scholarly work, provided that their use adheres to the highest standards of academic integrity and transparency.
The following guidelines serve as a university-wide framework, establishing baseline expectations for the responsible use of generative AI across all faculties. Individual departments may develop additional, discipline-specific guidelines to address their particular needs and contexts. Doctoral candidates who choose to use AI tools in the preparation of their dissertation are required to observe the following essential principles:
Transparency and Disclosure
- The use of AI tools must be explicitly disclosed, specifying when, where, why, and to what extent they were employed in the preparation of the dissertation.
- This applies to all forms of (substantial) assistance, including text generation, editing, translation, idea development, or programming support.
- The specific tools used (e.g., ChatGPT or GitHub Copilot) and the purpose of their application must be clearly stated.
- It is generally preferable to use AI tools provided by universities that adhere to high standards of data protection, such as YoKI or Academic Cloud.
- Prompts must be anonymized to prevent the disclosure of sensitive data, unless the AI tool runs on university-internal servers, such as YoKI at Heidelberg University.
- AI-generated content must be identified and distinguishable from the author’s own work.
- Disclosures must be integrated into the dissertation's statutory declaration (affidavit), in which the candidate confirms that the work was completed independently and that all resources have been properly acknowledged.
Academic Autonomy and Integrity
- The doctoral researcher must carefully review and verify all content produced or edited with the help of AI tools.
- AI systems do not assess factual accuracy or scientific validity; this responsibility rests entirely with the author.
- AI tools must be used as assistive instruments, not as substitutes for independent intellectual work, critical thinking, or scholarly judgment.
Correct Referencing and Source Validation
- All claims and statements in the dissertation must be supported by verifiable academic sources, cited in accordance with academic conventions.
- AI systems are not considered valid sources or authors and must not be cited as such.
- Candidates must ensure that all references used — including those suggested or generated by AI — are accurate, authentic, and appropriate to the academic context.
- AI-generated references are frequently fabricated or erroneous and must be individually verified.
Compliance with Dissertation Regulations
- The use of AI tools must fully comply with the university’s doctoral regulations, including all policies on plagiarism and academic authorship.
- Academic work must represent the doctoral candidate’s own intellectual contribution. The uncredited use of AI-generated material may constitute a violation of academic integrity.
- Any use of AI must be consistent with Heidelberg University’s guidelines on good scientific practice, authorship, and source attribution.
Authorial Responsibility
- The doctoral researcher bears full responsibility for the content, structure, and scientific integrity of the dissertation, including all material generated or modified with the help of AI tools.
- Any errors, omissions, or violations resulting from the use of AI are the sole responsibility of the author.
- The candidate is accountable for ensuring that the final work meets the standards of accuracy, originality, transparency, and ethical conduct expected in doctoral research.
Useful links on the topic
- Heidelberg University: Artificial Intelligence in Teaching: https://www.heiskills.uni-heidelberg.de/en/node/1891
- AI and ChatGPT in Science and the Humanities – DFG Formulates Guidelines for Dealing with Generative Models for Text and Image Creation: https://www.dfg.de/en/service/press/press-releases/2023/press-release-no-39
- Renowned British publisher regulates the use of ChatGPT (Forschung & Lehre, 03/2023): https://www.forschung-und-lehre.de/zeitfragen/renommierter-britischer-verlag-regelt-umgang-mit-chatgpt-5473
- Legal opinion clarifies the use of ChatGPT at universities (Forschung & Lehre, 03/2023): https://www.forschung-und-lehre.de/recht/rechtsgutachten-klaert-umgang-mit-chatgpt-an-hochschulen-5457
- KI-Verstehen – Deutschlandfunk podcast by Moritz Metz and Carina Schroeder, 10 July 2025: https://www.deutschlandfunk.de/ki-privatsphaere-datenschutz-chatgpt-100.html
- Explore more – AI for everyone: https://www.cam.ac.uk/ai#section-Explore-more-3bIazytIdK
- AI as seen by AI (PDF) 😜
Contact and Further Information
For questions or guidance on integrating AI tools into your academic work, please contact your faculty's graduate office or the university's research integrity office.

This webpage is designed to be adaptable across the various disciplines within Heidelberg University, providing a unified approach to the responsible use of generative AI in academic settings. It has been set up with the help of ChatGPT and was inspired by a comprehensive review of information on the responsible use of AI in the academic context, in particular by LMU Munich and Cambridge University Press.
Responsible: Claudia Falk
Last modified: 22.10.2025