Generative AI in Education at Ghent University

There is no escaping it nowadays: generative AI tools such as ChatGPT, Copilot, Consensus and Perplexity are ubiquitous, both within and outside the university context. But what are generative AI tools? What can they do, and what can't they do (yet)? And are you allowed to use them at Ghent University? This page answers those questions.

We do our best to keep this page up to date, since artificial intelligence (AI) is evolving at a fast pace.

What is generative AI?

Generative AI refers to AI systems that are capable of creating new, original content based on the patterns and structures they have learnt from existing data. The tools are trained on vast amounts of data, obtained mainly from the internet.

Using advanced algorithms and neural networks, they can generate text, images, audio, video and computer code that can rival human-generated content. The generated output is statistically close to the data the system was trained on, but offers unique, customized responses to the specific inputs or instructions given to the tool.

Do you want to know more about what generative AI is and how it works? Then take a look at module 1 of the Ufora learning path: Generative AI for Students – From Concepts to Creation.

What kind of generative AI systems exist?

ChatGPT is still the best-known example of a generative AI system. Similar chatbots appear every day and are being integrated into various applications at a breakneck pace. To name a few: Copilot (from Microsoft), Gemini (from Google) and Claude. There is also a huge number of tools that help you with scientific research, academic writing, etc. Each tool or application has different strengths and limitations. So here too, the message is: stay critical.

What are the risks?

Using generative AI is not without risks. Always keep the limitations and potential ethical implications in mind while using the tools.

  • The creators are often not very transparent about how they handle the data used to train the systems. More specifically: what kind of information was fed into the systems, and where did it come from? Combined with the absence of sources, blindly reproducing generated material may amount to intellectual theft. The creators also do not explicitly state where the information you enter may end up, so we advise you never to enter privacy-sensitive information! Doing so may even be punishable under the General Data Protection Regulation (GDPR). The same applies to syllabi, articles, etc.: without the author's permission, you would be giving those texts away for free to the creators of the tools.
  • The information you get as output is not always correct. The answers can be unreliable, because the data set on which they are based is limited and may contain inherent biases. If there is insufficient or no data to answer a specific question, you will still get a credible-sounding answer that is not necessarily true. This is called a "hallucination" of the system, and it makes it harder to tell correct output from incorrect output. Moreover, texts with errors, fake images, etc. can take on a life of their own, which sometimes contributes to fake news.
  • The answers may contain bias, because the system is trained on potentially biased source material and because that source material is not always representative of information from around the world. For example, the answers will mainly be based on data from Western countries.
  • Typically, you cannot ask ethically problematic questions because of built-in safety mechanisms. However, these can easily be circumvented by rephrasing the instructions.
  • To weed out biases and potentially unethical responses from their systems, the companies behind the tools have enlisted people worldwide (as moderators) to provide feedback on the tools' responses. The working conditions of these workers have come under scrutiny, even while the companies continue to denounce these media reports.
  • Another ethical implication of using a generative AI tool is its impact on the research integrity of your project. Many tools do not provide sources for the responses they generate, so you must verify the original authorship yourself. It is your responsibility to ensure the quality of your research!
  • It may seem as if generative AI is on a mission to eliminate inequality: since all students now have access to the tools, hiring tutors to successfully complete certain writing tasks is no longer a privilege of wealthier students. However, the creators of the tools offer more in the paid versions of their products. These paid versions perform much better than the free versions, which may in fact exacerbate existing inequalities.
  • The ecological footprint of using these tools should not be underestimated. Developing and using the tools requires enormous computing power. The data centers where the tools are trained and run consume vast amounts of electricity and water (to cool the chips).
  • An additional risk is the danger of anthropomorphism: it may seem as if the computer speaks and thinks like a human, which can cause us to place more trust in the systems than is good for our well-being. In addition, there is a danger that this tendency, combined with excessive interaction with these systems, may result in a loss of human connection. It is important to understand that the programmes have learned certain patterns of reasoning from texts, but lack human emotional intelligence and have limited reasoning abilities.

Are you allowed to use generative AI?

From the 2024-2025 academic year, the following guidelines will apply to assignments you complete at home:

  • Responsible use of generative AI tools is permitted.
  • For other (writing) assignments, responsible use is even encouraged, in preparation for the master's thesis.
Please note: an individual lecturer can still prohibit its use for a specific course, in order to check whether you have really mastered certain basic competencies. (See also: Why master certain competencies when a generative AI tool can be used for it in the future?) For more information, consult your course sheet or ask your lecturer.

The word "responsible" is crucial here. The risks above show why you should be careful with the tools, especially in terms of privacy, reliability and bias. Your study programme will give you guidance on such responsible use.

Among other things, you will have to demonstrate that you use the tools responsibly during the process. Your lecturers will, more than before, question that process, also with regard to the acquisition of specific competencies. Just think of finding sources, summarizing, etc.: how did you go about it? Why is this a good summary? And so on. This allows lecturers to assess whether you have acquired certain competencies yourself. Self-reflection plays a critical role here: you will have to keep track of the process and make your use of the tools visible, for example by means of a verbal explanation, interim (peer) feedback, surveys, etc.

What are the possibilities of using generative AI?

Of course, the tools can also assist you in several ways. For example, you can ask for extra clarification and examples for a tough topic, ask for feedback on drafts, search for scientific articles for your research, etc. Need inspiration? Check out the Ufora learning path: Generative AI – From Concepts to Creation.

Why master certain competencies when a generative AI tool can be used for it in the future?

Certain competencies, such as independently writing texts in correct language, may seem less important now. However, these competencies are part of the learning outcomes of many programmes.

For example, you cannot obtain a law degree without being able to formulate your own legal argumentation as a solution to a complex legal issue, a diploma from a language programme without being able to write lucid texts yourself, or a degree in Computer Science without being able to program.

Moreover, strong writing skills require more than what a generative AI tool can currently offer. Competencies such as critical thinking, effective communication and creativity remain necessary to assess and adjust the quality of the generated texts.

How can you sharpen your AI literacy?

Not everyone is proficient yet in working with these tools. Do you feel the need to learn more about their use? Would you like to know more about generative AI in general?

Enrol in the Ufora learning path: Generative AI – From Concepts to Creation