With the advent of AI-generated text, a new question has arisen that, in contexts such as education, can have serious consequences: can you tell whether a text was generated by AI or written by a human?

Until recently, AI detectors were not very reliable, but lately they have become noticeably more accurate.

How do they work?

These detectors are, like the text generators themselves, machine learning models trained to distinguish AI-generated text from human-written text. They rely on signals such as:

  • grammatical complexity,
  • sentence-length uniformity,
  • spelling,
  • and more technical measures such as perplexity.
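Two of the signals above can be illustrated with a short sketch. This is a toy example, not how commercial detectors actually work: it computes sentence-length uniformity with basic statistics, and approximates perplexity with a simple smoothed unigram model (real detectors use large neural language models). The function names and the reference-corpus approach are our own illustrative choices.

```python
import math
import re
from collections import Counter

def sentence_length_stats(text):
    """Mean and standard deviation of sentence lengths in words.
    A very low standard deviation (uniform sentence lengths) is
    one of the signals associated with AI-generated text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return mean, math.sqrt(variance)

def unigram_perplexity(text, reference_corpus):
    """Toy perplexity of `text` under a unigram model estimated
    from `reference_corpus`. Lower perplexity means the text is
    more 'predictable' under the model."""
    counts = Counter(reference_corpus.lower().split())
    total = sum(counts.values())
    vocab = len(counts)
    words = text.lower().split()
    log_prob = 0.0
    for w in words:
        # Laplace (add-one) smoothing so unseen words get non-zero probability
        p = (counts[w] + 1) / (total + vocab + 1)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(words))
```

For example, `sentence_length_stats("One two. One two three four.")` returns a mean of 3.0 words with a standard deviation of 1.0. Actual detectors combine many such signals, learned from large labelled datasets, rather than hand-coded heuristics like these.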

We have been testing detectors such as https://isgen.ai/es and https://quillbot.com/es/detector-de-ia and, to our surprise, they work really well! At least on the examples we have used.

Are they really reliable?

It is true that, as far as we can tell, we cannot completely trust the results. These detection systems are themselves AI models trained on data and, given the inductive nature of that training, can NOT be considered completely reliable.

For a teacher, they could serve as a first approximation when suspecting that a piece of work is not the student's own. Even then, their value is limited: there is always the possibility that the work was done by the student's father, mother, uncle or grandmother, with or without AI, and we are back where we started.

Wouldn’t it be preferable to rethink the tasks we do with our students and move them into more creative and less automatable territory? The risks introduced by these clever AIs can be opportunities to transform education. Let’s take advantage of them!