Discover Turnitin's AI Writing Detection Feature

Table of Contents

  1. Educators' Use of Turnitin's AI Writing Detection Feature
  2. Findings from the First Six Weeks of Use
    1. Percentage of Submissions Containing AI Written Text
    2. Tweaks to the Platform's AI Detection Feature
    3. Minimum Word Count Raised
    4. Changes to Detector Analysis of Opening and Closing Sentences
  3. Educator Feedback and Concerns
    1. False Positives for AI Writing Detection
    2. Confusion about How to Interpret Turnitin Scores or AI Writing Metrics
  4. Handling False Positives
  5. Conclusion
  6. References

Educators' Use of Turnitin's AI Writing Detection Feature

Turnitin, a leading provider of academic integrity solutions, recently launched an AI writing detection feature that allows educators to identify submissions containing AI-generated text. In the first six weeks of use, the platform processed 38.5 million submissions and found that 3.5 percent of them contained more than 80 percent AI-written text. Additionally, just under one-tenth of submissions contained at least 20 percent AI-written text.

In a new blog post, Turnitin's Chief Product Officer, Annie Chechitelli, explains the findings and details a few tweaks made to the platform's AI detection feature in response to feedback from educators who have been using it since its launch in early April. Among the updates, an asterisk will now appear next to the indicator score, the percentage of a submission considered to be AI-written text, whenever that score is below 20 percent. Because the analysis of submissions so far shows that false positives are more frequent when the detector finds less than 20 percent of a document to be AI-written, the asterisk signals that the score is less reliable, according to the blog post.

Findings from the First Six Weeks of Use

Percentage of Submissions Containing AI Written Text

The findings from the first six weeks of educator use are significant. Of the 38.5 million submissions the platform processed, 3.5 percent contained more than 80 percent AI-written text, and just under one-tenth contained at least 20 percent AI-written text.
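
To give a sense of scale, the short sketch below converts those percentages into approximate submission counts. The counts are estimates derived only from the figures quoted above ("just under one-tenth" is approximated here as 10 percent); they are not numbers published by Turnitin.

```python
# Rough scale of the reported findings. Percentages come from the article;
# the resulting counts are back-of-the-envelope estimates.
total_submissions = 38_500_000

over_80_pct_ai = total_submissions * 0.035    # more than 80% AI-written text
at_least_20_pct_ai = total_submissions * 0.10  # "just under one-tenth" (approx.)

print(f"Submissions with >80% AI-written text:  ~{over_80_pct_ai:,.0f}")
print(f"Submissions with >=20% AI-written text: ~{at_least_20_pct_ai:,.0f}")
```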

Tweaks to the Platform's AI Detection Feature

In response to feedback from educators using the feature since its early-April launch, Turnitin has made a few adjustments to how results are reported. Most notably, an asterisk now appears next to the indicator score, the percentage of a submission considered to be AI-written text, whenever that score is below 20 percent. Because false positives are more frequent when the detector finds less than 20 percent of a document to be AI-written, the asterisk flags the score as less reliable, according to the blog post.
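
To make the reporting rule concrete, here is a minimal, hypothetical sketch of how a score below 20 percent might be marked with an asterisk. The function name, signature, and formatting are assumptions for illustration; this is not Turnitin's implementation.

```python
def format_ai_score(ai_percent: float, threshold: float = 20.0) -> str:
    """Format a document-level AI-writing percentage for display.

    Scores below the threshold get an asterisk because, per the blog post,
    false positives are more common in that range. Illustrative sketch only.
    """
    score = f"{ai_percent:.0f}%"
    return f"{score}*" if ai_percent < threshold else score

print(format_ai_score(12.0))  # "12%*" -> marked as less reliable
print(format_ai_score(45.0))  # "45%"
```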

Minimum Word Count Raised

The minimum number of words required for the AI detector to run has been raised from 150 to 300, because the detector is more accurate on longer submissions, Chechitelli said. "Results show that our accuracy increases with a little more text, and our goal is to focus on long-form writing. We may adjust this minimum word requirement over time based on the continuous evaluation of our model."
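
A gating check along these lines would simply skip detection for submissions shorter than the new 300-word minimum. The function and the whitespace-based word count are assumptions for illustration; the blog post does not describe how Turnitin counts words.

```python
MIN_WORDS = 300  # raised from 150, per the blog post

def eligible_for_ai_detection(text: str, min_words: int = MIN_WORDS) -> bool:
    """Return True if a submission is long enough to be analyzed.

    Splitting on whitespace is a simplification of real word counting;
    illustrative sketch only, not Turnitin's code.
    """
    return len(text.split()) >= min_words

print(eligible_for_ai_detection("far too short to analyze"))  # False
```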

Changes to Detector Analysis of Opening and Closing Sentences

Turnitin has observed a higher incidence of false positives in the first few and last few sentences of a document, Chechitelli said: "Many times these sentences are the introduction or conclusion in a document. As a result, we have changed how we aggregate these specific sentences for detection to reduce false positives."
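
The post does not say exactly how the aggregation changed. One plausible reading is that sentence-level predictions near the start and end of a document now count for less when rolled up into the document score. The sketch below is purely illustrative; the weighting scheme and parameters are assumptions, not Turnitin's method.

```python
def document_ai_fraction(sentence_scores, edge_sentences=2, edge_weight=0.5):
    """Aggregate per-sentence AI probabilities into a document-level fraction,
    down-weighting the first and last few sentences (often introductions and
    conclusions), where false positives were reported to be more common.

    Illustrative only: the actual aggregation Turnitin uses is not public.
    """
    n = len(sentence_scores)
    weights = [
        edge_weight if i < edge_sentences or i >= n - edge_sentences else 1.0
        for i in range(n)
    ]
    return sum(w * s for w, s in zip(weights, sentence_scores)) / sum(weights)

# High scores only on the opening and closing sentences contribute less
# to the document-level fraction than scores in the body.
print(document_ai_fraction([0.9, 0.1, 0.1, 0.1, 0.9]))
```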

Educator Feedback and Concerns

False Positives for AI Writing Detection

In their feedback, instructors' and administrators' main concern is false positives, both for AI writing detection in general and in specific cases flagged by Turnitin's detector, according to the blog post. Since the feature's release, Turnitin has seen that real-world use is yielding different results from the lab tests performed during development, Chechitelli said.

Confusion about How to Interpret Turnitin Scores or AI Writing Metrics

Educator feedback from the detector's first six weeks also revealed confusion about how to interpret Turnitin's scores and AI writing metrics, Chechitelli said. She explained that the detector calculates two different statistics: an AI writing metric at the document level and one at the sentence level. "As a result of educator feedback, we've updated how we discuss false positive rates for documents and false positive rates for sentences," she said.
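
As a rough illustration of the distinction, a sentence-level metric describes individual sentences, while a document-level metric summarizes how much of the whole document those sentences account for. The classification rule and weighting below are assumptions for illustration, not Turnitin's definitions.

```python
def document_level_percent(sentence_flags, sentence_word_counts):
    """Document-level metric: percent of the document's words that fall in
    sentences flagged as AI-written. Sentence-level metrics, by contrast,
    describe each sentence on its own. Illustrative sketch only.
    """
    flagged = sum(w for f, w in zip(sentence_flags, sentence_word_counts) if f)
    return 100.0 * flagged / sum(sentence_word_counts)

flags = [True, False, False, True]  # hypothetical per-sentence decisions
words = [20, 35, 40, 25]            # words in each sentence
print(f"{document_level_percent(flags, words):.0f}% of the document flagged")
```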

Handling False Positives

For documents the detector finds to contain over 20 percent AI writing, Turnitin's document false positive rate is less than one percent, a figure validated again by a new analysis of 800,000 pre-GPT writing samples. That translates into fewer than one human-written document out of 100 being incorrectly flagged as AI-written, Chechitelli said. "While one percent is small, behind each false positive instance is a real student who may have put real effort into their original work," she said. "We cannot mitigate the risk of false positives completely given the nature of AI writing and analysis, so it is important that educators use the AI score to start a meaningful and impactful dialogue with their students in such instances."
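
To put the quoted rate in concrete terms, the back-of-the-envelope calculation below uses only the less-than-one-percent rate and the 800,000-sample validation set mentioned above; the resulting count is an upper-bound estimate, not a figure from the post.

```python
false_positive_rate = 0.01   # "less than one percent" taken as an upper bound
validation_samples = 800_000  # pre-GPT, human-written writing samples

# Upper bound on how many genuinely human-written documents in a corpus of
# that size could be incorrectly flagged at a 1% document false positive rate.
flagged_upper_bound = int(validation_samples * false_positive_rate)
print(f"Up to ~{flagged_upper_bound:,} of {validation_samples:,} "
      "human-written documents could be incorrectly flagged.")
```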

Turnitin has published a guide for educators on how to handle false positives on its website.

Conclusion

Turnitin's new AI writing detection feature yielded significant findings in its first six weeks of use by educators: of the 38.5 million submissions processed, 3.5 percent contained more than 80 percent AI-written text, and just under one-tenth contained at least 20 percent AI-written text. In response to feedback from educators using the feature since its early-April launch, Turnitin has made several adjustments, including the asterisk on low scores, a higher minimum word count, and changes to how opening and closing sentences are aggregated. Educators' main concern remains false positives, and Turnitin has published a guide on its website for handling them.

References

  1. Chechitelli, A. (2023, June 1). Educators' Use of Turnitin's AI Writing Detection Feature. Turnitin. https://www.turnitin.com/blog/educators-use-of-turnitins-ai-writing-detection-feature
