For users of the GNOME desktop environment, extensions are one of the platform's strongest assets. They let users customize their desktops however they like and are widely used to streamline workflows. Managing and securing such a large ecosystem, however, is no small task for the teams working behind the scenes: every extension submitted by a developer goes through a rigorous review process meant to catch malware and faulty code.
The AI Slop Era Ends for GNOME Extensions
Recently, this review process has become much harder for the review team because of low-quality, AI-generated code, commonly referred to as "AI slop." Javad Rahmatzadeh, who reviews GNOME extensions, says that on some days he spends more than six hours checking over 15,000 lines of code. Reviewers have noticed a rise in meaningless code patterns, such as unnecessary try-catch blocks, and when the developers behind them were asked about it, they confirmed the code had been generated by AI.

Faced with this volume and the drop in quality, the GNOME team has decided to update its extension review rules. Under the new rules, extensions that clearly show signs of being AI-generated will be rejected. The telltale signs include piles of unnecessary code, inconsistent coding styles, calls to non-existent APIs, and leftover AI prompts forgotten inside the source, as in the hypothetical sketch below.
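To make those criteria more concrete, here is a minimal, hypothetical TypeScript sketch of the kind of patterns reviewers describe. GNOME Shell extensions are actually written in GJS (JavaScript), and none of the names below come from a real extension or from the GNOME Shell API; they are invented purely for illustration.

```typescript
// Hypothetical example of the kind of code reviewers describe as "AI slop".
// Nothing here is taken from a real GNOME extension; all names are invented.

// Red flag 1: a leftover prompt reply pasted straight into the source.
// "Sure! Here is the updated version of your extension with error handling added."

function getLabelText(count: number): string {
  // Red flag 2: a try-catch wrapped around code that cannot throw,
  // added "for safety" rather than for any real failure mode.
  try {
    return `Open windows: ${count}`;
  } catch (error) {
    console.error('Unexpected error while building label', error);
    return '';
  }
}

// Red flag 3: a call to an API that does not exist on the platform.
// (Commented out so this sketch still compiles; in a submitted extension
// the call would simply fail at runtime.)
// panel.setDynamicLabelColorScheme('adaptive');

console.log(getLabelText(3));
```

Any one of these patterns on its own might slip through; what reviewers describe is submissions where they pile up and the author, when asked, cannot explain the code.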
This decision does not ban the use of AI tools outright. Code-completion features and using AI as a learning aid are still permitted. The new rule targets developers who simply have AI write their extensions for them without understanding what the code actually does. Like many open-source projects, GNOME is taking steps to protect its ecosystem from low-quality content.
What do you think? Should restrictions like this be placed on the use of AI in coding and software development, or should developers be given complete freedom?

