Open source maintainers are drowning in junk bug reports written by AI
Software vulnerability submissions generated by AI models have ushered in a "new era of slop security reports for open source" – and the devs maintaining these projects wish bug hunters would rely less on results produced by machine learning assistants.
Seth Larson, security developer-in-residence at the Python Software Foundation, raised the issue in a blog post last week, urging those reporting bugs not to use AI systems for bug hunting.
"Recently I've noticed an uptick in extremely low-qualit...
Read more at theregister.com