Facebook Pushes Its Llama 4 AI Model to the Right, Wants to Present “Both Sides”
Bias in artificial intelligence systems is well established: because large language models, facial recognition, and AI image generators can only remix and regurgitate the information in the data they are trained on, researchers and academics have been warning about the problem since these technologies' inception. In a blog post about the release of Llama 4, Meta's open-weights AI model, the company clearly states that bias is a problem it's trying to address, but unlike mountains of researc...
Read more at 404media.co