Researchers say they've discovered a new method of 'scaling up' AI, but there's reason to be skeptical | TechCrunch
Have researchers discovered a new AI “scaling law”? That’s what some buzz on social media suggests — but experts are skeptical.
AI scaling laws, an admittedly informal concept, describe how the performance of AI models improves as the size of the datasets and the computing resources used to train them increase. Until roughly a year ago, scaling up “pre-training” — training ever-larger models on ever-larger datasets — was by far the dominant law, at least in the sense that most frontier AI labs embrac...