Failover to Human Intelligence - Max Chernyak
There’s no denying that AI is getting very capable, but one thing keeps bothering me: what happens if something goes wrong?
Right now, self-driving cars still require human monitoring and intervention (outside of specially-designated areas). Isn’t the same true of any sufficiently complex system, where you might need to step in quickly if AI fails to resolve an issue? Worth considering, right?
You might say — so what? AI-written code is arguably better, often with more comments and docs, hu...
Read more at max.engineer