There are people, I call them techno-utopians, who would say: feed all the data to the algorithm, give it an objective, and it will do the right thing. The reason that falls down is that algorithms often don't understand long-term effects, nor how people might respond, nor your intent for the product. I think it's really important for product managers to play that role. That is our job. When you are working on algorithm-heavy products, your job is figuring out what the algorithm should be responsible for, what people are responsible for, and the framework for making those decisions.
Humans define boundaries, algorithms execute
Execution → Technical Tradeoffs
As a product manager, and especially one working on systems heavy on machine learning or operations research and optimization, you have to think about where you want a person to make a decision and where you want the machine to be off to the races. Think about that as a product design problem, because there actually is a human-computer interface you have to design there.
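One common way to draw that person/machine boundary is a confidence threshold: the machine acts on its own only when it is confident, and routes everything else to a human. A minimal sketch, assuming a hypothetical `Router` with an illustrative threshold and queue (none of these names come from any real system described here):

```python
from dataclasses import dataclass, field

# Assumption for illustration: below this score, a person decides.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class Router:
    """Routes each item either to automatic action or to human review."""
    review_queue: list = field(default_factory=list)

    def route(self, item_id: str, model_score: float) -> str:
        # Machine decides only when it is confident enough.
        if model_score >= CONFIDENCE_THRESHOLD:
            return "auto-approved"
        # Otherwise the decision is handed to a person.
        self.review_queue.append(item_id)
        return "sent-to-human"

router = Router()
print(router.route("story-1", 0.97))  # confident: machine decides
print(router.route("story-2", 0.55))  # uncertain: human decides
```

The design question the quote raises lives in that single `if`: choosing the threshold, and deciding what the human reviewer sees, is product work, not just modeling work.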
We always start with expert editorial judgment to curate the most important and interesting stories. But on top of that, we're training algorithms on specific data sets, like editorial importance scores that actually come from our journalists.
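One way to layer an algorithm on top of editorial judgment, as described above, is to blend a journalist-assigned importance score with a model's prediction when ranking stories. This is a hypothetical sketch: the weight, field names, and the assumption that both scores are normalized to [0, 1] are illustrative, not the actual system:

```python
def blended_rank(stories, editorial_weight=0.6):
    """Rank stories by a weighted mix of journalist-assigned importance
    and a model score; both are assumed normalized to [0, 1]."""
    def score(story):
        return (editorial_weight * story["editorial_importance"]
                + (1 - editorial_weight) * story["model_score"])
    return sorted(stories, key=score, reverse=True)

stories = [
    {"id": "a", "editorial_importance": 0.9, "model_score": 0.2},
    {"id": "b", "editorial_importance": 0.3, "model_score": 0.95},
]
ranked = blended_rank(stories)
print([s["id"] for s in ranked])  # editorial judgment outweighs the model here
```

With `editorial_weight=0.6`, story "a" (0.62) outranks "b" (0.56): the weight encodes the decision that expert judgment leads and the algorithm assists.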
AI is a management technology. What it does is manage intelligence, including other intelligences. If you've ever watched the Monty Python "can't guard him if he's a guard" skit, it's like that. We're talking past each other.