Caveat: I have no experience in ML, and a few parts of that article went over my head. However, this seems sensible for application-specific ML models, as societal changes move the goalposts. My follow-up question is: if a much more general ML model becomes so fundamental to daily life that it steers a human population's attitudes on a large share of the questions they face, would the model degrade, or could the population's behaviour stay within some performant bound for that model? For example, could an airport delay prediction remain accurate for a long period of time if that same huge model controls many of the outputs that become the inputs to that model? Obviously this doesn't mean the absolute results stay static, just that the inputs to the model stay within a performant domain of values for the underlying model.
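To make the feedback loop concrete, here's a toy sketch (my own construction, not from the article): a fixed "model" whose prediction partly drives its own next input. The `influence` knob is how much of the next input the model controls, versus external noise. With low influence the inputs settle into a bounded, in-distribution regime; with high influence the loop runs away and the inputs drift far outside anything the model was fit on.

```python
import random

def predict(x):
    # Toy frozen "model": a linear predictor fit on historical data.
    return 1.5 * x + 1.0

def simulate(influence, steps=1000, seed=0):
    """Feed the model's own prediction back as part of its next input.

    influence: fraction of the next input driven by the model's output;
    the remainder comes from external noise. Returns the final input value.
    """
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        y = predict(x)
        # Next input = model-driven component + external (noisy) component.
        x = influence * y + (1 - influence) * rng.gauss(0.0, 1.0)
    return x

# Low influence: the effective feedback gain (0.4 * 1.5 = 0.6) is below 1,
# so inputs hover around a fixed point -- the "performant bound" case.
low = simulate(0.4)

# High influence: gain 0.9 * 1.5 = 1.35 exceeds 1, so inputs blow up and
# leave the regime the model was trained on.
high = simulate(0.9)
```

The rough intuition: the loop stays in a performant domain only while the model's grip on its own inputs is weak enough (gain below 1) that external reality keeps pulling values back; past that threshold the system is less a predictor of external data and more a controller of its inputs, which matches the distinction in your edit.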
Edit: I guess this would be less a model predicting some external dataset and more a system that controls its own inputs. So perhaps this question isn't really relevant to the utility of most (all?) modern ML, which is predicting changes in data that is fully "external".