That's an interesting approach. The problem is that the function applied after the convolution is binary, hence not differentiable. You could replace it with a soft variation, but my (wild) guess is that it would still be hard to get to converge:
* If the transitions are too sharp, it will behave like the stepwise, non-differentiable original
* If the transitions are too soft, the optimization will converge to some middle-state compromise that will not behave as desired when binarized
But that's just speculation. It's also interesting that in the very relevant Kaggle competition [0], the top solutions don't mention differentiable approaches (as far as I've seen; I admit I haven't looked too deeply).
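To make the idea concrete, here is a minimal numpy sketch of the kind of soft variation I mean. The binary rule ("exactly 3 neighbors for birth, 2 or 3 for survival") is replaced with products of sigmoids, with a sharpness parameter k. All the names and the particular relaxation are my own illustration, not taken from any of the Kaggle solutions:

```python
import numpy as np

def neighbor_count(state):
    # Sum of the 8 neighbors, with toroidal (wrap-around) boundaries.
    return sum(np.roll(state, (dy, dx), axis=(0, 1))
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))

def hard_step(state):
    # Standard Game of Life update: birth on 3 neighbors, survival on 2 or 3.
    n = neighbor_count(state)
    return ((n == 3) | ((state == 1) & (n == 2))).astype(float)

def soft_step(state, k=10.0):
    # Differentiable relaxation: replace the exact neighbor-count tests
    # with sigmoid "windows". k controls transition sharpness;
    # k -> infinity recovers the hard rule, small k smears it out.
    n = neighbor_count(state)
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    window = lambda lo, hi: sig(k * (n - lo)) * sig(k * (hi - n))
    birth = window(2.5, 3.5)    # "exactly 3 neighbors", softened
    survive = window(1.5, 3.5)  # "2 or 3 neighbors", softened
    return state * survive + (1.0 - state) * birth
```

The two failure modes above show up directly in k: at large k the sigmoids saturate and gradients vanish almost everywhere (back to the original problem), while at small k the soft state drifts away from anything that rounds to the intended binary pattern.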
[0] https://www.kaggle.com/c/conways-reverse-game-of-life-2020/d...