The objective of the training is to bootstrap a Prettier-style formatter: a string template (pattern in the code) for each node type.
So we can learn things like: arguments are separated by “, ”, but if the total number of characters across all of them is greater than some threshold X, “,\n” is more common. Stuff like that.
We had LSTMs for a while but they performed worse than a decision tree with a few key features as input.
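A minimal sketch of the idea, with hypothetical names: a per-node-type string template whose separator choice is driven by a single learned threshold on total argument length (a one-feature stand-in for the decision tree described above). The function name, threshold, and indentation are illustrative assumptions, not the actual implementation.

```python
def format_call(name: str, args: list[str], threshold: int = 80) -> str:
    """Hypothetical template for a call node: pick ", " vs ",\\n" by length."""
    # Feature: total character count of the arguments plus ", " separators.
    total = sum(len(a) for a in args) + 2 * max(len(args) - 1, 0)
    if len(name) + 2 + total <= threshold:
        # Short enough: single-line template.
        return f"{name}({', '.join(args)})"
    # Too long: break with ",\n" and indent each argument.
    body = ",\n    ".join(args)
    return f"{name}(\n    {body}\n)"

print(format_call("add", ["a", "b"]))
print(format_call("configure", ["x" * 40, "y" * 40]))
```

In practice the threshold (and which features matter, like nesting depth or argument count) is what the decision tree learns from the corpus rather than being hand-picked.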