This is a cool idea, like the potato-powered clock. There are so many holes here; let me just pick one. They don't seem to account for reassembly issues, which is a huge problem and vastly multiplies your problem space depending on how you implement the solution. What the fuck am I going on about, you ask?
Think of it like this: sig: abc
Traffic a[1] b[2] c[3]
where the packets are properly ordered 1, 2, 3. Simple fragmentation could be sending them out of order - I believe this paper accounts for that. What if instead you send a[1] b[2] b[2] c[3]? Windows assembles this one way (depending on the version), Linux another, BSD another. It's super fun. Then what if you send c[3] b[2] c[3] a[1] b[2]?

One could argue, "hey d*ckhead, we're going to normalize the traffic first." The problem is: what is normal? Stevens had tons of good work on this. Some systems have a 'normalization' standard that's similar to how their network gear works.

Also, I find it odd that they say 'all the patterns' must be matched for the sig to fire. Does that include an or? Are they breaking the or down into sub-detectors or something? The 10,000-signature thing is also kind of fake, as the number of signatures constantly grows, like the number of amazing Taylor Swift songs.
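To make the overlap problem concrete, here's a toy sketch (not from the paper) of how two reassembly policies can disagree on the exact same fragments. The "first"/"last" policies below are just stand-ins for the per-OS behaviors above; real stacks have messier rules.

```python
# Toy model of overlapping-segment reassembly over (offset, bytes) pieces.
# "first" = keep the byte that arrived first, "last" = later data overwrites.

def reassemble(segments, policy="first"):
    """segments: list of (offset, bytes). Returns the reassembled stream."""
    buf = {}
    for off, data in segments:
        for i, byte in enumerate(data):
            pos = off + i
            if policy == "first" and pos in buf:
                continue            # earlier byte wins
            buf[pos] = byte         # "last" policy: later byte wins
    return bytes(buf[i] for i in sorted(buf))

# sig: abc, traffic a[1] b[2] c[3], but the attacker resends offset 2
# with a different byte (0-based offsets here): a b x c
segments = [(0, b"a"), (1, b"b"), (1, b"x"), (2, b"c")]

print(reassemble(segments, policy="first"))   # b'abc' -> signature fires
print(reassemble(segments, policy="last"))    # b'axc' -> signature misses
```

If the IDS picks one policy and the victim's OS picks the other, the sensor and the endpoint literally see different payloads from the same packets.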
All in all, these authors need to go read the old BreakingPoint test standards, or Ixia, or NSS, or really anyone.
Reassembly and ordering are among the major focus points of this research. They discuss it at length, along with the performance implications. You can literally Ctrl+F "reassembly"?
This all has very real-world applications, Corelight being one example.
> One could argue, "hey d*ckhead, we're going to normalize the traffic first." The problem is: what is normal?
This is why this kind of IPS integrates well with a firewall. Two decades back, my team built a firewall, very fast for its day, that would only let assembled-and-refragmented fragments through.
There was no confused ordering past the firewall, and no scenario where the IDS/IPS and the victim defragmented differently.
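For illustration only, here's a minimal sketch of that "reassemble, then refragment" idea - not the original firewall's code, just the policy: rebuild the datagram under one canonical rule, then re-emit clean, non-overlapping fragments so the IDS and the victim can't disagree.

```python
# Sketch of a fragment normalizer: nothing is forwarded until the whole
# datagram is rebuilt under ONE canonical rule, then it is re-cut into
# clean, non-overlapping fragments.

def normalize(fragments, mtu=8):
    """fragments: list of (offset, bytes) for one complete datagram.
    Returns new non-overlapping fragments after canonical reassembly."""
    buf = {}
    for off, data in fragments:
        for i, byte in enumerate(data):
            buf.setdefault(off + i, byte)   # canonical rule: first byte wins
    stream = bytes(buf[i] for i in sorted(buf))
    # Re-fragment on clean boundaries: everything downstream can only
    # ever see this one interpretation of the payload.
    return [(off, stream[off:off + mtu]) for off in range(0, len(stream), mtu)]

# The ambiguous traffic from the parent comment (0-based offsets):
dirty = [(2, b"c"), (1, b"b"), (2, b"c"), (0, b"a"), (1, b"b")]
print(normalize(dirty))   # [(0, b'abc')] -- one unambiguous fragment
```

The cost is that you buffer and stall fragments at the firewall, but in exchange the "which reassembly policy does the victim use?" question disappears entirely.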