Low-latency HLS creates partial segments by bucketing ~200ms of frames, instead of the 6s segments used in standard HLS. In WebRTC, by contrast, the endpoint sends each frame as soon as it is ready.
The apples-to-apples comparison here is 0ms of send-side buffering (WebRTC) vs ~200ms (low-latency HLS) vs 6s (standard HLS). This is independent of the endpoint's latency to the CDN or source.
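To make that concrete, here's a minimal back-of-the-envelope sketch (the function name and frame timing are my own illustration, not from any spec): it computes how long each frame sits in the send-side buffer before its bucket ships, with a bucket of 0 modeling WebRTC's per-frame send.

```python
def frame_send_delays_ms(frame_interval_ms: float, bucket_ms: float, n_frames: int):
    """Delay each frame waits in the send-side buffer before its bucket closes.
    bucket_ms == 0 models WebRTC, which sends each frame immediately."""
    delays = []
    for i in range(n_frames):
        t = i * frame_interval_ms                          # frame capture time
        if bucket_ms == 0:
            delays.append(0.0)                             # sent right away
        else:
            bucket_end = (t // bucket_ms + 1) * bucket_ms  # when the bucket ships
            delays.append(bucket_end - t)
    return delays

# One second of 30fps video under each scheme:
interval = 1000 / 30
print(max(frame_send_delays_ms(interval, 0, 30)))     # WebRTC: 0.0 ms
print(max(frame_send_delays_ms(interval, 200, 30)))   # LL-HLS: up to 200.0 ms
print(max(frame_send_delays_ms(interval, 6000, 30)))  # standard HLS: up to 6000.0 ms
```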
Another distinction is playback wait time, i.e., how quickly a newly joined endpoint can start rendering video.
I’m assuming the full reference picture (typically an I-frame, or a golden frame, depending on the codec) in low-latency HLS is only available at the start of each 6s segment and not in partial segments. So upon joining a live stream, the receiving endpoint would have to wait up to 6s before rendering.
Similarly, in WebRTC it’s up to the system to generate a reference frame at regular intervals, as low as every second. Or it can be done reactively: a receiving endpoint can ask the sender for a new reference picture via a Full Intra Request (FIR), in which case the wait can be as short as ~1.5x the round-trip time (modern encoders can generate a new I-frame almost instantaneously upon request). There’s a CPU penalty for this, though, so a sender receiving too many FIRs will typically throttle its responses to about one per second.
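A minimal sketch of that sender-side throttling behavior (the class name and the 1s default are my assumptions for illustration, not from the FIR spec): the sender honors a Full Intra Request at most once per interval and drops the rest.

```python
class FirThrottle:
    """Hypothetical sender-side rate limiter for Full Intra Requests."""

    def __init__(self, min_interval_s: float = 1.0):
        self.min_interval_s = min_interval_s
        self.last_keyframe_at = float("-inf")

    def on_fir(self, now_s: float) -> bool:
        """Return True if the encoder should emit a keyframe for this FIR."""
        if now_s - self.last_keyframe_at >= self.min_interval_s:
            self.last_keyframe_at = now_s
            return True
        return False  # too soon since the last keyframe; ignore the request

throttle = FirThrottle()
print(throttle.on_fir(0.0))  # True  -> encode an I-frame
print(throttle.on_fir(0.3))  # False -> throttled
print(throttle.on_fir(1.2))  # True  -> interval has elapsed
```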
So the apples-to-apples comparison for wait time is up to ~1s for WebRTC vs up to 6s for HLS.
You don't necessarily need to wait for a full reference picture to start playback. Modern codecs all support "intra refresh", which lets the decoder reconstruct a reference frame incrementally from a set of consecutive frames. With that, you can set the periodic intra-refresh interval much lower than a 6s keyframe interval.
An HLS segment can carry any number of GOPs. A GOP length of FPS/2 or FPS/4 frames gets you an I-frame every 0.5s or 0.25s, allowing each GOP to be decoded independently. MPEG-DASH can do the same, IIRC. So there doesn't need to be a segment-length delay before playback, and typically there isn't.
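The GOP arithmetic above can be sketched as follows (the helper name is mine): the worst-case wait for a decodable I-frame is one GOP duration, independent of the segment length the GOPs are packed into.

```python
def keyframe_wait_s(fps: float, gop_frames: float) -> float:
    """Worst-case wait for an I-frame: one GOP duration, in seconds."""
    return gop_frames / fps

fps = 30
print(keyframe_wait_s(fps, fps / 2))  # GOP = FPS/2 frames -> 0.5 s
print(keyframe_wait_s(fps, fps / 4))  # GOP = FPS/4 frames -> 0.25 s
print(keyframe_wait_s(fps, fps * 6))  # one GOP per 6 s segment -> 6.0 s
```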