I kind of doubt they trained ChatGPT on petabytes of application logs and web server logs. Is keeping all of this crap even useful for more than a short time at this scale?
Actual good information will always be useful; most of this "big data" seems to be the equivalent of recording background static.