Well, my recruiter callback ratio is likely 1 out of 5, despite having a very VERY niche profile: a PhD focused on NLP for creative text generation, especially in video games, and a prior career as a game developer.
Needless to say, I've only focused on roles that fit that narrow profile. One of the recruiters that contacted me didn't even know I worked in games, despite it making up the bulk of my work experience (including as a lead developer).
Considering how closely I match this narrow profile, and how few other people likely do, it's strange that my callback ratio has been so low.
> One of the recruiters that contacted me didn't even know I worked in games
I get that all the time with my setup: "You look like a good fit and have lots of experience with XYZ tech." Nowhere does my resume even mention it; sometimes I have to look the technology up to see what they are talking about. One recruiter even went on and on about my current job, despite that entry listing only a start date and nothing about what I actually do here.
It is blindingly obvious they did not read my resume. They are skimming for keywords and hoping for the best.
> Since answers will take longer to generate, ChatGPT will display a progress bar and send an in-app notification if you switch away to another conversation.
I wonder if they are optimizing for batching, which might introduce even more delays.
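If so, the extra delay is easy to picture: a batching server holds each request until a batch fills up or a timeout expires, so every request pays queueing time on top of model time. A minimal sketch, assuming a generic serving loop (the names, batch size, and timeout here are invented for illustration, not anything OpenAI has documented):

```python
import asyncio

MAX_BATCH = 8      # assumed batch size
MAX_WAIT_S = 0.5   # assumed max time a request waits for batchmates

async def batch_worker(queue, run_model):
    # Collect requests until the batch is full or the deadline passes,
    # then run the model once over the whole batch.
    while True:
        batch = [await queue.get()]            # block for the first request
        loop = asyncio.get_running_loop()
        deadline = loop.time() + MAX_WAIT_S
        while len(batch) < MAX_BATCH:
            remaining = deadline - loop.time()
            if remaining <= 0:
                break
            try:
                batch.append(await asyncio.wait_for(queue.get(), remaining))
            except asyncio.TimeoutError:
                break                          # stop waiting for more requests
        prompts = [prompt for prompt, _ in batch]
        for (_, fut), result in zip(batch, run_model(prompts)):
            fut.set_result(result)             # wake each waiting caller

async def submit(queue, prompt):
    # Total latency = time spent waiting for batchmates + batched model time.
    fut = asyncio.get_running_loop().create_future()
    await queue.put((prompt, fut))
    return await fut
```

Under this kind of setup, light traffic is actually the worst case: a lone request can sit the full timeout waiting for batchmates that never arrive.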
To you and GP: I think the original title was misleading, and I tried (and hope I succeeded) not to editorialise. Editorialising means injecting personal opinion instead of sticking to the facts, which is the opposite of what I did: I removed the misleading ambiguity and stated the essence of the story as clearly and briefly as I could.
The original title leaves open the possibility that this decision overturns no precedent, which is misleading. It also strongly implies that Meta's recent revelation relates only to the U.S., which is similarly untrue and misleading.
For a U.S. audience that might seem to make sense, but HN has international readers.
The decision on the title was already made - in the source. Unless the source violated HN guidelines, its title should be kept as is and not "improved".
When the videos are completely unrelated to your search, but just happen to be new/popular videos, then yes, it's useless. Surfacing relevant videos would make sense.
For example, searching for climbing comp videos and getting a completely unrelated video about some new tech gadget released in the last couple of days by a random popular content creator makes no sense.
Clearly, it works for Google (content creators intentionally make click-baity thumbnails and titles because Google encourages it), but it's user-hostile: it's designed to suck you into a vortex, which is not what the user intended in the first place.
That said, all content platforms do this right now, so I don't mean to single out Google. It's frustrating nonetheless.
Interesting perspective, considering a paper ByteDance just released yesterday [1] has much worse video quality. If your comparison is to real videos, then for sure the quality isn't great. If instead you compare to other released research, then this model is one of the best released thus far.
> The moment you post to reddit, they can do anything they want with the content. It is now theirs.
This is patently false. The content is not theirs, users grant Reddit a license to their content. See the TOS [1] you spoke of:
> You retain any ownership rights you have in Your Content, but you grant Reddit the following license to use that Content:
This isn't just semantics, either. If you can contact the users, you could request a license for the content yourself. So really the only bastion Reddit has is technical measures against scraping, since US legal precedent allows scraping publicly accessible data. That is, nothing in the TOS prevents other parties from displaying the data in other formats (as long as they don't use the API). The thing is, all the apps did use the API because they were not adversarial toward Reddit, unlike webapps such as libreddit that circumvent the API altogether.
I actually did invest in projects rather than papers and I definitely feel I paid the price. It's the main reason I opted for a postdoc rather than going straight on the academic market: the quality of my research was high, but the number of publications I have is too low to be competitive. At least that's what my PhD advisor said and I honestly agree, especially after speaking with people who just landed tenure-track jobs and more senior professors.
What I have to say may come across as harsh, but I only intend it to be helpful.
If you invested in projects, and the investments did not pay off, I see three likely explanations.
First, maybe your investment function is miscalibrated.
Second, maybe your investment function is well calibrated but your time horizon is too long.
Third, maybe you are unlucky.
I have no insight into what happened in your case; I don't even know what field you are in.
But case 1 suggests you may be unsuccessful in academia.
Case 2 suggests possible success after adjusting to more immediate rewards.
Case 3 suggests possible success after a difficult recovery process and further investment.
None of these are ideal paths, but cases 2 and 3 suggest possible recovery strategies if you are deeply committed to academia. The optimality of such an approach is highly subjective.
You likely should not feel that academia is the only available path; if you do, that is a red flag that something is amiss.
Both models generate an answer over multiple turns, where each turn has access to the outputs of previous turns. Both refer to the chain of outputs as a trace.
Since OpenAI did not specify what exactly is in their reasoning trace, it's not clear what difference, if any, there is between the approaches. They could be vastly different, or they could be slight variations of each other; without details from OpenAI, it's not currently possible to tell.
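For concreteness, here is a minimal sketch of the general shape described above, assuming a simple loop where each turn is conditioned on the prompt plus all outputs so far. The names (`model`, `generate_with_trace`, `num_turns`) are hypothetical, not OpenAI's or the paper's actual interface:

```python
def generate_with_trace(model, prompt, num_turns=4):
    # `model` is a stand-in for one generation call (text in, text out).
    trace = []  # chain of intermediate outputs, i.e. the "trace"
    for _ in range(num_turns):
        # Each turn sees the original prompt plus everything produced so far.
        context = prompt + "\n" + "\n".join(trace)
        trace.append(model(context))
    # The final turn's output is taken as the answer.
    return trace[-1], trace
```

The open question is only what goes into each turn's context and output, which is exactly the detail OpenAI hasn't published.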
Sorry, but that does not seem to be the case. A friend of mine who runs a long-context benchmark on understanding novels [1] just ran an eval, and o1 seemed to improve by 2.9% over GPT-4o (the result isn't on the website yet). It's great that there is an improvement, but it isn't drastic by any stretch. Additionally, since we cannot see the raw reasoning the answers are based on, it's hard to attribute this increase to their complicated approach rather than just cleaner, higher-quality data.
EDIT: Note this was run over a dataset of short stories rather than novels, since the API errors out on very long contexts like full novels.