> I'm not seeing what it's added here besides shuffling up the user interface in a way that you presently and subjectively prefer?
This. But in the same sense, the past 50 years merely changed the interface from dusty textbooks in libraries to Google Search, the past 100 years before that gave us dusty textbooks instead of writing letters to the Royal Society, and that in turn replaced asking the local whisperer or hoping you'd find answers at Sunday mass.
Do not underestimate the power of getting an answer to your problem described, visualized, and perhaps complete with an interactive demo to explore further, in the time it would previously take you just to formulate the search query that finally surfaces relevant information.
EDIT:
And that's on top of all the arbitrary data transformations prior tools couldn't do. E.g. I'm increasingly often using GPT and Claude models to turn photos of (possibly handwritten) notes or posters into iCal (.ics) files I can immediately import into our family's shared calendar.
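(If you'd rather script it than use the chat UI, it's only a few lines. This is a minimal sketch against the OpenAI Python SDK - the model name, prompt, and file names are just placeholders for illustration, and in practice I just do this in the ChatGPT/Claude app:)

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Read the poster/notes photo and base64-encode it for the API.
with open("poster.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": (
                "Read the event details from this photo and output a single "
                "iCalendar (.ics) file with one VEVENT per event. "
                "Output only the raw .ics text, no commentary."
            )},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

# Save the reply as an .ics file ready to import into the calendar.
with open("events.ics", "w") as f:
    f.write(response.choices[0].message.content)
```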
Another frequent use case: data normalization. Paste in a whole dump of inconsistently structured data that multiple people collected - say, addresses of local businesses that helped a local NGO and are now supposed to get a thank-you card for Christmas. You get 200 rows of addresses in a single column, with spelling mistakes, repetitions, junk at the end, arbitrary capitalization, address segments in the wrong order, and so on; you need to split it into 5+ columns (name line 1, name line 2, street address, zip code, city, etc.) and have it all normalized.
The fastest and most robust way to do that as a one-off job today is to paste the whole thing into GPT-4o or Claude 3.5 Sonnet, tell it what the output should look like (give one or two examples, mention some mistakes you saw - see the sketch below), then send the message and wait 30 seconds for the job to be done for you.
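(The instruction is nothing fancy - something of this general shape, where the columns and the example row are made up for illustration:)

```
Normalize the address list below into tab-separated columns:
name line 1, name line 2, street address, zip code, city.
Leave "name line 2" empty if there isn't one. Fix obvious typos and
capitalization, drop duplicates and any trailing junk.

Example:
"bakery kowalski, main st. 12 80-001 Springfield (delivered cakes)"
-> Bakery Kowalski | (empty) | Main St. 12 | 80-001 | Springfield

<paste the 200 raw rows here>
```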
(Yes, it may make mistakes - it hasn't for me in recent memory, but it can. To guard against that, I quickly add an extra verification column for each column of the LLM's output, do a simple case-insensitive substring match against the original, and eyeball any row that flags an error. And guess what, those formulas don't take much time either, since LLMs are good at writing them for you, too!)
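(The same check outside a spreadsheet is a few lines of Python - a sketch, assuming the raw dump and the LLM output live in two CSV files with the hypothetical names below:)

```python
import csv

# original.csv: one messy address per row (single column)
# normalized.csv: the LLM's output, split into columns
with open("original.csv", newline="") as f:
    original = [row[0] for row in csv.reader(f)]

with open("normalized.csv", newline="") as f:
    normalized = list(csv.reader(f))

# Flag any cell that doesn't appear (case-insensitively) in the
# corresponding original row - those rows are the ones worth eyeballing.
for i, (raw, cells) in enumerate(zip(original, normalized), start=1):
    suspect = [c for c in cells if c and c.lower() not in raw.lower()]
    if suspect:
        print(f"row {i}: check {suspect!r} against {raw!r}")
```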