Yeah maybe. But I think the thing I like is that it takes me a much shorter amount of time to create solutions for my users and myself. Then I can worry about “tech adoption” once I’ve achieved a relevant solution for my users.
If performance is an issue then sure, let’s look at options. But I don’t think it’s appropriate to expect that sort of insight into an optimised solution from LLMs - but maybe that’s just because I’ve used them a lot.
They’re just a function of their training data at the end of the day. If you want to use new technology you might have to generate your own training data as it were.
I haven't seen anyone with zero computer skills build apps, but I've definitely started building and deploying apps outside my day job that I've been able to put together quickly with the aid of Claude Artifacts or Copilot. It's really exciting!
Yeah it's interesting to see how it's getting on! I wrote my PhD simulation code in it from the ground up, as it had nice fundamental abstractions for parallelizable code. Of course now it's just Python and Scala/Java for me, but Julia was great for my purpose.
I share the concerns of this author. I wrote a free tool that uses simple heuristics to predict whether text was generated by AI or not - https://isthistextai.com/
To be honest, the heuristics here are kind of bad... The first two paragraphs of the Wikipedia page for the Viterbi algorithm come back as AI-generated, while several samples from TinyStories, a GPT-4-generated dataset of children's stories for training small LMs, get flagged as certainly human. Maybe the weighting could use some work, or you could just switch to a proper classifier.
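For what it's worth, here's a minimal sketch of the kind of surface-level heuristics these detectors tend to rely on - the actual site's heuristics aren't public, so `ai_likelihood_score` and its two signals (sentence-length "burstiness" and type/token vocabulary diversity) are purely illustrative, and the weighting is arbitrary. It also shows exactly why such scores misfire on encyclopedic prose:

```python
import re
import statistics

def ai_likelihood_score(text: str) -> float:
    """Toy heuristic score in [0, 1]; higher = more 'AI-like'.

    Combines two crude signals often attributed to AI text:
    - low burstiness (sentence lengths are very uniform)
    - low vocabulary diversity (low type/token ratio)
    These are illustrative only, not the heuristics the linked tool uses.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    if len(sentences) < 2 or len(words) < 10:
        return 0.5  # not enough signal either way

    lengths = [len(s.split()) for s in sentences]
    # Coefficient of variation of sentence length ~ "burstiness".
    burstiness = statistics.stdev(lengths) / statistics.mean(lengths)
    # Type/token ratio ~ vocabulary diversity.
    diversity = len(set(words)) / len(words)

    # Uniform sentences and repetitive vocabulary both push the score up.
    score = 0.5 * max(0.0, 1.0 - burstiness) + 0.5 * (1.0 - diversity)
    return min(1.0, max(0.0, score))
```

The failure mode above falls straight out of this: dense technical writing naturally has uniform sentence lengths and repeated terminology, so it scores "AI-like", while simple generated children's stories can score "human". A trained classifier over learned features avoids baking in those assumptions.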
This is really cool. Whilst it's been fun mucking around with Next.js etc (and arguably that's for a different purpose), for an out-of-the-box website Jekyll has proven itself time and again. Looking forward to trying this out.
A lot of the time “best practice” can also mean “we did it this way last time and it was ok”. I don’t think anyone is saying don’t find improvements on “what is currently defined as best practice”, and if they are, then that’s their problem.