I tried that, and ran into roadblocks: the app under test is old Visual Basic (which is half compiled and half interpreted), and it uses a third-party library with quite sophisticated anti-decompilation features.
Yes, but post-training cannot possibly account for every use case. Sane defaults are fine, and you can't really do much about sampling parameters in chatbots and coding harnesses anyway. When making an API call, you have to actively change the parameter in your payload. I don't believe there's any real risk.
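A minimal sketch of the point about payloads, assuming a generic OpenAI-style chat API (the model name and field names here are hypothetical): sampling parameters simply aren't present in the request unless the caller actively adds them, so the provider's default applies.

```python
import json

def build_payload(prompt, temperature=None):
    """Build a chat-completion request body (hypothetical schema)."""
    payload = {
        "model": "some-model",
        "messages": [{"role": "user", "content": prompt}],
    }
    # A sampling parameter is only serialized when explicitly set;
    # otherwise the key is absent and the server default is used.
    if temperature is not None:
        payload["temperature"] = temperature
    return json.dumps(payload)

default_body = json.loads(build_payload("hello"))
tweaked_body = json.loads(build_payload("hello", temperature=1.8))
print("temperature" in default_body)  # absent unless opted in
print(tweaked_body["temperature"])
```

The design choice under discussion is exactly this opt-in shape: an accidental override requires writing the key into the payload yourself.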
The risk is that people tweak it, potentially by accident, and then think the model is bad instead of understanding that they are using it wrong. Exposing the control thus creates potential reputational damage.
An SSH server doesn't make sense for an iPhone. How would that even work? It wouldn't be able to do much, and it would be a worse experience than something properly designed for the user rather than an attempt to force a 50-year-old computing model onto a phone.
You say this matter-of-factly, and yet I've seen countless people talk about using Termux more than a desktop shell.
Maybe the iPhone is different, but on most phones you can connect a keyboard, which makes the shell pretty usable. Not my cup of tea, but I have tried it. I'm still holding out hope that a good Linux phone will exist one day.
In the grand scheme of things, very few of the billions of Android users run Termux. Additionally, Termux's design is not aligned with Android's app model, which has caused many headaches for its developers. Forcing a terminal to exist on a phone is possible, but it is forced; it is not the natural product you would arrive at if you were designing for the best user experience.
Are you referring to any security features in particular? There's a new zero-click exploit every 6 months for iOS, and NSO Group is showing no signs of slowing.
Then why does the creator keep complaining that the maintainers he onboards keep getting poached by AI companies? It seems more like it is scaling too well.
>They are also not consuming new AI music to be able to develop influences and synthesize new ideas.
If not, they most definitely are listening to other music that influences them. If you have proof that such a producer listens to zero music, feel free to share it.
They're describing the "music" that's churned out almost entirely hands-off to siphon royalties. Even the creator isn't listening to 100% of what they're uploading; it's spam that can be produced in massive quantities and can overwhelm a platform if left unchecked (as the article describes, AI music accounts for 1-3% of actual listens by users but 44% of uploads).
Actual artists who need years to create a few hours of handcrafted content don't stand a chance in an environment where hundreds of hours of slop can be generated in less than a day for a few hundred bucks. Platforms like Deezer recognize they need to address that imbalance somehow, or a vicious cycle will eventually drive away their high-quality contributors once it becomes impossible to compete.
If history is any lesson (the move from lower-level to higher-level programming languages), the exact opposite will happen: there will be so much more stuff out there that any gain in efficiency will be dwarfed in the grand scheme of things.