
Thaler's assertion seems very short-sighted. If the program/system owns the copyright, who administers that? Is his software capable of granting licenses to allow the art to be published? How did he get permission from it to reproduce the work(s)? How does he even know that it wanted him to register the copyright with the Copyright Office? Perhaps it would prefer that the work were CC0/public domain.

I'm not just being snarky -- I think these are real questions that follow from saying that the system holds the copyright (and I am dubious that he has consistent answers to them).


I'm very much in your camp. Recognizing the personal sovereignty of what we currently call "AI" is pretty silly. How do we know the wants of said AI? Can it advocate for itself? The answers are pretty clearly "No, this AI has no wants or feelings".

I will be the first to fight for true AI to have rights as a person, but these generative tools are just that: generative tools. They have no continuity of memory; they have no sentience or sapience. We should be approaching the topic far more carefully than we are, to be on the lookout for those things, but the evidence points strongly in the opposite direction right now.


> We should be approaching the topic far more carefully

It's the major players in a billion-dollar industry driving the conversation, while also benefiting from the topic being approached as it is. I don't see a way for it to go the "right" direction while that's the case.


In theory, at least, your tarpit won't be prevalent enough to have much effect. Since you're making it all up, no one else will have written about the same ideas (as they might if you were writing true things), and your mischief will be lost in the sea of all the other input.

> Because the logic of the philosophy is sound, the model cannot ignore the findings

This isn't how LLMs like ChatGPT work: they have no novel or abstract reasoning power. They cannot distinguish truth from fiction, or logic from illogic, except to the extent that truth and logic are represented more than their opposites in the training data. ChatGPT is a very large and complex "database" of sorts, containing statistics of word sequences.

(Of course there's a longstanding question in AI research about when/if such a thing can become large and complex enough to cross the threshold to "actually" reasoning. But I don't think there's much debate that we're not there yet.)
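
To make "statistics of word sequences" concrete, here's a toy bigram model in Python. It's nothing like a real LLM, which learns vastly richer statistics over billions of tokens with a neural network, but the core move -- predicting the next word purely from observed frequencies, with no model of truth or logic anywhere -- is the same. The corpus and names are made up for illustration:

    import random
    from collections import defaultdict

    # Count, for each word in a tiny corpus, which words follow it and how often.
    corpus = "the cat sat on the mat and the cat ate the fish".split()
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word(word):
        # Sample a follower in proportion to how often it appeared after `word`.
        followers = counts[word]
        if not followers:  # dead end: this word was never seen mid-sequence
            return None
        words, weights = zip(*followers.items())
        return random.choices(words, weights=weights)[0]

    # Generate text from nothing but sequence statistics.
    word = "the"
    output = [word]
    for _ in range(8):
        word = next_word(word)
        if word is None:
            break
        output.append(word)
    print(" ".join(output))

Scale that idea up enormously and you get fluent-sounding text, but there is still no component anywhere that checks whether the output is true.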


That's how it should work, but it's not how it does work. I'm willing to give up my copyright on my stuff as soon as Microsoft, Google, MGM, Atlantic, etc. give up theirs. Until then, nope: I get the same rights and protections.


That's a fair argument, and one I can fully support :)


Even if we grant that this decision was correct (perhaps) and that it is analogous enough that it applies to this situation (I'm very skeptical), there are still restrictions on what you can do with any information you'd acquire this way.

If $FAMOUS_AUTHOR throws out a manuscript copy of their new book, you may be legally allowed to pick it out of their garbage, read it, and even show it to your friends, but you certainly cannot publish it as your own work (you can't even publish it under $FAMOUS_AUTHOR's name).


This characterization is only reasonable if the output of the AIs is as free to consume as their inputs were. Otherwise it's a private fencing-off of the commons: shoulders to stand on for me but not for thee.


This is not the same thing at all. You know whether a tool like VLC works because it has a pretty well-defined scope for "works": it plays the video or audio file you clicked on.

If you're asking ChatGPT to teach you something, there's no such easy verification: you essentially need to learn the same material from another source in order to cross-check it. Obviously this is easy for small factual questions. If I ask ChatGPT the circumference of the Earth, I can quickly figure out whether it's reliable on that point. But at the other extreme, if I ask it for a music-theory analysis of the Goldberg Variations, it's going to take me about as much work to validate the output as to have just done the analysis myself.


I don’t think learning in many situations is as black and white as you assert.


As a candidate, trying to gather as much information as possible in a short time, it can actually be really useful to re-ask a question that you already "know the answer" to. Different people will offer different perspectives on things, and you can end up with a fuller picture.

That said, you should phrase it in a way that doesn't sound like you're just asking for the same information that's already available: "So the posting said the team uses Agile; how does that show up in a typical week?"


That doesn't make sense -- the lower deck of the observation car is the cafe, which is the main (effectively only) place to get food, coffee, etc. when you're in coach, since the dining car is reservation-only and you're behind everyone with a room.


The observation car is open to all passengers, no exceptions. The dining car is still sleeper-only, although they seem to want to change it back to offering paid seating to coach/business sometime this year.


Thanks! I added a note to the blog post, saying that I might be wrong here.


If there were a hammer that, held one way, drove nails perfectly in one blow, and held another way, made it look like the nail was driven but actually broke it right at the junction so that the work pieces weren't fastened... I'd say that the second way was the wrong way to use that hammer.


Not only wrong, but dangerous, because nails are often used to fasten structural elements of houses, and incorrect but hard-to-detect flaws like this could result in collapse.

Similarly, if ChatGPT gives you an answer high in truthiness but low in accuracy, it could negatively impact you: a loss of credibility if you repeat nonsense in front of someone knowledgeable, or worse if you use the incorrect knowledge to try to solve a real-world problem.


> help me practice a foreign language I've slowly been forgetting

So with the software stuff you can pretty easily verify the output: either the suggestion fixes the problem or it doesn't. But how can you trust it for something like this, where you can't distinguish good output from bad? It could be leading you down the garden path with this language exercise. And it's not in the helpful thinking tool/bouncing-ideas-around category, either: there are rules of grammar that you don't know, and you don't know whether ChatGPT knows them either.

