b3morales's comments

GP was most likely thinking not of the large roundabouts that take the place of major intersections, but of the smaller traffic-calming type, where the center can be as minimal as a 3-foot-wide concrete block. They can be fitted into pretty much any intersection because their purpose is not so much to change the path as to, frankly, eat up whatever extra space is available, which forces cars to slow around them.


Yeah, I'm thinking about the 4-lane urban roads (or 2-lane one-ways).


This idea ("make a stronger license") has come up in previous discussions of Copilot as well[0].

The problem is that the Copilot project doesn't claim to be abiding by the license(s) of the ingested code. The reply to licensing concerns was that licensing doesn't apply to their use. So unfortunately they would just claim they could ignore your hypothetical Free³ license as well.

[0]: https://news.ycombinator.com/item?id=34277352


> The problem is that the Copilot project doesn't claim to be abiding by the license(s) of the ingested code. The reply to licensing concerns was that licensing doesn't apply to their use.

I think GitHub is largely correct in their view on licenses. However, I would argue that you could create a stronger legal binding than, say, a GPLv3 license. For instance, you could require and enforce that anyone who wishes to read the repo must sign a legal contract or EULA: "By decrypting this git repo you are agreeing to the following licenses, restrictions, contractual obligations, ..."


Some core designers of Swift, at least Doug Gregor, John McCall, and Dave Abrahams, were very much part of the C++ world. Abrahams mentions Gregor's proposals for C++ here: https://youtu.be/6JYAXADQmNQ?t=1157


You do have to remember to use it, but the thing to keep in mind is that you can pipe a list of ANYTHING into it. Any list of text items you can search through with a text query is fair game.

   git log --oneline | fzf
for example is one of my favorite tricks. Instead of scanning by eye or repeatedly grepping to find something, it's a live fuzzy filter. And depending on how deep you want to go, you can then add key bindings to check out the selected commit, preview the full message, or anything really.
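A sketch of that (assuming a reasonably recent fzf; {1} is fzf's placeholder for the first field of the selected line, here the abbreviated hash, and ctrl-o is just a key I picked):

    # browse history interactively, previewing the highlighted commit on the side
    git log --oneline | fzf \
        --preview 'git show --color=always {1}' \
        --bind 'ctrl-o:execute(git checkout {1})+abort'

The --preview pane shows the full commit while you filter, and ctrl-o checks out whatever is highlighted and exits.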


And don't forget that your selection is printed to stdout:

    git log --oneline | fzf | cowsay
Sky's the limit.
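And since the selection lands on stdout, command substitution gives you a one-line interactive checkout; roughly (awk just strips the selection down to the hash field):

    # pick a commit interactively and check it out
    git checkout $(git log --oneline | fzf | awk '{print $1}')

(If you cancel out of fzf, the substitution is empty and nothing is checked out.)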


> monitoring that many cameras is impractical

Regardless of how many there are, I believe the point is not to monitor them live but to have recordings if the store wants to bring charges for something.


Plagiarism and copyright infringement have some overlap but they're distinct. You can be guilty of plagiarism by taking something in the public domain and presenting it as your own work.


Why would you expect the AI not to be a paid service, like a human tutor is? ChatGPT has a $0 tier, but it's also in a publicity phase, and they already have a "pro" subscription tier. Microsoft/GitHub are likewise charging for extended access to Copilot, and so is Stability with their DreamStudio application.


It'll be cheaper for the school, the parents, or social services to provide you access to an AI tutor than to provide more human instruction. Already, learning games can prevent or cure (some) learning disabilities via engagement and repetition in a cost-effective way.


I think you're mixing up concerns from different contexts. With AI as a generalized goal, where there are entities we recognize as "like us" in quality of experience, yes, we would expect them to have something like our emotions. With AI as a tool, like this Bing search, we just want it to do its job.

Really, though, this is the same standard that we apply to fellow humans. An acquaintance who expresses no emotion is "robotic" and maybe even "inhuman". But the person at the ticket counter going on about their feelings instead of answering your queries would also (rightly) be criticized.

It's all the same thing: choosing appropriate behavior for the circumstance is the expectation for a mature intelligent being.


Well, that's exactly the point: we went from "AIs aren't even intelligent beings" to "AIs aren't even mature" without recognizing the monumental shift in capability. We just keep yelling that they aren't "good enough" while the goalposts for "enough" keep moving.


I'm glad to see this comment. I'm reading through all the nay-saying in this post, mystified. Six months ago the complaints would have read like science fiction, because what chatbots could do at the time was absolutely nothing like what we see today.


No, the goalposts are different according to the task. For example, Microsoft themselves set the goalposts for Bing at "helpfully responds to web search queries".


Who is "we"? I suspect that you're looking at different groups of people with different concerns and thinking that they're all one group of people who can't decide what their concerns are.


The point of that one, to me, isn't that it was wrong about a fact, nor even that the fact was so basic. It's that it doubled and tripled down on being wrong, as parent said, trying to gaslight the user. Imagine if the topic weren't such a basic fact that's easy to verify elsewhere.


Your problem is that you want your tool to behave like you: you think it has access to the same information as you and perceives everything similarly.

If you had no recollection of the past, and were presented with the same information the search collected from the query/training data, do you know for a fact that you wouldn't have given the same answer it did?


So we've created a digital politician?


Yes, they aren't different; they have exactly the same problem, which is why Stable Diffusion, like Copilot, is also the subject of a lawsuit.

