Hacker News

This is a lot more widespread than "some kids on Reddit". Maybe 30% of the kids in my son's class are using this or related tools.



Any ideas on how to solve this issue of kids cheating with GPT-3 essays?


Realistically? Grade based on the thought process and the validity of the argument, not on whether there are spelling or grammar mistakes. GPT-3 is still pretty incoherent over the span of enough text.


Kids' writing can also be very incoherent, sometimes more so. But incoherent writing still counts as turned-in work and earns points and teacher feedback, whereas GPT-3-generated text should not.


This will not be the case in 2-3 years.


Same way you solve the issue of kids cheating by having someone else write their essay.


I honestly don't think it's possible to solve, other than by increasing the amount of evaluation that's done under locked-down conditions.

I cannot imagine a detection mechanism that could not itself be defeated by some tweaks to the prompts being used to generate the essays.

It's effectively the same problem as "prove that this kid didn't get their friend/cousin to write the essay for them".


It's the parents' responsibility. No one outside the household can do anything about it imo.

Using AI to write will cause the same issues as:

- phones: some people no longer try to remember directions, phone numbers, or addresses

- calculators: some people cannot do easy math

- computers: some people cannot write with a pen or spell without spellcheck


Other than the writing-with-a-pen part, that pretty much sums me up, and I grew up well before all this fancy supercomputer-in-your-pocket stuff.


Make them write anything gradable in-person, while being monitored by a teacher.

Cheaters gonna cheat, no matter what. But this will at least get the group back to pre-conversational-AI standards.


Test the kids on their own essays, for example? Maybe this could itself be automated with GPT-3?

The highest-quality answer involves skilled teachers with enough time who know and understand their students. (Actually the very highest might involve personal tutors but let's leave that aside.)

Going down a few steps, you might combine the automated approach with skilled teachers, and maybe add human editors who can do support work asynchronously?


I'm not super opposed to it.

Watching my son try it, he spends more time reading the generated essay and correcting its mistakes than he would spend writing one himself. The checking process is very similar to marking, and I think it's possible he's learning more this way.

(Also, he's madly trying to automate fact checking which is doing no harm to his programming at all!)


Using GPT-3 might be a better skill to have.


You mean clicking a button?


No, I mean managing an AI to achieve an arbitrary task. Prompting, iterating, filtering: they all require high-level input from the user. An LLM is a complex beast, not easy to use (yet).
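For what it's worth, that prompting/iterating/filtering workflow is basically a loop. Here's a minimal sketch; every name in it is hypothetical, and the `generate` function is a stand-in for a real LLM API call:

```python
def generate(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g. an HTTP request
    to a hosted model). Here it just returns a canned draft."""
    return f"Draft essay for: {prompt}"


def passes_filter(text: str, min_words: int = 3) -> bool:
    """A trivial quality gate. A real filter would check facts,
    tone, structure, and length, not just a word count."""
    return len(text.split()) >= min_words


def refine(prompt: str, attempt: int) -> str:
    """Tighten the prompt between iterations based on what failed."""
    return f"{prompt} (revision {attempt}: be more specific and cite sources)"


def run_loop(prompt: str, max_iters: int = 3) -> str:
    """Generate, filter, and re-prompt until a draft passes
    or the attempt budget runs out."""
    draft = ""
    for attempt in range(1, max_iters + 1):
        draft = generate(prompt)
        if passes_filter(draft):
            return draft
        prompt = refine(prompt, attempt)
    return draft
```

The point isn't the toy functions, it's that each step (what to ask, what counts as "good enough", how to revise) is a judgment the user has to supply.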


Students who can't write well can't tell whether the generated output is good enough either.



