The first stage of the CCC is written in pretty heterogeneous conditions. There are several thousand competitors across the world. Students use a language of their choice and their work is graded by their teacher against a rubric sent out by the contest organisers. I recall that the teacher has to send the work to Waterloo, postmarked within a couple of days after the contest.
Sometime in the summer, the University of Waterloo then flies out and puts up the top 15 or 20 students from that more informal contest to compete in what was the "CCC Stage 2" and is now the "Canadian Computing Olympiad." This second contest is run under more controlled conditions, and its results largely determine which four students will represent Canada at the International Olympiad in Informatics.
The first stage has always relied on teachers' honesty. The contest organisers mail contest packets to teachers in advance of the contest. Teachers have some time between contest day and when the results need to be mailed back to Waterloo. Students compare notes and discuss the contest immediately after it ends.
Nonetheless, in my experience (more than 15 years ago), all or almost all of the students who make it to the second stage deserved to be there. I hope that continues to be true.
Take this article with a grain of salt. Much of its argument quotes from Turnitin, which was an unethical scourge even in its earlier days, ruining many students' lives with traditional information retrieval before pivoting more recently to ruining students' lives with "AI" instead[1].
Why isn't it possible to run these contests in a computer room with no access to the Internet? Wouldn't that, plus a minimal amount of supervision, remove the cheating?
Am I missing something?
Maybe schools don't have a computer room these days?
The second stage (on-site, ~20 competitors) is done like that.
The first stage (this one, with ballpark 10000 competitors) is distributed and done in very heterogeneous environments, and it relies on local high school teachers taking time out of their day to set up whatever's needed, proctor the contest, and grade the results.
So the problem is that the teachers were dishonest or not proctoring well, right? It doesn't sound to me like AI was the problem per se. Before LLMs you could have had participants using Stack Overflow to cheat.
> Before LLMs you could have had participants using Stack Overflow to cheat.
"could have", yes, but mostly didn't until LLMs showed up.
The problem is that LLMs lower the bar to cheating: using Stack Overflow still requires a pretty solid knowledge of the domain (and in a professional coding context, usually wouldn't even count as cheating). The LLMs also let you do this much faster.
You’d also have to make sure they were using clean computers without any local models installed, and coding models are pretty easy to run without high end GPUs.
I don’t know if I would consider this modern coding with modern tool chains anymore, though. It might still be a nice competition to run, but it is increasingly removed from what it means to be a professional programmer (I’ve stopped caring about these competition wins on resumes a long time ago).
The decision to cancel the release of this year’s results will weigh most on Grade 12 students who won’t get a chance to do it again next year, Shin said. But he didn’t think the onus was on the university. “It’s obviously a cheater’s fault, right? They’re the ones who cheated. They’re the one to break the rules. It’s their fault morally and logically.”
... I guess so, but how is it the fault of someone who did not cheat? They are lumped in together, as if it were their fault too.
Not that I don't get that, but still.
I would think the problem is that they're highly confident a good number of students cheated, but it's harder to decide on an individual basis.
E.g. if half the students' submissions had an 80% chance of being AI-generated then something is very fishy in the population - perhaps a 99% chance a quarter of the students cheated - but 80% is not really high enough to punish an individual, especially since AI detectors aren't very reliable.
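That population-level inference can be sketched with a few lines of arithmetic. A minimal Python illustration, using hypothetical numbers (100 competitors, half of them flagged, and an assumed 80% true-positive rate per flag; none of these figures come from the actual contest):

```python
from math import comb

def prob_fewer_than(threshold, flagged, p_true):
    """P(fewer than `threshold` actual cheats) when each of `flagged`
    submissions is independently a real cheat with probability p_true."""
    return sum(comb(flagged, i) * p_true**i * (1 - p_true)**(flagged - i)
               for i in range(threshold))

N = 100           # hypothetical number of competitors
flagged = N // 2  # half the submissions flagged by the detector
p_true = 0.8      # assumed per-flag true-positive rate

# Even granting each flag only an 80% chance of being right, the odds
# that fewer than a quarter of all N students cheated are vanishingly small.
p = prob_fewer_than(N // 4, flagged, p_true)
print(f"P(fewer than {N // 4} students cheated) = {p:.2e}")
```

The individual uncertainty (80% per flag) and the near-certainty about the population coexist, which is exactly the bind the organisers are in.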
For ACM and IEEE programming contests they used to be conducted "offline". Not sure how they're done these days, but participants would have a computer connected to a LAN and shared resources for themselves and their team (one or more computers depending on the particular format). Put students in a room with a disconnected computer and an IDE or a computer connected only to a LAN. Submissions can be handled either with a local contest server or sneaker net to the one networked computer managed by the proctor.
It's also possible to severely lock down the network connectivity of the computers so they cannot access the internet at large but only the contest servers during the competition if there is an online submission required.
As a notorious bastard I used to run C# code tests by giving someone a flattened Windows 7 install with the .NET SDK and nothing else. Also no network connection. How the potential engineer dealt with it was interesting.
We had a couple of people shrug it off and do it in notepad and csc quite happily. They were of course hired.
That is a thing in the ACM-ICPC. Typically competitors would come in with printouts of code from times they'd implemented a tricky algorithm like the Hungarian Algorithm or the Blossom Algorithm in the past, so that having it on hand would jog their memory and leave them constrained only by typing speed once they'd figured out how to adapt it to their problem.
Which is in any case a better test of the core of coding, which is about figuring out algorithms and data structures, not writing glue code to connect various APIs together.
This challenge, as described from some of the linked pages, is exactly a challenge on algorithms and data structures not on gluing APIs together. The problem with that is that it's exactly the kind of challenge that LLMs do well at because there's a glut of material to train on.
The challenge is figuring out how to provide a novice (high school students) with a skill-appropriate challenge to assess them on, one that is also not trivially solved by the use of an LLM, and you likely can't. Or you go the IT route and restrict what's available on and from the computers (see my comment about how ACM and IEEE contests used to do things; they may still run that way, but I'm uninvolved so don't know).
It's a solved problem. Look at medical schools: we're not pumping out chatgpt doctors because we have actual in-person test proctoring that includes checking biometrics, taking photographs of the individual for posterity, oral examinations, etc.
Right now educational institutions are facing budget shortfalls and responding by expanding remote education opportunities while compromising the quality of education, not just because of cheating but because the content and social circumstances are often inferior. We are making short-term decisions as an easy fix to budget issues that will have substantial long-term consequences.
The result is that the value of an American college degree is plummeting (ironically while costs still are increasing). This is part of the reason GenZ are starting to revolt against various institutions - they are the ones being screwed the most.
The right path forward is acknowledging this insanity in the budgeting process, employment process, and loan markets. We won't do that though because it means admitting how much has been fucked up - institutions will double down and triple down in the face of DOGE threats.
> The result is that the value of an American college degree is plummeting
This Canadian-university-run high school coding contest isn't part of an American college degree. In fact it has no relation to any part of that description. It isn't American. It isn't related to a college at all by the Canadian understanding of that word (we distinguish between college and university, and Waterloo is the latter). It isn't for university (or college) students or tied to a university (or college) degree.
Legally yes, but if you’d had to interact with large numbers of those entering university for the first time you’d know they very much are not mentally adults yet.
All of which is beside the point since this conversation is focusing on the most prestigious universities around. Most universities and colleges in North America don’t need such exams since they already accept basically anyone who applies.
I still don't understand why taking an in-person exam is considered some difficult ordeal that we cannot expect younger people to endure. I have no idea what culture you come from where this is even a talking point, let alone a debate. It has never occurred to me, in any context, that having an exam proctored means I need to be mature. I took many hours-long proctored exams long before I was 18. What does being mature have to do with having a literal adult in the room? If anything it requires less maturity, because someone else is providing the oversight.
I’m not arguing against proctored exams; I’m arguing against:
> proctoring that includes checking biometrics, taking photographs of the individual for posterity, oral examinations, etc.
While I personally do believe tests like that would be better than the kinds of exams students take while at university, I don’t see it as a realistic option for entrance exams. That sort of thing is simply not part of North American educational culture to any large degree. And for medical schools, people practice and study and drill for months in order to prepare themselves for it.
Also, for most colleges and universities it simply makes no sense to introduce such an exam (or any at all) since they accept everyone who applies; they need to in order to remain financially viable.
Ah, now I see your point. I suspect the culture will change with the times, assuming people want to give any credibility to universities moving forward. Cheating was so rampant where I went (decades ago) that I already put very little weight on degrees.
I hope so. Though the economic realities often take precedence. The University of Waterloo is one of Canada's most prestigious universities, so they might be able to create more rigorous entry requirements (though that does raise the question of "are you really teaching students to be competent, or simply pre-filtering for already-competent students"). But they can only do this if it doesn't impact their ability to stay solvent.
I actually don't believe that the CCC gives you a significant benefit in admissions to the Faculty of Math compared to other factors like ECs and your actual average.
Sue the university on the grounds that this was a reasonably foreseeable event with proven solutions, i.e. properly proctored, in-person testing. "We're too cheap to do it" isn't an excuse.
The competitions are very useful, both in giving students a way to demonstrate ability that isn't visible within a school curriculum (these students are way more advanced) and in motivating them with something to work towards and train on.
I have done this competition (I think) and what you're suggesting just is not reasonable. This is a distributed competition administered locally on a volunteer basis. This isn't some formalized test, and trying to get Waterloo to pay for all the local proctors would result in this competition shutting down, which would be a net negative. I don't have insight into how the competition has changed in the 20 years since I took it, so I don't know why they don't have local proctors or lost faith in them. Maybe COVID-era adaptations?
Honestly, I think the better solution is to adjust the difficulty to be AI-proof but it may be hard to do so at this level of academia.
If it can't work as is, and it can't work in a way that removes cheating (which the university feels invalidates the results for all participants) then it can't work. Getting a load of students to prepare and participate only to pull the rug out from under them isn't acceptable. That's a waste of their limited time and energy during a critical period of their lives.
You're describing one small part of the family of damages (in the legal sense), and that isn't how common law works. To draw from the rich vocabulary of HN, think of the opportunity cost imposed by the time wasted on preparation and the test itself; that time also has value beyond the entrance fee.
Putting that aside, you don't just sue for money, you can sue for different outcomes and injunctive relief. For example you could sue to have the results released as planned.
Under common law courts don't award speculative damages - i.e. opportunity cost. If you had concrete damages, like you bought a computer just to participate in this contest, you might theoretically have a claim for the cost of the computer... that's simply not realistically the case though.
Injunctive relief for not publishing known to be incorrect test rankings? Not likely.
A FOI request might get you access to the rankings if you really cared... I'm not quite sure what the contours of FIPPA (roughly equivalent to America's FOIA) are with regards to our universities, but I believe it generally applies to them. At the same time it could easily fall within privacy limitations.
"Why are you so obsessed with suppressing the truth?"
What? What I said is that I feel for the innocent students there, not that there are no cheaters? Please, don't put words into my mouth that I have not said.
Have you tried using code generated by an LLM? It rarely runs at all until you fix it yourself.
It reminds me of how ezines would publish exploit source code with slight errors so script kiddies couldn’t run it if they didn’t know how to fix it.
We’re talking about coding competitions here. These models have been fed the entirety of ACM, SPOJ, LeetCode, and whatever other competitions they can get their hands on. They are good at constrained coding competition tasks, and certainly strong enough to place highly in a competition meant for high school students.
> Have you tried using code generated by an LLM? It rarely runs at all until you fix it yourself.
In College Humor's Brennan's exasperated tone:
THEN WHAT ARE WE DOING!?
Like Jesus fuck. I feel insane. We scraped the internet, broke tons of trust in our community, fed tons of code we didn't ask for into an industrial shredder, and worked out nonsense generators that produce awful code that barely works, and apparently sometimes just doesn't, burning enough electricity to power several small countries in the process and lighting billions of dollars on fire.
What, and I can't stress this enough, the fuck are we doing anymore. I swear to God the entire valley needs to be pushed into the ocean and humanity will lurch forward 200 years.
It seems that these tests had a rule about not using generative AI from the start. So it would be more like someone entering a fingerpainting contest and using a paint brush.
EDIT: Oh, I misread, that was the USACO competition which had the explicit rule.
Bows and arrows are still in the literal Olympics; there is clearly appetite for seeing what humans can do with less powerful tools. Actually, the fact that there are races on foot, bike, and car is probably an interesting analogy; I for one would be a fan of having a spread of contests from "you may use only pen and paper" to "literally any tool you can use is fair game".
The former is a measure of a real skill. I could prompt my sister through driving a car, but they wouldn't let me send her to take my driving test and then give me a license.
I am genuinely curious why people like this are still making this argument long past the point where anyone who is even slightly informed realizes it's garbage.
[1]: https://www.washingtonpost.com/technology/2023/08/14/prove-f... (archived/paywall bypass) https://archive.is/BoVlG
Outside of being highly ironic, this is how you get a GAN arms race :)
There's no solution at this point for this year.
Return to tradition[1].
[1] https://www.ibm.com/history/punched-card
It was referred to as "book code."
I love this comment because in 20 years coding with no AI will be "coding like it's 2020 again".
> Look at medical schools, we're not pumping out chatgpt doctors because we have actual in-person test proctoring that includes checking biometrics, taking photographs of the individual for posterity, oral examinations, etc.
Extracurricular contests for high school students do not have the same funding or standards as medical exams to become a doctor for obvious reasons (in Canada, where I participated in these, but presumably anywhere).
Yep. Related, but the software engineering industry did actually have (for a brief time) a PE Exam but it was sadly discontinued.
https://ncees.org/ncees-discontinuing-pe-software-engineerin...
There's still a PE Electrical and Computer: Computer Engineering exam, but it's definitely more focused on hardware.
https://ncees.org/exams/pe-exam/electrical-and-computer
Though most software developers actually go through CS, not Software Engineering, and that doesn't get you the PE designation.
https://www.peo.on.ca/apply/become-professional-engineer/tec...
Medical schools don’t take in 18 year olds in North America. They can have more in-depth entrance exams since the people coming in are more mature.
> This is the reason GenZ
Or perhaps it may be the misinformation and rhetoric being used to inflame tried and true anti-intellectual sentiments.
Why do you need to be more mature to take an exam? We're not talking about toddlers - many (most?) of them are legal adults.
I truly do not understand your point at all.
This isn't even an entrance exam. It's an extracurricular event for high school students that people primarily write for fun...
> since they accept everyone who applies;
This on the other hand doesn't describe Waterloo CS at all.
A) I know this. B) If it gives you a significant benefit to admissions, it will quickly become a metric to be gamed.
> doesn’t describe Waterloo CS
Sure and I’ve said as much in another comment, but the person I replied to was making a broad comment about higher education in general.
https://cccgrader.com/rules.pdf
It was against the rules.
This is some "the earth is flat" shit in 2025.
So, really, they're screwing over more than two groups of people at the same time. I guess that is some kind of progress.