Hacker News

This is all so crazy to me.

I went to school long before LLMs were even a Google engineer's brainfart for the transformer paper, and the way I took exams was already AI-proof.

Everything hand written in pen in a proctored gymnasium. No open books. No computers or smart phones, especially ones connected to the internet. Just a department sanctioned calculator for math classes.

I wrote assembly and C++ code by hand, and it was expected to compile. No, I never got a chance to try to compile it myself before submitting it for grading. I had three hours to do the exam. Full stop. If there was a whiff of cheating, you were expelled. Do not pass go. Do not collect $200.

Cohorts for programs with a thousand initial students had less than 10 graduates. This was the norm.

You were expected to learn the gd material. The university thanks you for your donation.

I feel like I'm taking crazy pills when I read things about trying to "adapt" to AI. We already had the solution.



> Cohorts for programs with a thousand initial students had less than 10 graduates. This was the norm.

And why is this a flex exactly? Almost sounds like fraud. Get sold on how you'll be taught well and become successful. Pay. Then be sent through an experience that filters so severely, only 1% of people pass. Receive 100% of the blame when you inevitably fail. Repeat for the other 990 students. The "university thanks you for your donation" slogan doesn't sound too hot all of a sudden.

It's like some malicious compliance take on both teaching and studying. Which shouldn't even be surprising, considering the circumstances of the professors e.g. where I studied, as well as the students'.

Mind you, I was (for some classes) tested the same way. People still cheated, and grading stringency varied. People still also forgot everything shortly after wrapping up their finals on the given subjects and moved on. People also memorized questions and compiled a solutions book, and then handed them down to next year's class. Because this method does jack against that on its own. You still need to keep crafting novel questions, vary them more than just by swapping key values, etc.


If teaching is the goal, a 99% failure rate seems counterproductive.


I'd wager the "Cohorts for programs with a thousand initial students had less than 10 graduates" statement is deceptive, if not outright false.

Perhaps lifetimerubyist means "1000 students took the mandatory philosophy and ethics 101 class, but only 10 graduated as philosophy majors"


I believe certain European countries have or had free universities which instead filter students with incredibly difficult courses. Thousands might enter because both tuition and board are free and they would like a degree, but the university ensures that only a small group makes it to second year. I believe the filtering is less intense in later years, since the job has already been done by that point.


Unless you're thinking of huge online courses like Udacity/Coursera, I don't think that's really a thing?

If it is, I'd be fascinated to learn more.

I mean, the logistics would be pretty wild - even a large university's largest lecture theatres might only have 500 seats. And they'd only have one or two that large. It'd be expensive as hell to build a university that could handle multiple subjects each admitting over a thousand students.


At least in Belgium it's quite common for a lot of students to fail the first year (partly due to the difficulty, partly due to partying instead of studying). But it's not like it's really free: the tuition is cheap, but the accommodation is expensive. I also don't think it's made particularly difficult on purpose to filter out students; it's just that it's not overly expensive and a lot of people are unsure about what to study.


According to [1], at one Belgian university 61.8% of students reached a milestone within 2 years (with 41.4% reaching it within 1 year).

That's quite a high non-completion rate - but it's nowhere near 99%.

[1] https://nieuws.kuleuven.be/en/content/2023/42-6-of-new-stude...


> And why is this a flex exactly? Almost sounds like fraud.

Do you think you're just purchasing a diploma? Or do you think you're purchasing the opportunity to gain an education and potential certification that you received said education?

It's entirely possible that the university stunk at teaching 99% of its students (about as equally possible that 99% of the students stunk at learning), but "fraud" is absolute nonsense. You're not entitled to a diploma if you fail to learn the material well enough to earn it.


If you have a <1% pass rate from beginning to end, that strongly suggests your admissions criteria are intentionally low enough to admit students who are unprepared for the program, so that you can take their money.

You could easily raise the bar without sacrificing quality of education (and likely you'd improve it just from the improvement in student:teacher ratio).


Exactly that. Also, I experienced a situation where a free uni (eastern Europe) had low admission criteria and then had a "cleaning" math course, which 80%-90% failed. School still got paid for the number of students admitted, not those who passed.

In another European country, schools get paid for students that passed.


I don't think one applies to university expecting they're purchasing a diploma, nor that they should be magically absolved of putting in effort to learn the material. What I do think is that the place they describe sounds an awful lot like people being set up for failure, which raises the question of why that might be. I should probably clarify that I wasn't particularly serious about my fraud suggestion (it was more of a jab), as that doesn't seem to have come through.

If teaching was so simple that you could just tell people to go RTFM then recite it from memory, I don't know why people are bothering with pedagogy at all. It'd seem that there's more to teaching and learning than the bare minimum, and that both parties are culpable. Doesn't sound like you disagree on that either.

> you're purchasing the opportunity to

We can swap out fraud for gambling if you like :) Sounds like an even closer analogy now that you mention!

Jokes aside though, isn't it a gamble? You gamble with yourself that you can [grow to] endure and succeed or drop out / something worse. The stake is the tuition, the prize is the diploma.

Now of course, tuition is per semester (here at least, dunno elsewhere), so it's reasonable to argue that the financial investment is not quite in such jeopardy as I painted it. Not sure about the emotional investment though.

Consider the Chinese Gaokao exam, especially in its infamous historical context between the 70s and 90s. The number of available seats was far lower than the number of applicants [0]. The exams were grueling. What do you reckon, was it the people's fault for not winning an essentially unspoken lottery? Who do you think received the blame? According to a cursory search, the individuals and their families (I wasn't there, so I can't know) received the blame. And no, I don't think in such a tortured scheme it is the students' fault for not making the bar.

If there are fewer seats than there is demand for, then that's overbooking, and you, the test-authoring and test-conducting authority, are biased to artificially induce test failures. It is no longer a fair assessment, nor a fair dynamic. Conversely, passing is no longer an honest signal of qualification. Or rather, not passing is no longer an honest signal of being unqualified. And this doesn't have to come from a single test; it can be implemented structurally too, so that you shed people along the way. Which is what I'm actually alluding to.

[0] ~4.8%, so ~95% of people failed it by design: https://en.wikipedia.org/wiki/Class_of_1977%E2%80%931978_%28...


> If teaching was so simple that you could just tell people to go RTFM then recite it from memory, I don't know why people are bothering with pedagogy at all. It'd seem that there's more to teaching and learning than the bare minimum, and that both parties are culpable. Doesn't sound like you disagree on that either.

I do not! A situation where roughly 1% of the class is passing suggests that some part of the student group is failing, and also that there is likely a class design issue or a failure to appropriately vet incoming students for preparedness (among, probably, numerous other things I'm not smart enough to come up with).

And I did take issue with the "fraud" framing; apologies for not catching your tone! I think there is a chronic issue on social media of students thinking they deserve good grades, or deserve a diploma simply for showing up, and I probably read that into your comment where I shouldn't have.

> Jokes aside though, isn't it a gamble?

Not at all. If you learn the material, you pass and get a diploma. This is no more a gamble than your paycheck. However, I think that also presumes the university accepts only students it believes are capable of passing its courses. If you believe universities are over-accepting students (and I think the evidence says they frequently are not, in an effort to look like luxury brands, though I don't have a cite at hand), then I can see thinking the gambling analogy is correct.


> I think there is a chronic issue of students thinking they deserve good grades, or deserve a diploma simply for showing up, in social media and I probably read that into your comment where I shouldn't have.

Yeah, that's fine, I can definitely appreciate that angle too.

As you can probably surmise, I've had quite some struggles during my college years specifically, hence my angle of concern. It used to be the other way around, I was doing very well prior to college, and would always find people's complaints to be just excuses. But then stuff happened, and I was never really the same. The rest followed.

My personal sob story aside, what I've come to find is that while yes, a lot of the things slackers say are cheap excuses or appeals to fringe edge-cases, some are surprisingly valid. For example, if this aforementioned 99% attrition rate is real, that is very very suspect. Worse still though, I'd find things that people weren't talking about, but were even more problematic. I'll have to unfortunately keep that to myself though for privacy reasons [0] [1].

Regarding grading, I find grade inflation very concerning, and I don't really see a way out. What affects me at this point though is certifications, and the same issue is kind of present there as well. I have a few colleagues who are AWS Certified xyz Engineers for example, but would stare at the AWS Management Console like a deer in the headlights, and would ask exceedingly stupid questions. The "fee extraction" practice wouldn't be too unfamiliar for the certification industry either - although that one doesn't bother me much, since I don't have to pay for these out of my own pocket, thankfully.

> If you learn the material, you pass and get a diploma. This is no more a gamble than your paycheck

I'd like to push back on this just a little bit. I'm sure it depends on where one lives, but here you either get your diploma or tough luck. There are no partial credentials. So while you can drop out (or just temporarily suspend your studies) at the end of semester, there's still stuff on the line. Not so much with a paycheck. I guess maybe a promotion is a closer analog, depending on how a given company does it (vibes vs something structured). This is further compounded by the social narrative, that if you don't get a degree then xyz, which is also not present for one's next monthly paycheck.

[0] What I guess I can mention is that I generally found the usual cycle of study season -> exam season to be very counter-productive. In general, all these "building up hype and then releasing it all at once" type situations were extremely taxing, and not for the right reasons. I think it's pretty agreeable at least that these do not result in good knowledge retention, do not inspire healthy student engagement, nor are actually necessary. Maybe this is not even a thing in better places, I don't know.

[1] I have absolutely no training in psychology or pedagogy, so take this with a mountain of salt, but I've found that people can be not just uninterested in learning, but grow downright hostile to it, often against their own self-recognized best interests. I've experienced it on myself, as well as seen it with others. It can be very difficult to snap someone out of such a state, and I have a lingering suspicion that it kind of forms a pipeline, with the lack of interest preceding it. I'm not sure that training and evaluating people in such a state results in a reasonable assessment, not for them, nor for the course they're taking.


In the modern era, you are purchasing a diploma. I witnessed dozens of students blatantly cheat without any consequence. We all got the same degree.

Colleges exist to collect tuition, especially from international students who pay more. Teaching anything at all, or punishing cheating, just isn’t that important.


I basically agree with the thrust of what you're saying, but also:

> I wrote assembly and C++ code by hand, and it was expected to compile. No, I never got a chance to try to compile it myself before submitting it for grading.

Do you, like, really think this is the best way to assess someone's ability? Can't we find a place between the two extremes?

Personally, I'd go with a school-provided computer with a development environment and access to documentation. No LLMs, except maybe (but probably not) for very high-level courses.


The safe middle ground still does not involve a computer.

Lots of my tests involved writing pseudocode, or "just write something that looks like C or Java". Don't miss the semicolon at the end of the line, but if you write "System.print()" rather than "System.out.println()" you might lose a single point. Maybe.

If there were specific functions you need to call, it would have a man page or similar on the test itself, or it would be the actual topic under test.

I hand wrote a bunch of SQL queries. Hand wrote code for my Systems Programming class that involved pointers. I'm not even good with pointers. I hand wrote Java for job interviews.

It's pretty rare that you need to actually test whether someone can memorize syntax; that's the entire point of modern development environments.

But if you are completely unable to function without one, you might not know as much as you would hope.

The first algorithms came before the first programming languages.

Sure, it means you need to be able to run the code in your head and mentally "debug" it, but that's a feature.
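As a concrete illustration (a hypothetical snippet, not from any actual exam; the courses described used C or Java, but Python reads the same way here), the kind of code you'd be asked to trace by hand might look like:

```python
# A typical "run it in your head" exam question: what does this
# return for n = 5? You trace total and i without a computer.
def sum_of_squares(n):
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total

# Hand trace for n = 5: 1 + 4 + 9 + 16 + 25 = 55
print(sum_of_squares(5))  # prints 55
```

The point isn't the syntax; it's whether you can track the loop variable and the accumulator mentally, which is exactly what a compiler can't do for you.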

If you could not manage these things, you washed out in the CS101 class that nearly every STEM student took. The remaining students were not brilliant, but most of them could write code to solve problems. Then you got classes that could actually teach and test that problem solving itself.

The one class where we built larger apps, more akin to actual jobs, could have been done entirely in the lab with locked-down computers if need be. But the professor really didn't care if you wanted to fake the lab work: you still needed to pass the book learning for "Programming Patterns", which people really struggled with; you still needed to give a demo and presentation; and you still needed to demonstrate that you could read some requests from a "customer" and turn them into features, requirements, and UX.

Nobody cares about people sabotaging their own education except in programming because no matter how much MBAs insist that all workers are replaceable, they cannot figure out a way to actually evaluate the competency of a programmer without knowing programming. If an engineer doesn't actually understand how to evaluate static stresses on a structure, they are going to have a hard time keeping a job. Meanwhile in the world of programming, hopping around once a year is "normal" somehow, so you can make a lot of money while literally not knowing fizzbuzz. I don't think the problem is actually education.

Computer Science isn't actually about using a laptop.


Maybe the middle space doesn't involve a compiler, but I really think computers should be allowed on tests, for a different reason: the computer makes it possible to write out of order. You can go back and add to the beginning without erasing and rewriting everything.

This applies to prose as much as code. A computer completely changes the experience of writing, for the better.

Yes, obviously people made do with analog writing for hundreds of years, yadda yadda, I still think it's a stupid restriction.


What do you mean? I have been writing out of order in my exams all the time. That’s what asterisks and arrows are for!


To a very limited extent, yes. But you'd need a lot of arrows to replicate what can be done on a computer. The computer completely frees you from worrying about space.


In my CS curriculum we learned SQL in theory only. We learned the relational model, normalization, joins, predicates, aggregation, etc. all without ever touching an actual database. In the exams we wrote queries in a paper "blue book" which was graded by teaching assistants.
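For illustration, a blue-book-style exercise of the kind described might combine a join with aggregation. The schema, names, and data below are hypothetical, checked here against SQLite only to show that a hand-written query of this shape is valid:

```python
import sqlite3

# Hypothetical two-table schema of the sort used in pen-and-paper
# relational exercises: students and their exam results.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE result  (student_id INTEGER REFERENCES student(id),
                          course TEXT, grade INTEGER);
    INSERT INTO student VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO result  VALUES (1, 'DB', 90), (1, 'OS', 70),
                               (2, 'DB', 80);
""")

# The kind of query one would write by hand in a blue book:
# join, aggregate, filter on the aggregate.
rows = conn.execute("""
    SELECT s.name, AVG(r.grade) AS avg_grade
    FROM student s JOIN result r ON r.student_id = s.id
    GROUP BY s.id
    HAVING AVG(r.grade) >= 80
    ORDER BY s.name
""").fetchall()
print(rows)  # [('Ada', 80.0), ('Grace', 80.0)]
```

Grading such a query on paper tests exactly the theory bits listed above (joins, predicates, aggregation) without needing a live database at all.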


I had philosophy class and we'd lose points for spelling mistakes in our essays. (Handwritten, no computer allowed)


What's crazy to me is that you took that as the gold standard for educational evaluation.

For comparison we had lengthy sessions in a jailed terminal, week after week, writing C programs covering specific algorithms, compiling and debugging them within these sessions and assistants would follow our progress and check we're getting it. Those not finishing in time get additional sessions.

Last exam was extremely simple and had very little weight in the overall evaluation.

That might not scale as well, but that's definitely what I'd long for, not the Chuck Norris-style cram-school exam you're describing.


I've had colleagues argue (prior to LLMs) that oral exams are superior to paper exams for diagnosing understanding. I don't know how to validate that statement, but if the assumption is true then there is merit in finding a way to scale them. Not saying this is it, but I wouldn't say it's fair to just dismiss oral exams entirely.


I think oral exam where you have a student explain and ask questions on a project they did is really good for judging understanding. The ones where you are supposed to memorise the answers to 15 questions where you will have to pick one at random, not as much imo.


Yes, I hate oral exams, but they are definitely better at getting a whole picture of a person's understanding of topics. A lot of specialty boards in medicine do this. To me, the two issues are that it requires an experienced, knowledgeable, and empathetic examiner, who is able to probe the examinee about areas they seem to be struggling in, and paradoxically, its strength is in the fact that it is subjective. The examiner may have set questions, but how the examinee answers the questions and the follow-up questions are what differentiate it from a written exam. If the examiner is just the equivalent of a customer service representative and is strictly following a tree of questions, it loses its value.


Interviews have the same issues. But if you do anything more than read off templated questions like a robot, you can be accused of discrimination.

It is a sad world we live in.


Universities are not just places for students to learn. They are also places where young faculty, grad students and teaching assistants learn to become teachers and mentors. Those are very difficult skills to learn, and slogging through a lot of hands on teaching and mentoring is necessary to learn them. You can't really become a good classroom teacher either without grading your students yourself and figuring out what they learned and didn't.


Seems like the equivalent of claiming white board coding is the best way to evaluate software development candidates. With all the same advantages and disadvantages.


Admitting 1000 students to get 10 graduates means there are morons in admissions doing zero vetting to make sure the students are qualified.


Absolutely not morons. If the goal is to maximize tuition collected while still maintaining a reputation of not being a diploma shop, this is the obvious solution. The 20% that survives the first year is worth keeping around, to hire later into the companies the teaching staff own, or to collect referral bonuses on if working for a multinational.


True, outright fraud is another adequate explanation.


There's either a 0 missing there or something pretty weird at that uni. I think the rest of the comment is very valid if we ignore this point.

My experience is the same except I think ~50% or so graduated[0].

[0]: Disclaimer that my programme was pretty competitive to get into, which is an earlier filter. Statistics looked worse for programmes at similar level with less people applying.


Or that there's morons teaching.


I simply don't believe your university program had a 99% failure rate. Such a university should be shut down and sold for parts.


The example above may have been a bit misleading imo. In some countries the filtering process is put inside the program itself rather than in state wide exams, entrance exams or amount of tuition fees. There is always a filtering process somewhere. Not sure where OP was though.


At any private university, yes. I have seen state-supported universities in certain countries with very high failure rates for certain programs (I'm assuming 99% was an exaggeration for something more like "the vast majority failed").


In my state uni 75% was normal a couple decades ago, 50% after first year. 99% is extreme, but I can imagine that being true with uni leadership on board.


TFA's case involved examinations about the student's submitted project work. It's not the same thing. Even for a more traditional examination with no such context attached one might still want to rely on AI for grading. (Yeah, I know, that comes across as "the students are not allowed to use AI for cheating, but the profs are!".)

Also, IMO oral examinations are quite powerful for detecting who is prepared and who isn't. On the down side they also help the extroverts and the confident, and you have to be careful about preventing a bias towards those.


> On the down side they also help the extroverts and the confident, and you have to be careful about preventing a bias towards those.

This is true, but it is also why it is important to get an actual expert to proctor the exam. Having confidence is good and should be a plus, but if you are confident about a point that the examiner knows is completely incorrect, you may possibly put yourself in an inescapable hole, as it will be very difficult to ascertain that you actually know the other parts you were confident (much less unconfident) in.


You could argue that for fields like law, medicine and management extroversion and confidence are important qualities.


Quite.


I'm fairly skeptical of tests that are closed-book. IMO the only reasons to do so are if 1) the goal is to test rote memorization (which is admittedly sometimes valuable, especially depending on the field) or, perhaps more commonly, 2) the test isn't actually hard enough, and the questions don't require as much "synthesis" as they should to test real understanding.


> Cohorts for programs with a thousand initial students had less than 10 graduates. This was the norm.

You have a very weird idea of education if you see a teaching method that results in a 99% failure rate as good. Do you imagine a professional turning out work that was 99% suboptimal?


So did I, but a big difference today is the number of students, and how many of them are doing non-traditional programs. Lots and lots of online-only programs, offered through serious universities.

The old ways do not scale well once you pass a certain number of students.


I currently go to school for engineering, and it is the same way.



