
> This opinion seems totally backwards to me.

I agree.

> I'm not sure what you think peer-reviewed means?

Posting to HN is a form of peer review, typically far better than the form of "peer review" co-opted by journal publishers.



> Posting to HN is a form of peer review, typically far better than the form of "peer review" co-opted by journal publishers.

This is a rather self-aggrandizing view, and I think it speaks to the level of ego that underpins a lot of the discussion on here.


There are a lot of junk comments on HN, but there are also a lot of junk reviews at top conferences like CVPR, ICCV, and NIPS. The system is just noisy. I've had plenty of inane reviews that clearly break the reviewer guidelines, and the ACs do nothing [0,1].

Also, I want to remind everyone that ML uses conferences as its main publishing mechanism, not journals. While journals like JMLR exist, they're not where most papers are targeted.

Maybe we just need to let researchers evaluate works on their merits and not concern ourselves with things like popularity, prestige, and armchair experts' opinions. The latter seems antiscientific to me. We need to recognize that the system is noisy, and Goodhart's law shows us we aren't optimizing for merit.

[0] As an example, I once received a strong reject consisting of two lines of text: one stating that the work wasn't novel (with no further justification) and the other noting a broken citation link to the appendix. No comments about the actual content.

[1] As another example, I've had all the reviewers complain because I didn't compare one class of model against another and wasn't beating its performance. I beat the performance of my peers, but different models do different things, and image quality is only one metric. You wouldn't compare PixelCNN to StyleGAN.


> Maybe we just need to let researchers evaluate works based on their merits and not concern ourselves with things like popularity, prestige, and armchair experts' opinions.

Ok, but how would the researchers communicate their evaluation to non-experts? (Or other experts who didn't have the time to validate the paper)

Isn't that exactly what a review is?

My impression is the armchair experts are more likely to be found on HN.


> Ok, but how would the researchers communicate their evaluation to non-experts?

Conferences, journals, and papers are not for non-experts. They are explicitly for experts to communicate with experts. The truth is that papers have never been validated and likely never will be. Code often isn't uploaded alongside papers, and when it is, I know only a handful of people who look at it (including myself) and only one who executes it (and not often). Validation only happens through reproduction (i.e., grad students learning), and funding doesn't encourage that. Despite open-source code, lots of ML is still difficult to reproduce, if it can be reproduced at all.

We also use normal forms of communication like Twitter, HN, Reddit, email, etc., but there's a lot of noise (as you note). We speak a different language, though, so you can often tell who the experts are.

Frankly, a lot of us are not concerned with explaining our work to laymen. It's a lot of work, especially the more complex a subject is, and we're already under high pressure to keep researching. It's never good enough. There's no clear "done with work" time in jobs like this. You're always working, so you have to allocate your energy (I'm venting and mentally fatigued right now). I used to be passionate about teaching laymen, but I'm tired of arguing with armchair experts. I'm still happy and passionate about teaching my students and doing research, so that's where I'll spend most of my energy: in the classroom or on blogs. Ironically, the more popular a subject is, the more likely this is to happen.

Communication should come from news outlets, university departments, and specialty science communicators, but that pipeline has broken down. Honestly, I just think it's a tough time for laymen to get accurate information. There's a lot of good information out there for you (we researchers learn from publicly available materials), but expertise is being able to distinguish signal from noise, and the greater the popularity, the greater the noise. This isn't just true for ML; we see it in climate, nuclear, COVID, gender/sexuality, and other hot topics. The only thing you can do is borrow a common strategy from researchers: maintain high doubt and look for patterns across research groups.


Personally, I relish many of the third-string papers that people post on arXiv about run-of-the-mill text analysis projects, because they give me more insight into the results I'll get and the challenges I'll face when I do my own.

If you go to a computer science conference, you might talk about the headliners later, but you actually learn a lot from talking to less famous people at the back of the room, scanning large numbers of poster papers, sharing a bottle of wine at dinner with four people and having one of them get way too drunk and talk trash about academics you distantly know, etc.

Lower-quality papers on arXiv give me a bit of that feel.


>This is a rather self-aggrandizing view, and I think it speaks to the level of ego that underpins a lot of the discussion on here.

I'm not so sure about that. I've read a lot of things that should never have left the peer review or editing stages, while some of the most important papers in my field never left the preprint stage.

Overall, I think the most important step of peer review is you, the reader in the field. Peer review should catch the worst offenders, saving us all some time, but it should never be viewed as a seal of approval. Everything you read should be critically evaluated as if it were a preprint anyway.


I realize some people have taken my comment to be speaking on the efficacy of the peer review process, but that was not my intent. I have no experience reading or reviewing papers, or with the journal publication process. My point was more that HN is a public forum in which anyone can participate, so elevating it above (what I hope are) subject-matter experts seemed rather arrogant. To be fair, the OP has since expanded with a more complete comment, and it expresses a sentiment similar to what you and a couple of others have shared.


> is a public forum in which anyone can participate

I don't think "participate" and "leave a comment" are the same thing. A random person most likely wouldn't be able to follow or contribute to the conversation. They could only leave a comment.

It's a bit pedantic, but noise usually sinks to the bottom.


Having been on a paper review board, I can say the selection process is essentially credentialism for credentialism's sake. Anyone who's done a paper or two is deemed qualified, and since it's unpaid, uncredited bonus work on top of your day job, the slots aren't competed for very hard.

I would say the primary difference between a conference peer review board and HN is that the author is obliged to respond to the reviewers on the board. I would not say there’s any particular difference in qualifications.


> Anyone who’s done a paper or two

That already narrows it down greatly compared to the general public you find on the internet.


but not necessarily for the better. :)


Do you think it's factually incorrect that the HN comment section is more likely to find problems that invalidate a paper's conclusions than the journal-driven peer review process is?


Yes?


On reflection, I probably agree that the answer is "yes" to the question as I phrased it. I think that if you take a random paper, the peer reviewers probably do have much more useful feedback than HN would.

However, if you limit the question to "papers which make bold conclusions of the type that generates lots of discussion on HN", I think HN will be more likely to find methodological flaws in those papers than the peer review process would. I think that's mostly because papers are optimized pretty hard to avoid the problems that would get them rejected by the peer review process, but not optimized very hard to avoid other problems.

Which means, on average, I expect the HN comment section to have more interesting feedback about a paper, given that it's the sort of paper that gets lots of HN discussion, and also given that the author put a lot of effort into anticipating and avoiding the concerns that would come up in the peer review process.

Which, to a reader of HN, looks like "a lot of peer-reviewed papers have obvious flaws that are pointed out by the HN commentariat".

I do think, on the object level, a preprint which the author intends to publish in a reputable journal will be improved more by fixing the problems pointed out by HN commenters than by fixing the problems pointed out by peer reviewers, and as such I think "post the preprint on HN and collect feedback before peer review" is still a good step if the goal is to publish the best paper possible.


This is a considerably more thoughtful comment, and I appreciate your reflection. I can also see how my initial response was a little broad and over-generalizing. I do think there is an interesting conversation in there about whether a group of technically minded people outside the "in group" of the peer-reviewer circle (of whatever paper is in question) could offer different and potentially important feedback.

Although I should add I have no background in academia and don't feel prepared to have that discussion.


> "post the pre-print on HN and collect feedback before peer review" is still a good step

It'll cause some journals to not publish your work.


I think it depends on what journal we are talking about. Most of them have some biases in their processes, just as HN commenters do.


It would be more charitable and accurate to read it as a statement about the sad state of review at many journals. Plenty are rubber stamps where the most you might expect from reviewers is an insistence that you add citations to their own barely relevant papers.


There's no need to attack the entire HN community over one person's opinion. Preprints and discussions here both have value, and different forms of review suit different needs.


This was not an attack against the community or the paper in question. I am only speaking from my experience as (primarily) a lurker.


My apologies, I misinterpreted your comment. You make a fair point that HN discussions are not equivalent to formal peer review.


That's redefining what "peer review" is. And I'm sorry, but I'll take credentialism over some board of anonymous internet people.

I mean, hypothetically, this whole thread could be stuffed with sock puppet accounts of the author. How would you know?


You can check the commenter's post history?
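
For what it's worth, this is easy to script against HN's public Firebase API (documented at https://github.com/HackerNews/API; read-only, no key needed). A minimal sketch in Python; the username below is just a placeholder:

    import json
    import urllib.request

    API = "https://hacker-news.firebaseio.com/v0"

    def fetch(path):
        # Every endpoint of the HN API returns plain JSON.
        with urllib.request.urlopen(f"{API}/{path}.json") as resp:
            return json.load(resp)

    # A user object includes "karma" and "submitted",
    # the ids of the user's stories and comments.
    user = fetch("user/someuser")  # placeholder username
    print(f"karma: {user['karma']}, items: {len(user.get('submitted', []))}")

    # Skim the first few items of the post history.
    for item_id in user.get("submitted", [])[:10]:
        item = fetch(f"item/{item_id}")
        snippet = (item.get("text") or item.get("title") or "")[:80]
        print(item.get("type"), "-", snippet)

It won't tell you whether an account is a sock puppet, but at least the history is all public.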



