I can't help but marvel at the gigantic irony that the leading ML conference cannot solve a relatively simple reviewer matching problem.


The NeurIPS AC paper matching was so devastatingly bad this year that it puts the conference at risk. We’d be better off pumping the brakes and redoing the assignments, even if it cost two weeks, versus going ahead like this. I don’t know a single AC who is less than crestfallen by their pile.

