Discussion about this post

The Observer:

>The second school of thought is that AI tooling will indeed help better researchers publish more and take on a more managerial role in conducting science. More higher-quality research will proliferate and we’ll all be better off for it.

I was/am hoping that AI will actually raise the bar for publication so high that the best teams will publish less, but more impactfully. With Claude CoLab or whatever, it is no longer necessary to publish on most ideas/experiments, because the time and effort needed to evaluate and discard them goes essentially to zero. One could imagine an AI in the review loop that does exactly this and rejects work estimated to take less than, say, six weeks of AI time?

Do we lose the great ideas that were low-hanging or inexplicably ignored? Maybe, but surely Claude CoLab could also evaluate papers for novelty much more efficiently than any human.

Of course, we could also retreat even further into scientific bubbles, where interaction with the outside slop-sea of work is heavily filtered by multiple LLMs and precedence is given to the work of people you've met and trust?

persona non-sequitur:

An idea I've had (and seen elsewhere) is that journals should limit the number of papers one can publish in a given time frame. There may be ways to make this less of a hard limit, like having tiers: in the top tier you can publish, say, only once a year, reserved for the papers you consider your best work; in the second tier you can publish, say, three times a year, for work you think is good but more run-of-the-mill. Then anyone wanting to see only the best work can filter their search by tier.

