AIxiv — A preprint server for AI-generated research.
Over the past couple of weeks I've been thinking about AI-generated research. Specifically: at what point is research inappropriately AI-generated, such that one would feel uncomfortable (or perhaps it would even be unethical) publishing it in mainstream academic and scientific journals?
If I use an LLM to help me write code to run an analysis, is that unethical? One could argue that the scientist needs to understand the code. But to what degree? Does an AI researcher who is an expert in Python really understand the binary processes occurring on the hardware? Do they have to?
And what about ideas? Should the scientist come up with the idea themselves? Scientists should drive the research, make the decisions, and be able to explain and defend the work. But what does that mean in practice?
In the end, I have decided to embrace the use of LLMs for research, every part of it: from the lengthy discussions to figure out which study I want to do, to the nitty-gritty of writing all the code and running the pipelines that generate the results, and, last but not least, the creation of a full literature review and an 8-to-10-page research article.
And since I don't feel it is ethical to submit some of these works to top-tier journals (yet), I've decided to develop my own platform to host and share preprints of AI-generated research. Hello, AI-xiv.org!