How AI is Brutally Abused and May Corrupt Scientific Research
Recently I came across this article on LinkedIn. I don't usually visit that social network, but lately some people have contacted me there, so I went to the site a few times to check their messages. At one point, I came across an article that caught my attention. This time I decided to refrain from posting any criticism on it, since those AI people can get a bit touchy, especially on a platform where they can get away with being brutally impolite. Hence this article here.
So, recently these AI researchers came up with the next best thing in scientific research, or so they say. I'm no researcher (anymore), but I read my share of scientific articles during my Ph.D. and my post-doc, and I continue to do so from time to time. I can relate to the struggle of a researcher who has to go through dozens of articles on a topic before she can form her own position on the matter and start writing her own papers. After all, it's publish or perish in the world of academia, which resembles more and more some dystopian world like that of the "3%" TV show. Still, I can't help but wonder what kind of insight someone can get from a one-sentence summary of a paper. Remember, this is not some tabloid piece that someone probably wrote while in the bathroom during an episode of constipation. Scientific articles involve a lot of work and meticulous editing (often over several revision cycles), so if you have the privilege of having one on your computer, it means that many people slaved over it to make that possible. I believe it's common courtesy to read the damn thing yourself, or at least skim it, before you draw any conclusions about it.
Dr. Wolfram (the creator of Wolfram Alpha and Mathematica, as well as some novel cosmological model that's as groundbreaking as Einstein's famous works) spoke at length about the topic of irreducible complexity (IC). The latter is a characteristic of any system complex enough to require a fairly elaborate description if you are to capture the bulk of its essence. Most systems produced nowadays have a relatively high level of IC, which is why experts are needed to make sense of them. Of course, a journalist may oversimplify them so that the average Joe can get a glimpse of their value (hopefully), or at least an appreciation of the amount of work they entail. However, it's doubtful that the oversimplified version would capture the essence of these systems well enough to enable a decent understanding of them, particularly if you want to augment these systems. Maybe I'm wrong, but the oversimplified summary of a scientific paper that this AI offers attempts to do the journalist's job, except that instead of some eye-catching brief, it produces a single sentence, which directly violates the IC of the article. In other words, it dumbs the paper down so much that the summary resembles the original only loosely.
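As a loose, back-of-the-envelope illustration of this idea (my own sketch, not anything from the article or from Dr. Wolfram), compressibility can serve as a crude proxy for how much description a text genuinely needs: redundant text shrinks a lot without losing anything, while dense, varied text barely shrinks at all, so any drastic shortening of it necessarily throws information away.

```python
import zlib


def compression_ratio(text: str) -> float:
    """Compressed size divided by original size.

    A low ratio means the text is highly redundant and can be shortened
    almost for free; a ratio near 1 means there is little redundancy left
    to exploit, so a shorter version must discard actual content.
    """
    raw = text.encode("utf-8")
    return len(zlib.compress(raw)) / len(raw)


# Highly repetitive text: most of it is redundant.
repetitive = "the cat sat on the mat. " * 40

# Short, varied prose: nearly every word carries new information.
varied = (
    "Scientific articles involve meticulous editing over several revision "
    "cycles, dense notation, and domain-specific arguments that resist "
    "short paraphrase."
)

print(compression_ratio(repetitive))  # small: lots of redundancy to squeeze out
print(compression_ratio(varied))      # close to 1: little redundancy to exploit
```

This is only an analogy, of course: a one-sentence summary is not a compression algorithm, and zlib measures statistical redundancy rather than meaning. But it gives a feel for why a text with little redundancy cannot be shortened by orders of magnitude without losing most of what it says.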
Although there is value in any AI system that's properly developed (otherwise they wouldn't build it in the first place!), its ethics may be shaky at times. In this case, I (strongly) believe that it goes against the ethos of the researcher, who, as the word implies, is tasked with searching repeatedly for something before drawing any conclusions about it. In other words, this technology can be corrosive to scientific research and bring about a generation of shallow researchers, happy to judge a whole paper by the one-sentence summary a computer program provides them.
What are your thoughts on this? Is it as ominous as I view it, or do you find it benign? Cheers!