Research impact has become a highly valued commodity. And while its definition can be difficult to pin down, it seems fair to say that research which advances theoretical knowledge while improving professional practice or informing the policy process is on the mark.
Achieving such results depends largely on the ability to publish papers in a timely fashion, while they are still most relevant. Yet that timeliness is undermined by research overproduction, review backlogs and a lack of transparency in editorial decision-making. On top of that, researchers waste countless hours preparing submissions that never pass review, dealing with unnecessary pre-review back and forth over formatting and style, and sometimes enduring long waits for publication after acceptance. It is a wonder that any paper sees the light of day in time to make a real-world impact, at least in non-science subjects, which rarely use preprint servers.
Take my recent article on media framing of Syrian and Ukrainian refugees, which has now been in the review process for more than a year. The paper has been submitted to several journals, some of which took up to two months to explain that backlogs forced them to desk-reject "even good submissions", and which, in any case, offered no guidance on how otherwise excellent manuscripts might make it to peer review.
Even when my article was finally sent out for review, it took the editor two months to find a reviewer, despite my repeated prompting. In the meantime, a new war began in Gaza and the media's framing of the Ukraine conflict shifted, potentially undermining the paper's relevance.
This frustration over delays is compounded by concerns about who is reading and judging our manuscripts. Journals often tout the importance of editorial independence, but the flip side is a lack of transparency that makes it difficult to trust the system. Quality assurance in peer review assumes that editors are consistently impartial and that manuscripts are judged solely on their merits, but this is not always the case.
Consider the following: late last year, a manuscript of mine came back with reviewers' comments asking me to restructure and resubmit it because it "has the potential to cover new intellectual territory" on the NHS. I repeatedly asked for clarification on some of the comments but received no response for several weeks, until I was notified that I had only two weeks left to resubmit.
I tried to do so the day before the deadline, but the portal had already closed. So I created a new submission, with a cover letter explaining this context. The editor responded within minutes, telling me that the rejection decision was final and that he would neither send the reworked paper to reviewers nor count it as a new submission.
No further explanation was offered. Having held the manuscript back from other journals during those now-wasted months, I surely deserved more than an apology. When basic professional and contractual standards between editors and authors are not observed, it is easy to imagine professional favouritism or personal connections becoming deciding factors.
And editorial delays and opacity are not the only barriers to research impact. Another big issue is replicability, without which publications have no epistemological authority or real-world relevance. As Karl Popper argued in his 1959 book The Logic of Scientific Discovery, "non-reproducible single occurrences are of no significance to science". Yet while impact is prized rhetorically, researchers are given little incentive to prioritise replicability over the kind of novelty that gets published in top journals, attracts high citation counts and secures external funding.
The same is true of the UK's Research Excellence Framework (REF), through which billions of pounds of research funding are allocated. Although the REF assigns a 25 per cent weighting to impact, scores are still driven primarily by whether outputs are judged "internationally excellent" (3*) or "world-leading" (4*) in terms of originality and rigour. And the idiosyncratic theoretical and/or methodological novelty that attracts such ratings is often irreproducible.
Moreover, in many social science fields impact begins locally, yet locally focused papers generally score lower. The same goes for papers reporting negative results, which are published so rarely that the evidence base on the social problem in question becomes distorted.
I propose that much new research in the health and social sciences be paused until timely dissemination and reproducibility are addressed. Instead, priority should be given to systematic reviews and meta-analyses that identify what has already been studied, its implications for communities, policy and practice, and the recurring knowledge gaps. This would allow us to understand what research is needed to address those gaps and to collaborate with practitioners and policymakers in investigating real-world problems.
Admittedly, such reviews are not without flaws, but they offer a starting point for understanding the scale of knowledge gaps across disciplines and the extent of irreproducible research. In the meantime, journals could operate rather like temporary special issues, setting parameters that papers must meet, such as building on systematic reviews and meta-analyses, so as to foster productivity and collaboration rather than unhealthy competition.
Reviewers could even be given financial incentives to help restart the peer-review process by triaging and clearing the backlog of unnecessary research. My hope is that this would usher in an era of real impact, built on timely dissemination, reproducible research and editorial oversight that ensures acceptance decisions are meritocratic and rubric-based.
Mellon Wondemagen is a Senior Lecturer in Criminology at the University of Hull.