#researchevaluation — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #researchevaluation, aggregated by home.social.
-
Studying ‘predatory publishing’ in the context of research evaluation: conceptual and methodological challenges
#DimityStephen #EmanuelKulczycki #FedericoVasen #JakubKrzeski #MarilenaDrymioti #MartinReinhart #MetaCramer #MoumitaKoley #PredatoryPublishing #ResearchEvaluation #RitaFari #ScientificCommunication
-
A recent Journal of Informetrics study shows there is no universal number of “too many” authors.
In some fields, 3–6 may already be unusual.
In medicine, dozens are common.
In physics, large teams are often the norm.
:doi: https://doi.org/10.1016/j.joi.2026.101803
Yes, #hyperauthorship can signal problems (e.g., honorary authorship, metric inflation). But the key question is not “how many authors?” 👉 it is: Is this abnormal for this field and time?
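The “abnormal for this field and time?” test can be sketched as a simple percentile check against a field baseline. A minimal illustration, assuming you have author counts for comparable papers; the function name and the sample counts below are invented, not from the paper:

```python
def author_count_percentile(n_authors, baseline_counts):
    """Percentile of a paper's author count within a baseline
    distribution of author counts for the same field and period."""
    if not baseline_counts:
        raise ValueError("baseline must be non-empty")
    below_or_equal = sum(c <= n_authors for c in baseline_counts)
    return 100.0 * below_or_equal / len(baseline_counts)

# Illustrative baselines: author counts of sampled papers per field.
philosophy = [1, 1, 1, 2, 1, 2, 1, 1, 3, 1]
particle_physics = [120, 300, 80, 450, 200, 150, 90, 500, 260, 310]

# A 6-author paper is extreme in philosophy, unremarkable in physics.
print(author_count_percentile(6, philosophy))        # 100.0
print(author_count_percentile(6, particle_physics))  # 0.0
```

The point of the sketch: the same raw number lands at opposite ends of two field distributions, so any fixed author-count threshold is meaningless without the baseline.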
-
We evaluate science mostly through papers. But researchers report that up to 75% of project effort is data work — collecting, cleaning, documenting, and preparing datasets. A reminder that research outputs ≠ research work.
New paper in Research Evaluation: https://doi.org/10.1093/reseval/rvag008
#ResponsibleMetrics #OpenScience #DataCitation #ResearchEvaluation
-
Most research evaluation still rewards papers, not the work that makes them possible. Yet researchers say up to 75% of a project can be data work: collecting, cleaning, curating, documenting.
:doi: https://doi.org/10.1093/reseval/rvag008
Maybe it's time to stop pretending that publications alone represent research.
#OpenScience #ResearchEvaluation #DataCitation #ResponsibleMetrics #Scientometrics
-
New paper in Research Evaluation explores how researchers actually cite data. Key insight: data citations are far more complex than simple indicators of data reuse.
:oa: https://doi.org/10.1093/reseval/rvag008
They reflect scientific practice, community norms, attribution, and even reputation-building. A timely reminder: metrics alone cannot capture the real value of data work.
#OpenScience #DataCitation #ResearchEvaluation #ResponsibleMetrics #Scientometrics
-
Back to the roots: reimagining scientific evaluation of research without peer review
#MalikSallam #PeerReview #PublishingStandards #ResearchEvaluation #ResearchIntegrity #ScientificCommunication
-
"Many of the loudest Open Science advocates are deeply embedded in the very systems they critique such as traditional publishing, prestige-driven academia and grant-dependent research cultures. They speak the language of reform while continuing to “play the game” remarkably well. Researchers who sit on advisory boards talk about preprints but then celebrate publishing their latest Nature paper"
https://www.themodernpeer.com/people-the-problem-in-open-science/
#OpenScience #OpenData #ScienceReform #Metascience #ResearchEvaluation #UniversityRankings #PublishOrPerish
-
A new article by Chloe Patton shows how debates about #OpenScience often slip into absurdity – like demanding #replication from the #Humanities. You can’t replicate history, culture, or interpretation the way you replicate a physics experiment. It’s a different kind of knowledge.
:doi: https://doi.org/10.1093/reseval/rvaf052
Forcing STEM-style standards onto the humanities doesn’t improve #science – it just adds bureaucracy and limits academic freedom.
-
Today at my alma mater, I spoke about how research evaluation is quietly shifting from citations to ChatGPT-style predictions.
👉 https://doi.org/10.13140/RG.2.2.30585.12642
AI can already “detect quality” from text alone, and sometimes performs better than classic metrics. But it doesn’t evaluate science: it rewards what sounds like good science. We may be heading from “publish or perish” to the new absurdity: “write ChatGPT-friendly or perish.”
#AI #ChatGPT #ResearchEvaluation #Scientometrics #LLM #OpenScience
-
The recent debate in JoI highlights a key issue often ignored in research evaluation - the impact of document types on citation indicators:
:doi: https://doi.org/10.1016/j.joi.2025.101738
When all publication types are counted, normalized metrics become inconsistent and misleading. But once we restrict the analysis to articles and reviews, correlations rise sharply, and results become robust and reproducible.
#ResearchEvaluation #Bibliometrics #SciencePolicy #Ukraine #Metrics
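The document-type effect can be illustrated with a toy mean-normalized citation score: the same publication list scores differently depending on whether non-article types are counted. The records and field baselines below are invented for illustration; this is a sketch of the general normalization idea, not the exact indicator from the JoI debate:

```python
def mean_normalized_citation_score(pubs, field_means, allowed_types=None):
    """Average of (citations / expected citations for the field),
    optionally restricted to certain document types."""
    ratios = [
        p["cites"] / field_means[p["field"]]
        for p in pubs
        if allowed_types is None or p["type"] in allowed_types
    ]
    return sum(ratios) / len(ratios)

# Invented records: two well-cited papers and one uncited editorial.
pubs = [
    {"type": "article",   "field": "bio", "cites": 20},
    {"type": "review",    "field": "bio", "cites": 40},
    {"type": "editorial", "field": "bio", "cites": 0},
]
field_means = {"bio": 20.0}

all_types = mean_normalized_citation_score(pubs, field_means)
restricted = mean_normalized_citation_score(pubs, field_means,
                                            {"article", "review"})
print(all_types)   # 1.0 -> looks exactly "world average"
print(restricted)  # 1.5 -> clearly above average
```

One rarely cited editorial is enough to drag the unrestricted score down by a third, which is why restricting to articles and reviews makes the indicator more stable.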
-
I currently have about a dozen papers under review. Now imagine: a drone hits my window — and who will keep emailing editors and reviewers then? 😅
Stewart Manley published his brilliant idea in #ResearchEvaluation: the “exclusive option”. Authors could submit to multiple journals at once, and interested editors would request an exclusive right to review.
:doi: https://doi.org/10.1093/reseval/rvaf027
No duplicated #peerreview. No endless delays. This could shake up academic publishing!
-
Honored to receive an Award of Appreciation from the Ministry of Education and Science of Ukraine for my contribution to the evaluation of research projects. Proud to stand with Ukrainian science.
#UkraineScience #ResearchEvaluation #ScienceForUkraine #OpenScience #PeerReview #DistributedPeerReview
-
Most people still don't know: publishing in real scientific journals is usually free.
A new study shows how pseudo-journals in 🇮🇳 exploit researchers, offering "cheap" publications for just $25.
:doi: https://doi.org/10.1093/reseval/rvaf017
The authors of the #ResearchEvaluation study advise: carefully check open access models and journal fee policies. Education and critical thinking remain our main tools against academic fraud.
#OpenAccess #OpenScience #AcademicIntegrity #PredatoryJournals #OpenJournals
-
A brilliant new paper in #ResearchEvaluation 📄 compares how the UK, Norway, and Poland implemented research impact assessment - with very different results:
🇬🇧 UK built infrastructure, #REF became a strategic tool.
🇳🇴 Norway took a soft, formative approach.
🇵🇱 Poland copy-pasted, spent big - got confusion, “point-chasing” and no culture shift.
:doi: https://doi.org/10.1093/reseval/rvaf010
🇺🇦 A lesson Ukraine risks ignoring again.
#ImpactAssessment #SciencePolicy #ResearchImpact #AcademicCulture
-
🇧🇷 vs 🇳🇱 in #ResearchEvaluation? A sharp comparative study shows how Brazil’s high-stakes, performance-based model contrasts with the Netherlands’ strategic, decentralized approach.
:doi: https://doi.org/10.1093/reseval/rvaf013
Takeaway: Evaluation isn’t one-size-fits-all - context matters.
#ResearchAssessment #SciencePolicy #HigherEducation #ResponsibleMetrics #AcademicEvaluation
-
An entertaining, informative and overall really well done video about the h-index and why you shouldn't use it, by @stefhaustein, @carey_mlchen et al.
I really will be sharing this video a lot: "What is the h-index and what are its limitations? Or: Stop using the h-index"
https://www.youtube.com/watch?v=HSf79S3XkJw
#hIndex #bibliometrics #researchEvaluation #researchAssessment #publishOrPerish
-
📢 Otmane Azeroual, Joachim Schöpfel and I on "Research assessment in transition" / the transformation of #ResearchAssessment in FRA & GER through #ResearchInformationSystems #ResearchEvaluation #FIS #CRIS #OpenScience #AI
https://pulse49.com/2024/12/14/revolutionizing-research-assessment-the-role-of-cris-open-science-and-ai/
-
Now @MsPhelps on #OpenResearchInformation for #ResearchEvaluation, with a special part for the @BarcelonaDORI
Bianca starts with the big question: "What kind of institutions do universities want to be?"
-
#OSICU24 has started! We just listened to a talk by Danica Zendulkova from the Slovak Centre of Scientific and Technical Information on "Measuring and evaluation of the scientific disciplines #impact based on #CRIS system data".
I think you can still register at https://conference2024.dntb.gov.ua/en/ (ctrl + f "registration").
We will have a lot of presentations on #openresearchinformation, #openscience and #ResearchEvaluation in the afternoon, by @pmarrai @MsPhelps and others.
-
Gaming the metrics: Emanuel Kulczycki says the history of Soviet central planning in Eastern Europe has led to a focus on counting articles and the word count of books. Keynote on the last day of PUBMET. Many scholars maintain the status quo by doing the minimum at the lowest possible cost. Researchers are under-resourced, so they find 'legal' loopholes to meet institutional requirements.
🧵
#ResearchAssessment #Metrics #ResearchEvaluation #ResearchReform #ResearchCulture #EasternEurope #PUBMET2024
-
Moin Hamburg! We are delighted to present our work at #bibliocon24 again this year. At the poster session we will present the results of Clara Schindler, whose MA thesis examined how the publications of @mfnberlin support the #UNNachhaltigkeitszieleSDGs.
Wed and Thu, 12:45–13:45, Hall H
P-34: https://bibliocon2024.abstractserver.com/program/#/details/presentations/626
#researchevaluation #bibliometrics #scientometric #bibliometrie #scientometrie
-
An English version of the executive summary of the conference Avanti piano, quasi indietro: la riforma europea della valutazione della ricerca in Italia is now available here.
-
The SNSF is looking for external partners to analyse its evaluation process and the CV format. We look forward to receiving your application by April 2nd.
https://sohub.io/d2f9
#researchonresearch #researchevaluation #narrativeCV
-
The SNSF is looking for external partners for the analysis of its evaluation procedure and of the CV format. We look forward to receiving your application by April 2nd.
https://sohub.io/ucug
#researchonresearch #researchevaluation #narrativeCV
-
The SNSF is looking for external partners for the analysis of its evaluation procedure and of the CV format. We look forward to receiving your application by April 2nd.
https://sohub.io/020y
#researchonresearch #researchevaluation #narrativeCV
-
This summer the Italian National Agency for the Evaluation of University and Research Systems (ANVUR) denied the "scientific" and excellence ("classe A") classifications to Open Research Europe (ORE) for general sociology. Recently, however, ANVUR has updated its rules for classifying journals, adding an article 18 entitled 'Transitional regulation for open peer […]
https://aisa.sp.unipi.it/anvurs-rule-state-evaluation-and-open-peer-review-in-italy/
-
1. An unpromising starting point
In a 2018 article, Alberto Baccini and Giuseppe De Nicolao described the Italian academic system as “an unprecedented in vivo experiment in governing and controlling research and teaching via automatic bibliometric tools”. Italian universities and research institutions are subject to widespread bibliometric […]
-
CW: Academic paper tracking #365papers
203 / Making #Qualitative Data Reusable: A Short Guidebook For Researchers And #DataStewards Working With Qualitative Data https://doi.org/10.5281/zenodo.7777519
Very nice guide with great examples!
204 / NOR-CAM - A toolbox for recognition and rewards in academic careers
205 / Follow the Leader: Technical and Inspirational #Leadership in Open Source Software https://doi.org/10.1145/3491102.3517516
-
An interesting article in #ResearchEvaluation based on a series of interviews:
📄 https://doi.org/10.1093/reseval/rvac009
"We have been looking at InCites too, and the main driver for this is we have just purchased [the] Pure". How is the #CRIS connected to an analytical tool (from a different vendor)? 🤔
Answers like this always demotivate me from conducting interviews in research. À la: "Of course I know how the h-index is calculated, because I have an impact factor."