home.social

#researchevaluation — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #researchevaluation, aggregated by home.social.

  1. A recent Journal of Informetrics study shows there is no universal threshold for “too many authors.”

    In some fields, 3–6 authors may already be unusual.
    In medicine, dozens are common.
    In physics, large teams are often the norm.

    :doi: doi.org/10.1016/j.joi.2026.101

    Yes, #hyperauthorship can signal problems (e.g., honorary authorship, metric inflation). But the key question is not “how many authors?” 👉 it is: Is this abnormal for this field and time? (A toy sketch of such a baseline follows below.)

    #Scientometrics #ResearchEvaluation #Bibliometrics
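
    A minimal sketch of the baseline the post calls for, in Python. Everything here is hypothetical (the corpus, the 95th-percentile cutoff, the function name): the point is only that an author count is judged within its own field and year, not against a universal number.

    ```python
    from statistics import quantiles

    # Hypothetical corpus: (field, year, n_authors) for each paper.
    corpus = [
        ("history", 2024, 1), ("history", 2024, 2), ("history", 2024, 1),
        ("medicine", 2024, 14), ("medicine", 2024, 30), ("medicine", 2024, 9),
        ("physics", 2024, 900), ("physics", 2024, 1200), ("physics", 2024, 40),
    ]

    def is_abnormal(field, year, n_authors, pct=95):
        """Flag an author count only if it is extreme within its own field-year."""
        peers = [n for f, y, n in corpus if f == field and y == year]
        cutoff = quantiles(peers, n=100)[pct - 1]  # field-year percentile threshold
        return n_authors > cutoff

    print(is_abnormal("history", 2024, 12))  # True: 12 authors is unusual in history
    print(is_abnormal("physics", 2024, 12))  # False: ordinary for physics
    ```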

  2. We evaluate science mostly through papers. But researchers report that up to 75% of project effort is data work — collecting, cleaning, documenting, and preparing datasets. A reminder that research outputs ≠ research work.

    New paper in Research Evaluation: doi.org/10.1093/reseval/rvag008

    #ResponsibleMetrics #OpenScience #DataCitation #ResearchEvaluation

  3. Most research evaluation still rewards papers, not the work that makes them possible. Yet researchers say up to 75% of a project can be data work: collecting, cleaning, curating, documenting.

    :doi: doi.org/10.1093/reseval/rvag008

    Maybe it's time to stop pretending that publications alone represent research.

    #OpenScience #ResearchEvaluation #DataCitation #ResponsibleMetrics #Scientometrics

  4. New paper in Research Evaluation explores how researchers actually cite data. Key insight: data citations are far more complex than simple indicators of data reuse.

    :oa: doi.org/10.1093/reseval/rvag008

    They reflect scientific practice, community norms, attribution, and even reputation-building. A timely reminder: metrics alone cannot capture the real value of data work.

    #OpenScience #DataCitation #ResearchEvaluation #ResponsibleMetrics #Scientometrics

  5. "Many of the loudest Open Science advocates are deeply embedded in the very systems they critique such as traditional publishing, prestige-driven academia and grant-dependent research cultures. They speak the language of reform while continuing to “play the game” remarkably well. Researchers who sit on advisory boards talk about preprints but then celebrate publishing their latest Nature paper"

    themodernpeer.com/people-the-p

    #OpenScience #OpenData #ScienceReform #Metascience #ResearchEvaluation #UniversityRankings #PublishOrPerish

  6. A new article by Chloe Patton shows how debates about #OpenScience often slip into absurdity – like demanding #replication from the #Humanities. You can’t replicate history, culture, or interpretation the way you replicate a physics experiment. It’s a different kind of knowledge.

    :doi: doi.org/10.1093/reseval/rvaf052

    Forcing STEM-style standards onto the humanities doesn’t improve #science – it just adds bureaucracy and limits academic freedom.

    #Reproducibility #ResearchEvaluation #Replicability

  7. Today at my alma mater, I spoke about how research evaluation is quietly shifting from citations to ChatGPT-style predictions.

    👉 doi.org/10.13140/RG.2.2.30585.

    AI can already “detect quality” from text alone, and sometimes performs better than classic metrics. But it doesn’t evaluate science: it rewards what sounds like good science. We may be heading from “publish or perish” to the new absurdity: “write ChatGPT-friendly or perish.”

    #AI #ChatGPT #ResearchEvaluation #Scientometrics #LLM #OpenScience

  8. The recent debate in JoI highlights a key issue often ignored in research evaluation: the impact of document types on citation indicators.

    :doi: doi.org/10.1016/j.joi.2025.101

    When all publication types are counted, normalized metrics become inconsistent and misleading. But once we restrict the analysis to articles and reviews, correlations rise sharply, and results become robust and reproducible. (A toy example follows below.)

    #ResearchEvaluation #Bibliometrics #SciencePolicy #Ukraine #Metrics
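
    A toy illustration of that effect, with made-up numbers: a mean-normalized indicator divides a paper's citations by the average of its comparison set, so leaving lowly cited editorials and letters in the baseline drags the expected value down and distorts every score.

    ```python
    # Hypothetical records for one field and year: (doc_type, citations).
    papers = [
        ("article", 12), ("article", 8), ("review", 30),
        ("editorial", 0), ("letter", 1), ("article", 10),
    ]

    def normalized_score(citations, baseline):
        """Citations divided by the mean citations of the chosen baseline set."""
        expected = sum(c for _, c in baseline) / len(baseline)
        return citations / expected

    articles_reviews = [p for p in papers if p[0] in ("article", "review")]

    # The same 12-citation article scores very differently under the two baselines:
    print(round(normalized_score(12, papers), 2))            # ~1.18: baseline deflated by editorials/letters
    print(round(normalized_score(12, articles_reviews), 2))  # 0.8: baseline restricted to articles and reviews
    ```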

  9. I currently have about a dozen papers under review. Now imagine: a drone hits my window — and who will keep emailing editors and reviewers then? 😅

    Stewart Manley published his brilliant idea in #ResearchEvaluation: the “exclusive option”. Authors could submit to multiple journals at once, and interested editors would request an exclusive right to review. (A toy model follows below.)

    :doi: doi.org/10.1093/reseval/rvaf02

    No duplicated #peerreview. No endless delays. This could shake up academic publishing!

    #AuthorRights #OpenScience
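
    A toy model of how the option could work (my own sketch, not Manley's specification): the manuscript is offered to several journals in parallel, and the first editor to exercise the exclusive option locks out the others, so only one peer review ever runs.

    ```python
    class Manuscript:
        """Parallel submission with a single-winner review lock."""

        def __init__(self, title, journals):
            self.title = title
            self.offered_to = set(journals)  # journals that may claim exclusivity
            self.exclusive_holder = None

        def request_exclusive(self, journal):
            """The first interested editor wins the sole right to run peer review."""
            if self.exclusive_holder is not None or journal not in self.offered_to:
                return False  # already locked, or journal never received the offer
            self.exclusive_holder = journal
            return True

    ms = Manuscript("My paper", ["Journal A", "Journal B", "Journal C"])
    print(ms.request_exclusive("Journal B"))  # True: B now reviews exclusively
    print(ms.request_exclusive("Journal A"))  # False: the option is already exercised
    ```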

  10. Honored to receive an Award of Appreciation from the Ministry of Education and Science of Ukraine for my contribution to the evaluation of research projects. Proud to stand with Ukrainian science.
    #UkraineScience #ResearchEvaluation #ScienceForUkraine #OpenScience #PeerReview #DistributedPeerReview

  11. Most people still don't know: publishing in real scientific journals is usually free.
    A new study shows how pseudo-journals in 🇮🇳 exploit researchers, offering "cheap" publications for just $25.

    :doi: doi.org/10.1093/reseval/rvaf01

    The authors of the #ResearchEvaluation study advise: carefully check open access models and journal fee policies. Education and critical thinking remain our main tools against academic fraud.

    #OpenAccess #OpenScience #AcademicIntegrity #PredatoryJournals #OpenJournals

  12. A brilliant new paper in #ResearchEvaluation 📄 compares how the UK, Norway, and Poland implemented research impact assessment - with very different results:

    🇬🇧 UK built infrastructure, #REF became a strategic tool.
    🇳🇴 Norway took a soft, formative approach.
    🇵🇱 Poland copy-pasted, spent big - got confusion, “point-chasing” and no culture shift.

    :doi: doi.org/10.1093/reseval/rvaf01

    🇺🇦 A lesson Ukraine risks ignoring again.

    #ImpactAssessment #SciencePolicy #ResearchImpact #AcademicCulture

  13. 🇧🇷 vs 🇳🇱 in #ResearchEvaluation? A sharp comparative study shows how Brazil’s high-stakes, performance-based model contrasts with the Netherlands’ strategic, decentralized approach.

    :doi: doi.org/10.1093/reseval/rvaf01

    Takeaway: Evaluation isn’t one-size-fits-all - context matters.

    #ResearchAssessment #SciencePolicy #HigherEducation #ResponsibleMetrics #AcademicEvaluation

  14. An entertaining, informative, and overall really well-done video about the h-index and why you shouldn't use it, by @stefhaustein, @carey_mlchen et al.

    I really will be sharing this video a lot: "What is the h-index and what are its limitations? Or: Stop using the h-index" (The definition is sketched below.)

    youtube.com/watch?v=HSf79S3XkJw

    #hIndex #bibliometrics #researchEvaluation #researchAssessment #publishOrPerish

    @academicchatter
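
    For anyone who wants the definition the video starts from: the h-index is the largest h such that h of a researcher's papers have at least h citations each. A minimal computation, with invented citation counts:

    ```python
    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        h = 0
        for rank, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= rank:
                h = rank  # the rank-th most cited paper still has >= rank citations
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4
    print(h_index([25, 8, 5, 3, 3]))  # 3: the heavily cited first paper adds nothing,
                                      # one of the limitations the video walks through
    ```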

  15. Now @MsPhelps on #OpenResearchInformation for #ResearchEvaluation, with a special part on the @BarcelonaDORI

    Bianca starts with the big question: "What kind of institutions do universities want to be?"

    #BarcelonaDeclaration #OSICU24

  16. #OSICU24 has started! We just listened to a talk by Danica Zendulkova from the Slovak Centre of Scientific and Technical Information on "Measuring and evaluation of the scientific disciplines #impact based on #CRIS system data"

    I think you can still register at conference2024.dntb.gov.ua/en/ (ctrl + f "registration").

    We will have a lot of presentations on #openresearchinformation, #openscience and #ResearchEvaluation in the afternoon, by @pmarrai @MsPhelps and others.

    #OSICU24

  17. Gaming the metrics: Emanuel Kulczycki says the history of Soviet central planning in Eastern Europe has led to a focus on counting articles & the word count of books. Keynote on the last day of PUBMET. Many scholars maintain the status quo by doing the minimum at the lowest possible cost. Researchers are under-resourced, so they find 'legal' loopholes to meet institutional requirements.
    🧵
    #ResearchAssessment #Metrics #ResearchEvaluation #ResearchReform #ResearchCulture #EasternEurope #PUBMET2024

  18. Moin Hamburg! We are happy to present our work at #bibliocon24 again this year. At the poster session we are presenting the results of Clara Schindler, who examined in her master's thesis how the publications of @mfnberlin support the #UNNachhaltigkeitszieleSDGs (UN Sustainable Development Goals).
    Wed and Thu, 12:45-13:45, Hall H
    P-34

    bibliocon2024.abstractserver.c

    #researchevaluation #bibliometrics #scientometric #bibliometrie #scientometrie

  19. AISA (@associazione-italiana-per-la-promozione-della-scienza-aperta@aisa.sp.unipi.it):

    An English version of the executive summary of the conference Avanti piano, quasi indietro: la riforma europea della valutazione della ricerca in Italia (Slowly forward, almost backward: the European reform of research evaluation in Italy) is now available here.
    🖨

    https://aisa.sp.unipi.it/slowly-forward-almost-backward-reforming-european-research-evaluation-in-italy-executive-summary/

  20. The SNSF is looking for external partners to analyse its evaluation process and the CV format. We look forward to receiving your application by April 2nd.
    sohub.io/d2f9
    #researchonresearch #researchevaluation #narrativeCV

  21. The SNSF is looking for external partners for the analysis of its evaluation procedure and of the CV format. We look forward to receiving your application by April 2nd.
    sohub.io/ucug
    #researchonresearch #researchevaluation #narrativeCV

  22. The SNSF is looking for external partners for the analysis of its evaluation procedure and the CV format. We look forward to receiving your application by April 2nd.
    sohub.io/020y
    #researchonresearch #researchevaluation #narrativeCV

  23. AISA (@associazione-italiana-per-la-promozione-della-scienza-aperta@aisa.sp.unipi.it):

    This summer the Italian National Agency for the Evaluation of University and Research Systems (ANVUR) denied Open Research Europe (ORE) recognition as a scientific and excellent (“classe A”) journal for general sociology. Recently, however, ANVUR has updated its rules for classifying journals, adding an article 18 entitled ‘Transitional regulation for open peer […]

    https://aisa.sp.unipi.it/anvurs-rule-state-evaluation-and-open-peer-review-in-italy/

  24. AISA (@associazione-italiana-per-la-promozione-della-scienza-aperta@aisa.sp.unipi.it):

    1. An unpromising starting point
    In a 2018 article, Alberto Baccini and Giuseppe De Nicolao described the Italian academic system as “an unprecedented in vivo experiment in governing and controlling research and teaching via automatic bibliometric tools”. Italian universities and research institutions are subject to widespread bibliometric […]

    https://aisa.sp.unipi.it/taking-all-the-running-one-can-do-to-keep-in-the-same-place-anvurs-complicated-relationship-with-the-coara-agreement/

  25. CW: Academic paper tracking #365papers

    203 / Making #Qualitative Data Reusable: A Short Guidebook for Researchers and #DataStewards Working with Qualitative Data doi.org/10.5281/zenodo.7777519

    Very nice guide with great examples!

    204 / NOR-CAM - A toolbox for recognition and rewards in academic careers

    205 / Follow the Leader: Technical and Inspirational #Leadership in Open Source Software doi.org/10.1145/3491102.351751

    #ResearchEvaluation #RDM #OpenSoftware

  26. An interesting article in #ResearchEvaluation based on a series of interviews:

    📄 doi.org/10.1093/reseval/rvac00

    "We have been looking at InCites too, and the main driver for this is we have just purchased [the] Pure". How is the #CRIS connected to analytical tool (from different vendors)?? 🤔

    Answers like this always demotivate me from conducting interviews in research, à la "Of course I know how the h-index is calculated, because I have an impact factor."

    #bibliometric #infrastructure #librarians