#llmchatbots — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #llmchatbots, aggregated by home.social.
-
One task that staff & students outsource to #llmchatbots is transforming #BibTex entries from publisher websites or library catalogues into particular academic citation styles. To save resources, we can look into alternatives such as Chris Proctor's pybtex-apa7-style #Python plugin for #APA7. I have adapted his plugin in a script with widgets for reuse in Colab:
https://github.com/MonikaBarget/DigitalHistory/blob/master/BibTex_conversion.ipynb
Maybe this is of help to others. MLA & other citation styles have no pybtex plugins yet, but this may still change. 🤞🏼
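For anyone curious what this kind of conversion involves, here is a minimal stdlib-only sketch of the idea (this is *not* the pybtex plugin's actual output, and the naive regex parsing and APA7-like formatting are illustrative assumptions only — real APA7 also abbreviates given names, which this skips):

```python
import re

def parse_bibtex_entry(entry: str) -> dict:
    """Very naive BibTeX field extraction (illustration only)."""
    fields = dict(re.findall(r'(\w+)\s*=\s*[{"]([^}"]*)[}"]', entry))
    fields["type"] = re.match(r'@(\w+)', entry).group(1).lower()
    return fields

def apa7_like(fields: dict) -> str:
    """Format roughly as: Authors (Year). Title. Journal."""
    authors = " & ".join(a.strip() for a in fields.get("author", "").split(" and "))
    parts = [f"{authors} ({fields.get('year', 'n.d.')}). {fields.get('title', '')}."]
    if "journal" in fields:
        parts.append(f"{fields['journal']}.")
    return " ".join(parts)

entry = """@article{smith2020,
  author = {Smith, Jane and Doe, John},
  title = {An Example Title},
  journal = {Journal of Examples},
  year = {2020}
}"""

print(apa7_like(parse_bibtex_entry(entry)))
# → Smith, Jane & Doe, John (2020). An Example Title. Journal of Examples.
```

The linked notebook does this properly via pybtex's plugin system rather than regexes, which is why a dedicated style plugin per citation format is needed.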
-
When ChatGPT summarises, it actually does nothing of the kind. – R&A IT Strategy & Architecture
📌 Summary: LLM Chatbots, like ChatGPT, are not effective at generating reliable summaries of texts. Instead, they tend to simply shorten the text, often omitting important information or misrepresenting key points. This is due to a fundamental difference between the two processes: summarising requires a deep understanding of the text, while shortening does not. The use of LLM Chatbots in business and professional settings requires careful consideration, as they may not be able to provide the level of reliability and accuracy needed for critical tasks.
🎯 Key Points:
- LLM Chatbots tend to shorten texts rather than summarise them.
- The process of summarising requires a deep understanding of the text, while shortening does not.
- LLM Chatbots are influenced by two key inputs: the parameters (based on training material) and the context (the prompts and answers up until the last generated or user-typed text).
- The parameters often dominate the summary, particularly for widespread topics, while the context dominates when it is relatively small and the subject is not well-represented by the parameters.
🔖 Keywords:
#LLMChatbots
#ChatGPT
#Summarising
#TextShortening
#Parameters
#Context
#TrainingMaterial
#GenerativeAI