home.social

#co-pilot — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #co-pilot, aggregated by home.social.

  1. So this is a mini essay I wrote for my employers, to explain why I refuse to use AI tools at work. They have recently been pushing it and I wanted to make my position crystal clear and attempt to open up discussions. I'm not in a management position so I don't have any voice when it comes to decision making. I also struggle to express myself verbally and miss out on context.

    I initially sent it to my manager on Tuesday. She then had a meeting with her manager and brought it up, and he suggested I send it to him and 2 members of the extended leadership team above him, who are directly below the CEO.

    My manager's manager's response was very positive; he messaged me to say it was very powerful and that he wanted to take the weekend to process it.

    Anyway, it's not my best writing, but here it is.

    #aislop #copilot #antiai #ai

    Why I refuse to use AI tools such as Co-pilot, ChatGPT, Claude, etc.
    Written by human hands and mind – Jax Ven****

    As *** leaders increase their push for employees to use AI tools, I would like to lay out the reasons why I refuse to do so. I feel the need to do this in order to show that I am not acting out of a fear of new technology, but as someone who understands technological progression and has been interested in this field for decades, studying virtual reality and AI at university 15 years ago and following the industry closely since. I also hope that this may convince you to pause and reflect, and commit to allowing every employee to choose for themselves if they wish to use AI tools, without being penalised or left behind should we choose not to.

    I have always been very optimistic about what AI could bring us and how it could benefit our lives not just in the workplace, but also at home and for society in general. However, to borrow a phrase used often in online tech circles, ‘this is not the AI we were promised’.

    Instead we have AI that is unreliable at best and, at worst, a risk to our lives and our environment.

    The environmental impact of data centres is huge. A recent report by the IEA (International Energy Agency) found that data centre energy usage surged during 2025 and is set to continue rising.

    “According to the report – Key Questions on Energy and AI – power consumption per AI task is declining rapidly, with efficiency improving at a rate unprecedented in energy history. However, more people are using AI, and energy-intensive uses – such as AI agents – are on the rise. As a result, electricity consumption from data centres is set to double by 2030, and power use from those focused on AI is poised to triple.”

    iea.org/news/data-centre-elect

    The IEA article goes on to speculate that AI may drive the creation and large-scale adoption of greener tech, but we are not there yet and the current state of play is dangerous and damaging to our environment right now, regardless of future potential. Future potential does not cancel out current harm.

    I do not wish to contribute to this.

    In addition to the environmental impact, the creation of new data centres is having a detrimental effect on neighbouring communities, whose wishes are blatantly disregarded. For example, residents of a town in Michigan voted overwhelmingly against a 21-million-square-foot data centre being built close to their town, and the town commission also voted to reject it because of the impact it would have on the local environment, electricity demand and traffic. Related Digital (OpenAI, Stargate Initiative) successfully sued the town and is going ahead anyway.

    fortune.com/2026/05/06/ai-data

    These data centres are costing billions upon billions. The people paying for them are well aware that they have enough money to do whatever the hell they like, making promises of increased opportunities and future green tech all while risking the destruction of the communities around them.

    I do not wish to contribute to this.

    AI is now being used in war. The same companies whose tools summarise our emails and generate our slide decks are now being enlisted for cyber defence.

    “WASHINGTON — On April 27, the Army convened 14 senior cybersecurity executives from leading technology companies at the Pentagon for the second iteration of its artificial intelligence tabletop exercise, an effort designed to accelerate adoption of agentic AI for cyber defense.
    The exercise, known as AI TTX 2.0, brought together C-suite leaders from companies including Amazon Web Services, Google, Microsoft, OpenAI, CrowdStrike, Palo Alto Networks and others alongside Army and Department of War leadership. The Office of the Principal Cyber Advisor hosted the half-day event, with design and moderation support from the Special Competitive Studies Project, and partnering organizations including U.S. Cyber Command, U.S. Army Cyber Command and the Army Cyber Institute at West Point.”
    army.mil/article/292158/army_c

    I do not wish to contribute to this.

    The effect of regular use of AI tools on cognitive function is still being studied but so far the results are extremely concerning. I enjoy using the skills I’ve developed over the last 30 years. I enjoy figuring things out and learning new things. I enjoy putting my thoughts into words with my own voice. These are the things that motivate me.

    I firmly believe that the more we rely on AI tools, the easier it becomes to offload simple tasks to them, and the temptation to have them do as much of our workload as possible will be too great, especially when we are being told to use AI tools to increase our productivity.

    “A new MIT study titled, Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, has found that using ChatGPT to help write essays leads to long-term cognitive harm—measurable through EEG brain scans. Students who repeatedly relied on ChatGPT showed weakened neural connectivity, impaired memory recall, and diminished sense of ownership over their own writing. While the AI-generated content often scored well, the brains behind it were shutting down.”

    publichealthpolicyjournal.com/

    I do not wish to be a victim of this.

    Things I am also concerned about, but have not written about here in great detail (or this would be 20 pages long), are:

    ‘Enshittification’ of the internet: we can no longer trust search results, or trust that academic papers, news reports, images, videos and music are not AI-created.

    Security risks: apps and software developed by ‘vibe coding’ are being found to contain serious security flaws that would enable hackers to obtain sensitive customer and company data. Who is checking vibe coders’ code?

    forbes.com/sites/jodiecook/202

    AI is a technology that I would love to be using, and it should be a natural progression of my career. I should relish digging in and getting to know how everything works, being creative and finding new ways to use it. That’s who I am. I would fully embrace it and advocate for it. But not in its current format, with its current harms, and its current masters. The likes of Elon Musk, Sam Altman and Jensen Huang are billionaires who do not live in the same reality as the rest of us, and do not have our best interests at heart. The AI models these people are enabling are not the AI we were promised. For all of the reasons outlined above, I cannot in good conscience contribute by becoming a user. This is at the core of my ethics and my beliefs, and it would devastate me to be forced to take part. This may seem dramatic, but I am just one of many, many people worldwide who are also refusing to take part, and that number is growing day by day. I guess we are ‘conscientious objectors’.

    It’s not just about an individual’s personal use. One could argue that the amount of energy one person uses, or their monetary contribution to AI companies from simple day-to-day workplace tasks, is not great enough to be an issue. However, it is about collective use and about ethical standpoints. Do we, as a company with a mission to help people embrace greener technology, really want to contribute to all of these things? Sometimes the only power we have is to choose where our money goes. It’s something I do as an individual consumer and something that companies can do on a grander scale to take a stand and be on the right side. Yes, I understand the need to increase productivity and remain competitive, but we were already on the right track before the push to use AI tools. I also believe it is a mistake to rely on them too much, as subscription costs are set to soar and the ‘AI Bubble’ predictions are looking more and more likely. I think it’s far better to pause or greatly limit use, allow employees to decide they don’t wish to use it at all, and see what the state of play is in a year or two. ‘Fear of Missing Out’ is a very real phenomenon that I sadly see playing out here.

    I guarantee I am not the only one at *** who feels this way, but with the job market as it is right now (thanks to AI) it can be very risky to speak out. I know people in other companies who are being forced to use AI tools or risk losing their jobs, and I would like to think that we are better than that at Pod, but this still feels risky. However, I cannot stay silent any more; I need to make my position, and my reasoning for it, crystal clear, and I hope that everything I have outlined can be given serious thought.

    Thank you for reading and I look forward to discussing this in more detail should you wish.

  2. #Copilot has been closing some notable gaps lately, especially in #Outlook, but one big one that remains is the ability to work with Microsoft To-Do tasks (which are basically Outlook tasks now).

  3. What’s new in Power Platform: May 2026 feature update
    smarter Copilot experiences, faster app builds, deeper automation, and new governance tools. Big step toward more secure, AI‑driven low‑code. #PowerPlatform #Copilot #LowCode #AI #Microsoft
    microsoft.com/en-us/power-plat

  4. What’s new in Power Platform: May 2026 feature update smarter Copilot experiences, faster app builds, deeper automation, and new governance tools. Big step toward more secure, AI‑driven low‑code. #PowerPlatform #Copilot #LowCode #AI #Microsoft


  5. #Copilot is getting better quickly, but one place I still see a big gap between Copilot and Claude (and ChatGPT) is the connectors to 3rd party services. At the moment it's a lot easier to connect to those other services with those other AI tools.

  6. It appears people are desperately trying to ride the #AI hype train and create new, artificial products. In the last few days one of my projects was indexed by some skills-indexing site and another service converted my agent instructions into other formats. Uh, thanks?! I didn't ask for this…

    #LLM #GenAI #Copilot #Claude

  7. winbuzzer.com/2026/05/15/micro

    Microsoft is winding down Anthropic's Claude Code in its Experiences + Devices org & moving devs across Windows, Microsoft 365, Outlook, Teams, and Surface to GitHub Copilot CLI.

    #AI #ClaudeCode #GitHubCopilot #Microsoft #GitHub #Copilot #Anthropic #Claude #AICoding

  8. Having a direct comparison of Copilot vs Claude for the first time.

    First impression: Claude gives better answers that are less verbose and more comprehensive.

    Interaction during agentic coding feels a bit more fluent with Claude.

    #coding #ai #claude #copilot

  9. I guess I'll get used to this entry point, but not sure what's up with the weird branding. Can we switch the icon to #Clippy? 🤣 #Copilot

  10. Power Apps introduces closed‑loop learning: agents now learn from real user corrections, turning feedback into org‑wide improvements. Less manual fixing, smarter automations, better accuracy over time. #PowerApps #AI #Automation #EnterpriseAI #Copilot #MCPServer


  11. Power Apps introduces closed‑loop learning: agents now learn from real user corrections, turning feedback into org‑wide improvements. Less manual fixing, smarter automations, better accuracy over time.
    #PowerApps #AI #Automation #EnterpriseAI #Copilot #MCPServer
    microsoft.com/en-us/power-plat

  12. Our favourite guests at Mini shows are always the Co-Pilots 🐾

    Check out our Roma, Winnie, Otto and Poppy being the best models there are at @britishminiclub (instagram.com/britishminiclub) Himley Hall last weekend ❤️

    Shout out to @stellarphotography2023 (instagram.com/stellarphotograp) for the pics 📷

    If you have a Mini Co-Pilot, check out @minicartags (instagram.com/minicartags) for some fab accessories for them!

    #mini #minigirlsuk #copilot #minicooper #doggyphotoshoot

    instagram.com/p/DYVAvTJjr8Z

  13. We are THREE WEEKS away from Tennessee's next Microsoft Community Day in #Memphis. Are you registered??? 🚨🚨🚨🚨
    FREE tickets are available!

    —-
    #Microsoft365 #azure #ai #copilot #github #powerbi #microsoftfabric t.co/CQaTagxxaq

    — Daniel Glenn (@danielglenn)
    May 14, 2026