
COPILOT AGENTS - MAKING OLD STUFF GREAT AGAIN



author: Jonathan Stuckey


Looking at the practical application and impact of Copilot (read: the brand) and Agents (read: the bot UI) for SharePoint, I'm hard pressed not to be impressed with the output. But given how easy the packaging makes it all look, you have to take a step back, remember that what is presented has to be reviewed in real-world context, and make sure you apply practical quality-assurance steps when you use it.


So I'm going to give you some key points from recent discussions with keen observers, not just my own observations.


Background

I have been neck-deep in Generative AI and associated technologies for a long time - heck, I did my degree thesis on natural language interfaces and genetic algorithms before they were even practical, 30 years ago. So when I see something like Copilot Agents effectively giving me what I was after all that time ago, I get excited.


Then I remember I'm from Yorkshire, so I dampen the enthusiasm with a 10lb gob of real-world experience to balance out the giddiness, and start taking it apart to see where it's been bodged or prettied-up to distract the general user.


What to watch out for when adopting?

At the end of 2024 the number one thing to watch out for was hype. The noise level across all channels was deafening and often confused - and 2025 looks no different so far.


Secondly, there are a lot of people claiming to know what they are doing, but who don't have practical, real-world experience of what happens when AI adoption meets frightened people. We all need to understand that people are still trying to figure out what the landscape of AI, Generative AI (not the same thing), Machine Learning and Automation will mean.


When you look at SharePoint Agents, you can let your smart users loose with a minimum of risk, because the out-of-the-box experience is wizard-driven and just requires:


  • Knowing where and what your key content is

  • Knowing a bit about the process or activity you are providing an agent for

  • Knowing how to write a structured query which takes account of 'Prompt' design practices


Sounds easy, right? Well, here's a summary of what I took away...


Terminology

Recommendation: Establish a glossary of terms, and then get someone sensible to review it with users.


Idea: this is a great task to pilot and test out Copilot BizChat's generative AI with.


Why? Specialists, technical people and marketeers have established a wide range of specific acronyms and AI terms which you will need a basic handle on to even stand a chance of getting things right. This is 'protectionism' in action: let's freeze out the non-specialists really early, before they get it.


The basics are pretty easy in definition and concept, but it gets murky pretty quickly. The key terms are easy to identify, and this was my off-the-cuff prompt for starting:


Create a basic set of terms and definitions related to Generative AI, Microsoft Copilot and SharePoint Agents. These need to be suitable for a non-IT focused person to be able to comprehend the basics when being onboarded to using these tools in business context. The definitions to include reference to general AI concepts like Content Sources, Grounding, Bias, Prompts, Bot, Refinement, Agents etc as well as Microsoft Copilot, SharePoint and potentially Purview related terms. These should be listed in an alphabetical order, in a tabular format, where attributes include: Term identified, a basic description, the scope term relates to for Generative AI topic, Microsoft Copilot, SharePoint, Agents or wider 365 platform; and URL or Link to an authoritative resource which can be checked.


...it needed some more work, but hey - 25 seconds wasn't bad for a starting point.
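
For illustration only (my trimmed-down paraphrase, not Copilot's verbatim output), the first few rows came back looking something like this:

Term | Description | Scope | Reference
Agent | A task-focused assistant, built on Copilot, that answers from a defined set of content | SharePoint / Copilot | Microsoft Learn
Grounding | Tying a model's answers to specified source content rather than its general training data | Generative AI | Microsoft Learn
Prompt | The instruction or question you give a generative AI tool to shape its response | Generative AI | Microsoft Learn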


Agent Grounding

Recommendation: engage actual business subject experts to help define and train your pilot Agents on real business topics - do not just use an IT- or data-engineer-heavy team.


Idea: the pilot group should be mostly business SMEs, and not just the most vocal ones. Include (at minimum) weekly round-ups and recommendation feedback in the tool UI (i.e. acknowledge good vs. bad results, and give details why).


Why? Training of your Copilot model or SharePoint Agent risks being skewed by early-adopter feedback, especially if you are relying on one or two IT people who are not dealing with core front-facing business.

[Image: carving of the three wise monkeys (CC BY 3.0, John Snape) - see no evil, hear no evil, speak no evil... but lost the nuts]

It really pays to understand how Microsoft Copilot and SharePoint Agents work, what the policies and controls do, and where an agent breaks down when it comes to grounding.


When you start with Copilot, the large language model's weightings start to mould around the feedback and usage provided. If you pilot in a small, specific group, this skews the weighting towards the most prolific responders - who are often not representative of the majority of your users.


Balance and rigorous (project team) peer review are critical - unless you want to do a full reset of your tenancy's 'learnt' model behaviours and start again at roll-out...


User Training

Recommendation: Go heavy on user training covering use-cases, scenarios and live examples - providing good examples of prompt construction and definition. Include peer review and active (planned) sharing of prompts and responses.
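
To illustrate what I mean by prompt construction (my own made-up example, not a mandated pattern): rather than "summarise the leave policy", a structured prompt reads something like "Using only the HR Policy library, summarise the annual leave entitlement for permanent staff in five bullet points, and cite the source document for each point." Scope, source, output format and evidence are all spelled out up front.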


Idea: Make sure you train your people in critical thinking when questioning, in order to get the best agent definition, and in how to identify (and reduce) weakness and inaccuracy in your model.


Why? With an agent grounded in your content, users knowing the source content well enough to critique agent responses is as important as getting the right content in, or managing access when sharing.


Every generative AI tool available now shouts "AI-generated content may be wrong", so the emphasis is that this is a tool, like a spell-checker or a formula generator. It is the user's responsibility to make sure that the output is suitable and correct.


Until there are some really good ways to recognise AI-generated content that has been incorporated into day-to-day output, we have to assume AI was used - and no one wants to be a scapegoat.


Quality assurance and accountability

Recommendation: Train for user vigilance in how responses are used, and incorporate quality review of the content and the references used, involving topic subject-experts or functional owners.


Idea: Introduce community peer-review, and have accountability impacts that are real. You don't want to scare people away, but they need to understand their actions (or lack thereof) have consequences when using these tools.


Why? QA and editorial skills are essential because, ultimately, if you "publish" or use Agent-generated content, then you are responsible for the output. If it's wrong, that's your fault. Unfortunately, not many organisations offer training for operational roles that covers how to critically analyse information and perform editorial review, i.e. how to assess content for quality and suitability against the organisation's writing and content standards - ensuring accuracy, clarity, and relevance to the target audience.


To change behaviours, skills need to be taught and (continually) reinforced - so this change management is not about a project, it's about a long-term change of organisational practice.


Summary


Quick and useful value can be delivered by targeting Agents at specific content or processes - mostly ones where your people know the content, process and activities are 'somewhere', but would normally speak to your go-to person to get the answers.


Generating a list of target roles, business activities, and types of content/documentation that could benefit from AI and Agents is shockingly simple (see my next article). Making the business change to accept them and improve its working practices - well, that's harder.


Watermarking of generated content is not yet available, and template-embedded disclaimers will only work in some business areas. Until we have good methods of identification, we need excellence in our professional practices.


Recommendations


Personal learnings from using Microsoft 365 Copilot:


  1. be very familiar with the source content the agent has been grounded in

  2. turn off the magic "AI" button to ensure responses come from the appropriate sources*

  3. always QA the output, and rate responses - good and bad

  4. practise your prompts and refinement questions

  5. peer-review prompts and output


*this is a Copilot Studio option when editing an Agent definition


Learnings when creating Agents:


  1. do a basic foundation training course - it's really useful

  2. create Copilot Agents (née Power Virtual Agents) - it helps in fine-tuning scope

  3. watch Daniel Anderson's session on "The flipped conversational model"

  4. schedule a Copilot Studio course once you're confident with prompts


Organisation adoption of SharePoint Agents


  1. limit access to the pilot group with policy

  2. trial the process on known or managed datasets, e.g. company policies, code of conduct, etc.

  3. validate access permissions and roles for content (see the sketch after this list)

  4. establish your deployment process checklist (or ask me)

  5. establish 'agent' best-practices for Copilot Studio editing
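
On point 3: permissions are where grounded agents most often surface content they shouldn't. Below is a minimal sketch in Python of how you might spot-check this via Microsoft Graph. It assumes you already hold a valid access token with suitable Sites read permissions (token acquisition and result paging are left out for brevity), and the site name is hypothetical:

# Minimal sketch: audit who can see the content an agent is grounded on,
# using Microsoft Graph. Assumes a valid access token (e.g. acquired via
# MSAL) with Sites.Read.All or similar consent already granted.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # placeholder - obtain via MSAL in real use
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def get_site_id(hostname: str, site_name: str) -> str:
    """Resolve a SharePoint site to its Graph site id."""
    r = requests.get(f"{GRAPH}/sites/{hostname}:/sites/{site_name}", headers=HEADERS)
    r.raise_for_status()
    return r.json()["id"]


def audit_library_permissions(site_id: str) -> None:
    """List each item in the default document library and who it is shared with."""
    items = requests.get(f"{GRAPH}/sites/{site_id}/drive/root/children", headers=HEADERS)
    items.raise_for_status()
    for item in items.json().get("value", []):
        perms = requests.get(
            f"{GRAPH}/sites/{site_id}/drive/items/{item['id']}/permissions",
            headers=HEADERS,
        )
        perms.raise_for_status()
        print(item["name"])
        for p in perms.json().get("value", []):
            granted = p.get("grantedToV2", {}).get("user", {}).get("displayName")
            print("  ", p.get("roles"), granted or p.get("link", {}).get("scope"))


if __name__ == "__main__":
    site_id = get_site_id("contoso.sharepoint.com", "Policies")  # hypothetical site
    audit_library_permissions(site_id)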


Want to know more?

Well, I will be recording and providing some real-world examples, which I'll post in my next article shortly - but I recommend you stop messing around and give me a call.




About the author: Jonathan Stuckey


