Metadata
Title
Getting started with AI for Professional Services Staff
Category
undergraduate
UUID
19ac0264b6134f839600542c4fe40e0e
Source URL
https://oerc.ox.ac.uk/ai-centre/ai-guides/getting-started-with-ai-for-profession...
Parent URL
https://oerc.ox.ac.uk/ai-centre
Crawl Time
2026-03-09T03:35:38+00:00

Getting started with AI for Professional Services Staff

Source: https://oerc.ox.ac.uk/ai-centre/ai-guides/getting-started-with-ai-for-professional-services-staff Parent: https://oerc.ox.ac.uk/ai-centre

This page contains guidance and advice to help Professional Services Staff get started with generative AI tools.

What can professional services staff use generative AI for?

Generative AI offers significant opportunities to enhance professional practice at the University when used with care, professional judgement, and an up-to-date understanding of how it works. These tools respond to your prompts using complex models together with a suite of background tools, such as search, canvas, or data analysis. Used with discipline, they can streamline routine workflows and free up time for work focused on strategy, innovation, and impact.

Staff can use generative AI for a wide range of tasks.

Professional staff should always review, revise, and contextualise AI-generated outputs.

You can find out how other professional services staff have been using AI at Oxford with our AI in Professional Services Case Studies.

Benefits

When used thoughtfully with critical oversight, generative AI can support:

University-Supported AI Tools

These tools have enterprise agreements with the University of Oxford. When you are signed in with your SSO, they provide data security and privacy protections, making them suitable for work involving confidential information.

Other AI-powered applications

These tools can be useful for a range of tasks but do not have enterprise agreements with the University. In line with University information security policy, they must not be used for Confidential or Secret information. They should only be used for information classified as Public.

10 Guidelines for Using Generative AI

1. AI Has Knowledge, But It Can Be Inattentive

It's a misconception that AI has no concept of knowledge or that it simply predicts the next word. It's more accurate to say that it chooses the most appropriate next token (a word or part of a word) using complex techniques. The model does have knowledge; however, like a human, it can be inattentive to the need to check its facts, which can lead to errors.
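
As a purely illustrative sketch (not how any particular product is implemented), next-token choice can be pictured as turning raw model scores into probabilities and then selecting a token. The vocabulary and logit values below are invented for the example:

```python
import math
import random

# Toy vocabulary with invented "logits" (raw model scores) for the next token.
logits = {"Oxford": 4.2, "Cambridge": 2.1, "banana": -3.0}

# Softmax turns logits into a probability distribution over tokens.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding always picks the most probable token...
greedy = max(probs, key=probs.get)

# ...but products usually sample, so outputs can vary from run to run.
sampled = random.choices(list(probs), weights=list(probs.values()))[0]
```

Sampling rather than always taking the top token is one reason the same prompt does not always give the same answer.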

2. Its Power Comes From Tools

An AI model's ability to search the web, perform calculations, or recall past conversations isn't magic. It's achieved by calling on tools. Hidden instructions tell the model when to use a specific tool—like a search engine or a code interpreter—to find an answer or perform a task. Without tools, an AI is limited to its own internal, and sometimes outdated, knowledge.
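
The tool-calling loop can be sketched in a few lines. The tool names and the request format here are assumptions for illustration only; real products define their own tools and wiring internally:

```python
# Hypothetical tools the model can be given access to.
def web_search(query: str) -> str:
    return f"Top result for: {query}"

def calculator(expression: str) -> str:
    # Demo only: evaluates a simple arithmetic expression.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"web_search": web_search, "calculator": calculator}

def handle(model_request: dict) -> str:
    """If the model asks for a tool, run it and return the result;
    otherwise the model must rely on its internal knowledge."""
    name = model_request.get("tool")
    if name in TOOLS:
        return TOOLS[name](model_request["input"])
    return "(answered from internal knowledge)"

# e.g. the model decides a calculation needs the calculator tool:
result = handle({"tool": "calculator", "input": "17 * 24"})
```

Without an entry in the tool table, the request falls through to the model's own (possibly outdated) knowledge, which mirrors the limitation described above.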

3. Develop Your Intuition for Evaluation

Always verify AI outputs. Rather than aiming for a fixed "80% completion" from the AI, it's more effective to develop an intuition for when it is more efficient to stop prompting and start editing the output yourself. Errors are a normal part of the process; they often occur because the AI was "inattentive", its context window was too long, or it failed to use the right tool correctly. Your professional expertise is crucial for spotting these issues and finalising the work. Our training sessions will help you develop this intuition.

4. It Can Be Both Creative and Comprehensive—With the Right Tools

The idea that AI is only for creative tasks and not comprehensive ones is becoming less true. An AI's ability to handle a comprehensive task where nothing can be missed depends almost entirely on whether it has been given a tool for the job. If a task requires a specific checklist or process, the AI will likely fail unless a tool is available to guide it.

5. Outputs Have Limited Reproducibility

The same prompt can produce different results for you and a colleague. This isn't entirely random: the AI's context can include your entire chat history, and since your colleague has a different history, the model is working with different information, leading to a different output.
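
A minimal sketch of why this happens: the model never sees your prompt in isolation, but as part of a context assembled from earlier material. The history strings below are invented for illustration:

```python
def build_context(history: list[str], prompt: str) -> str:
    """Assemble the text the model actually sees: prior turns plus the new prompt."""
    return "\n".join(history + [prompt])

# Identical prompt, different histories, so different inputs to the model.
you = build_context(["Earlier: drafting a grant report"], "Summarise this.")
colleague = build_context(["Earlier: planning an away day"], "Summarise this.")
```

Because the two assembled contexts differ, different outputs are expected even though the final prompt is identical.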

6. Errors Are Normal and Often Explainable

It's normal for AI models to make mistakes or return errors. These failures are often explainable: the conversation became too long and information fell out of the model's "context window", the model failed to call the right tool, or it retrieved the wrong snippet of information to answer your question. Simply refresh the page and try again.

7. AI-Generated Text Cannot Be Reliably Detected (In Individuals)

AI detection tools may be useful for analysing thousands of documents in aggregate, but they are not reliable for judging a single piece of work. Furthermore, these detectors can be biased against text written by neurodivergent individuals or non-native English speakers.

8. Use the Right AI Product for the Right Job

Different AI products, like ChatGPT and Microsoft 365 Copilot, may use the same underlying model, but they are not the same. Each product is a unique interface that gives the model access to different tools and system prompts. Always choose the product most suited to the task at hand.

9. Test the Limits (But Don't Ask the AI About Itself)

AI capabilities are constantly changing. If you wonder whether a tool can do something, the best way to find out is to try it. However, it's not a good idea to ask the model about its own features or limitations. It is an unreliable manual for itself and will often provide incorrect information.

10. It Follows Hidden Instructions

In every chat, a hidden "system prompt" is working in the background. This prompt gives the AI its core instructions on how to behave, what personality to adopt, and, most importantly, which tools it can call and when to use them. This helps explain why it acts differently across various platforms and tasks.
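The shape of this is easy to picture. The structure below is modelled loosely on common chat APIs; the exact field names and the prompt wording are assumptions for illustration:

```python
# Illustrative chat structure: the system prompt sits ahead of the
# user's message but is hidden from the user in most products.
messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant. Be concise. "
            "Use the search tool for questions about current events."
        ),
    },
    {"role": "user", "content": "What's on at the Ashmolean this week?"},
]

roles = [m["role"] for m in messages]
```

Two products can send the same user message with very different system prompts, which is a large part of why the same underlying model behaves differently across platforms.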

Tips

Prompts

Email review prompt:

"Review this draft email to a University committee. Check for clarity, professional tone, and conciseness. Suggest improvements to ensure the key action points are unambiguous."

Project plan prompt:

"Create a detailed project plan for a six-month system implementation. Include clear criteria for each phase: discovery, configuration, user acceptance testing, and rollout. Ensure it outlines key deliverables and milestones. Ask me clarifying questions before you begin."

Premortem prompt:

"Review this detailed project plan. Assume that it is six months in the future and the project has failed. Investigate why the project failed, list its failure points, and rewrite the project plan to address them and secure project success."

Metaprompting

If you are unsure how to frame your task, you can ask the AI to write the prompt for you. This is useful for creating clear instructions for CustomGPTs or for setting up complex tasks.

Example: “I am creating a CustomGPT to act as a helpdesk assistant that answers staff queries about the University's expenses policy. Write a detailed prompt that defines its role, its professional and helpful tone, and how it should respond if it cannot find an answer in the uploaded policy documents. Write the instructions using markdown so I can copy and paste them.”

Dictation

Use the microphone button to speak your ideas rather than typing. Speaking allows for faster brainstorming, and AI can work effectively with unstructured language.

Policy & Guidance

The University of Oxford has established clear guidance for the use of generative AI. All professional services staff must adhere to these policies, particularly concerning information security and professional responsibility.

Key principles include:

Please refer to the University's central guidance for comprehensive details:
