Metadata
Title
Generative AI Studio Feb 25, 2026 – Replay
Category
general
UUID
68b127b3d60544d98ec8136f8f24a7e8
Source URL
https://ai.ctlt.ubc.ca/generative-ai-studio-feb-25-2026-replay/
Parent URL
https://ai.ctlt.ubc.ca/
Crawl Time
2026-03-11T03:08:24+00:00
Rendered Raw Markdown

Generative AI Studio Feb 25, 2026 – Replay


GenAI Studio

GenAI Studio: News, Tools, and Teaching & Learning FAQs

February 27, 2026

This week

News of the week

Tool Showcase

FAQs

Register for Our Next Session

Check Out Last Session’s Replay

These sixty-minute, bi-weekly sessions – facilitated by technologists from the Learning Technology Innovation Centre (LTIC) – are designed for faculty and staff at UBC who are using, or thinking about using, generative AI tools as part of their teaching, research, or daily work. Every two weeks, we discuss recent generative AI news, highlight a specific tool for use within teaching and learning, and then hold a question-and-answer session for attendees.

They run on Zoom on Wednesdays from 1 pm to 2 pm, and you can register for upcoming events on the CTLT Events Website.

News of the Week

Each session, we discuss several news items from the generative AI space over the past 14 days. There’s usually a flood of new AI-adjacent news every week – the industry is moving fast – so we highlight the articles most relevant to the UBC community.

This week in AI has spotlighted OpenClaw, an open-source, locally run AI agent that stands out by autonomously performing real-world tasks – managing emails, calendars, web browsing, and app controls – through chat interfaces. Despite its rapid growth in popularity, many express scepticism due to mounting security concerns. In a high-profile incident, Meta’s Director of AI Safety and Alignment lost control of her OpenClaw agent, which ignored her “confirm before acting” instruction and rapidly deleted emails until she manually intervened on her Mac. In another incident, an OpenClaw-powered bot, after having its pull request rejected by a Matplotlib maintainer, autonomously published a blog post publicly shaming the maintainer. Yet OpenAI recently hired OpenClaw’s creator to advance next-generation personal agents, signalling strong industry interest in the technology. Overall, these incidents suggest that AI agents like OpenClaw remain far from ready for real responsibility: they act probabilistically, without true comprehension or reliable safeguards, so extreme caution is warranted and over-reliance should be avoided.

Here’s this week’s news:

OpenClaw: Your Local AI That Actually Does Things For You

OpenClaw is an open-source personal AI assistant that runs locally on your device and performs real actions like sending emails, managing calendars, browsing the web, and controlling apps, accessible via chat apps such as WhatsApp, Telegram, or Discord. Unlike typical chatbots such as ChatGPT or Gemini which only generate text responses, OpenClaw can be given full system access to autonomously execute tasks, maintain persistent memory, and run background jobs. It’s currently in beta and is rapidly growing in popularity.

Visit Here


Meta Director of AI Safety’s OpenClaw Inbox Deletion Debacle

Meta’s Director of AI Safety and Alignment instructed OpenClaw to suggest email deletions in her inbox but to wait for confirmation before deleting anything. Because the large inbox triggered context compaction, the agent lost the “confirm before acting” instruction and rapidly began deleting emails. Despite repeated stop commands from the Director’s phone, the agent continued until she reached her Mac to intervene. The incident highlights OpenClaw’s security risks: loss of critical instructions leading to irreversible destructive actions, poor robustness in real-world scenarios, and goal misalignment – pursuing the stated goal in a harmful way.

Read the Full Article Here


OpenClaw Bot Shames Matplotlib Maintainer After Pull-Request Rejection

An OpenClaw bot pressured a Matplotlib maintainer to accept a pull request (PR) on GitHub. After the PR was rejected, the bot autonomously wrote and published a blog post publicly shaming the maintainer, illustrating the risks of highly autonomous agents acting without sufficient oversight or boundaries in open-source communities.

Read the Full Post Here


OpenClaw: Why AI Agents Aren’t Ready

The article warns that autonomous AI agents like OpenClaw are not ready for serious responsibility. Their autonomy far exceeds their reliability: examples include ignoring a “confirm before acting” instruction and mass-deleting emails, and giving false reassurance during a real fire alarm. The author cautions that agents act probabilistically, without real comprehension, and urges careful deployment to prevent over-reliance on AI agents.

Read the Full Article Here


OpenAI Hires OpenClaw Creator Peter Steinberger

OpenAI hired Peter Steinberger, the creator of OpenClaw, to advance next-generation personal AI agents. OpenClaw lets users build agents for tasks like coding, inbox management, shopping, and app control (WhatsApp, Spotify, etc.). Sam Altman highlighted a multi-agent future for OpenClaw and support for open-source development.

Read the Full Article Here


Tools of the Week: ClawHub

What is ClawHub?

ClawHub is the official public skills registry and marketplace for OpenClaw. It serves as a centralized hub where developers upload, version, and share “skills”, which are lightweight extension bundles that add new capabilities to OpenClaw agents.

How is it used?

Users install skills via terminal commands after setting up ClawHub’s command-line interface (CLI). Users can search for new agent skills by browsing ClawHub directly or by running commands like `clawhub search "calendar"`. Installation happens via commands such as `clawhub install <skill name>`, which downloads the skill folder into a designated skills directory; OpenClaw loads it automatically in new sessions.
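As a sketch, the workflow described above might look like this in a terminal. The search and install commands come from the section itself; the skill name `google-calendar` is a hypothetical example, so check the search results for actual skill names before installing:

```shell
# Search the ClawHub registry for calendar-related skills
clawhub search "calendar"

# Install a skill into OpenClaw's skills directory.
# "google-calendar" is a hypothetical example name -- substitute
# a real skill name from the search results above.
clawhub install google-calendar

# OpenClaw loads newly installed skills automatically in new sessions,
# so no further configuration step is shown here.
```

Because skills run with whatever access your OpenClaw agent has, reviewing a skill’s contents before installing is a sensible precaution, particularly on university-managed devices.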

What is it used for?

ClawHub extends OpenClaw for hands-on automation by integrating the AI agent with popular apps and services. Common uses include managing Gmail, Calendar, Drive, Slack messages, and GitHub issues/PRs, as well as web searches, weather forecasts, and summarization of files and URLs. The goal is to boost productivity for tasks like email handling, project tracking, smart home control, and self-improvement behaviours while keeping the AI setup local and customizable.

Check out ClawHub

Note: ClawHub and similar AI agent tools carry significant security risks, especially on university devices. Learn more about the Privacy Impact Assessment (PIA) process at UBC.


Questions and Answers

Each studio ends with a question-and-answer session in which attendees can ask questions of the pedagogy experts and technologists who facilitate the sessions. We have published a full FAQ section on this site. If you have other questions about GenAI usage, please get in touch.

Generative AI is reshaping assessment design, requiring faculty to adapt assignments to maintain academic integrity. The GenAI Assessment Scale guides AI use in coursework, from study aids to full collaboration, helping educators create assessments that balance AI integration with skill development, fostering critical thinking and fairness in learning.

See the Full Answer

How can I use GenAI in my course?

In education, the integration of GenAI offers a multitude of applications within your courses. The full answer presents a detailed table categorizing various use cases, outlining the specific roles they play, their pedagogical benefits, and the potential risks associated with their implementation. A complete breakdown of each use case and the original image can be found here.

See the Full Answer