By: Dave Rosenlund, Global Director of Software & Solutions at Trundl
AI tools like ChatGPT, Rovo, Claude, and Cursor are rapidly reshaping how teams work. Atlassian Rovo, in particular, will have a huge impact on the work that happens in Jira, Confluence, and beyond. From smart summaries to AI-assisted updates and automation, the promise is clear: faster work with less friction.
But when AI shifts from assistive to default, and teams start trusting AI-generated content without question, new risks emerge.
This post explores the growing concern of AI overuse in Atlassian tools, the behavioral patterns that signal over-reliance, and how to build guardrails that keep your team aligned and accountable.
AI in the Atlassian Teamwork Collection
As Atlassian rolls out generative AI across its cloud platform, teams are testing features like:
Smart summaries and status updates in Atlassian Home
Auto-filled field suggestions in Jira work items
AI-powered page creation in Confluence
These tools are helpful — when used thoughtfully. But I’ve started seeing subtle shifts:
AI-generated updates copied into Jira or Confluence without review
Content published without clarity on whether it came from a human or the bot
Decisions made based on summaries that sound complete but miss the nuance
Behavioral Risks of Over-Reliance on Rovo and Friends
Technical risks like security and data governance are well known. But the behavioral risks of GenAI use in Atlassian tools are growing just as fast:
1. Trusting the output too much
Content that “sounds good” gets published without being validated.
2. Losing ownership
When the AI writes something, who owns it? Who checks it? Silence is risk.
3. Documentation debt
AI can flood your Confluence space with pages. Not all of them help.
4. Context collapse
Jira work items filled by bots might lack the insight or intent that humans bring.
5. AI workslop
Coined in Harvard Business Review, “workslop” refers to AI-generated content that clogs systems with vague, polished, low-value text. It looks right, until you need to use it.
How to Use Atlassian AI Tools Without Losing the Plot
To avoid drifting into default AI behavior, we recommend:
Labeling AI-generated content in Confluence, Jira work items, and internal documentation (see the sketch after this list)
Building review into your workflow — don’t treat bot output as final
Defining team-level AI usage norms (don’t wait for corporate policy)
Emphasizing judgment over speed
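To make the first point concrete, here is a minimal sketch of how a team might flag AI-assisted Jira work items with a label via the Jira Cloud REST API. The site URL, the `ai-assisted` label name, and the issue key are illustrative placeholders, not Atlassian conventions; adapt them to whatever norms your team agrees on.

```python
# Minimal sketch: tag a Jira Cloud work item with an "ai-assisted" label
# so reviewers can see at a glance that AI helped write it.
# Site URL, label name, and issue key below are illustrative placeholders.
import os
import requests
from requests.auth import HTTPBasicAuth

SITE = "https://your-team.atlassian.net"  # placeholder Jira Cloud site
AUTH = HTTPBasicAuth(
    os.environ["ATLASSIAN_EMAIL"],        # account email
    os.environ["ATLASSIAN_API_TOKEN"],    # Atlassian API token
)

def mark_ai_assisted(issue_key: str, label: str = "ai-assisted") -> None:
    """Append a label to a work item without overwriting existing labels."""
    resp = requests.put(
        f"{SITE}/rest/api/3/issue/{issue_key}",
        json={"update": {"labels": [{"add": label}]}},  # add, don't replace
        auth=AUTH,
        headers={"Content-Type": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    mark_ai_assisted("PROJ-123")  # illustrative issue key
```

The same idea could just as easily be expressed as a Jira Automation rule or a Confluence page label; what matters is that the marker is applied consistently so reviewers know which content still needs a human pass.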
These steps are especially important as Atlassian AI features expand and new automation possibilities open up.
Want the Full Analysis?
In my recent Atlassian Community article, I share the five most common AI overuse patterns we’re seeing, offer practical guidance for PMs, admins, and tech leads on how to keep humans in the loop, and introduce a framework for spotting invisible AI drift before it becomes a risk.
👉 Read the full post: When AI Stops Helping and Starts Hiding Risk
Final Thought
AI in Atlassian tools is powerful — and still evolving. But the biggest risks aren’t technical. They’re behavioral: the ways we change how we work when we stop thinking critically about what the AI gives us.
Use it. Just don’t forget to stay human.
Dave Rosenlund is an Atlassian Community Champion and the founder of the virtual Atlassian Community Events (ACE) chapter, CSX Masters – fka ITSM/ESM Masters. He’s also a founding leader of the Program/Project Masters chapter and part of the Boston ACE leadership team. In his day job, he works with an amazing cast of colleagues at Platinum Atlassian Solution Partner, Trundl.

