Smarter Moderation by Design: An AI-Lensed Human-Centered Evaluation of Discord’s AutoMod

My friend and I are constantly exploring and judging new AI products, and we recently took a closer look at Discord’s AI moderation tool, AutoMod. Our goal was simple: to understand and evaluate Discord’s AI feature and decide whether it is genuinely accessible or just a faff.
This isn’t just about Discord, though. We are in the process of creating a guide on how to thoughtfully evaluate any AI-driven feature, and this time we are using Discord’s AutoMod as a clear, practical example.

Why Discord’s AutoMod?

What if hate speech never made it to your chat in the first place?
That’s exactly what Discord aims to achieve with AutoMod. Unlike typical moderation tools that clean up after harm has already occurred, AutoMod intercepts harmful content before anyone sees it. It sounds bold, almost idealistic, but we wanted to know if it truly delivers.

For this, we used Google's PAIR framework (People + AI Research) to break down AutoMod’s effectiveness clearly and simply. Here’s what we checked for:

  • Real Value – Does the AI genuinely solve important problems?

  • Transparency about Privacy – Is it clear what happens to user data?

  • Simplicity – Is it easy enough for anyone to understand and use?

  • Accountability – Can users correct AI errors easily?

  • Safe Customization – Can users freely adapt the AI to their needs without worry?

Here’s what we found:

1. AI Must Actually Help

Let’s start simple: AI should add real value, not exist just for the sake of it. On a Discord server with 10k+ members, moderators handle thousands of messages a day, sifting through endless spam and catching offensive content before it hurts someone. It’s overwhelming work. AutoMod adds significant value here by instantly flagging harmful content (hate speech, explicit images, spam) so moderators don’t have to manually police every message. It frees them to engage more meaningfully with their communities, preventing burnout and frustration.

2. Be Honest About Privacy

Privacy isn’t optional, especially when AI tools scan personal conversations and images. Discord’s AutoMod scans nearly all user-generated content, but here’s the thing: Discord doesn’t clearly explain what happens next. Is your message logged? Does it help train AutoMod further? The general privacy policy isn’t specific enough. Vagueness quickly erodes trust, and in AI-driven moderation, trust is everything. This transparency gap made us uneasy.

3. Keep It Simple: Explain Benefits, Not Tech

When AI feels like a complex experiment, users shy away. Discord clearly understands this principle. AutoMod’s instructions focus on outcomes everyone understands—like "Automatically block explicit images," rather than technical jargon like "Deploy computer-vision algorithms." This makes AutoMod accessible to moderators regardless of their tech background, meaning more people actually use and benefit from it.

4. Allow Users to Correct Mistakes

AI makes mistakes. That’s inevitable. What matters is whether people can step in and correct them. Discord openly acknowledges AutoMod isn’t perfect, giving moderators the power to override false positives (safe messages mistakenly blocked) and catch false negatives (harmful messages slipping through). Moderators can easily tweak settings or provide feedback, turning AutoMod’s imperfections into opportunities for improvement.
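To make that override concrete, here’s a rough, illustrative sketch (ours, not lifted from Discord’s documentation) of how a moderator’s bot could add a wrongly flagged word to a rule’s allow list through Discord’s Auto Moderation API. The bot token, server ID, rule ID, and the example word are all placeholders, so treat this as a sketch to check against Discord’s current API reference rather than a drop-in script.

```python
import requests

API = "https://discord.com/api/v10"
BOT_TOKEN = "YOUR_BOT_TOKEN"        # placeholder: a bot with Manage Server permission
GUILD_ID = "123456789012345678"     # placeholder: your server's ID
RULE_ID = "987654321098765432"      # placeholder: the AutoMod rule that misfired

headers = {"Authorization": f"Bot {BOT_TOKEN}"}

# Fetch the existing rule so the current keyword filter stays intact.
rule = requests.get(
    f"{API}/guilds/{GUILD_ID}/auto-moderation/rules/{RULE_ID}",
    headers=headers,
    timeout=10,
)
rule.raise_for_status()
metadata = rule.json().get("trigger_metadata", {})

# Add the false positive to the allow list and push the update back.
allow_list = set(metadata.get("allow_list", []))
allow_list.add("scunthorpe")        # a harmless word the filter wrongly caught
metadata["allow_list"] = sorted(allow_list)

update = requests.patch(
    f"{API}/guilds/{GUILD_ID}/auto-moderation/rules/{RULE_ID}",
    headers=headers,
    json={"trigger_metadata": metadata},
    timeout=10,
)
update.raise_for_status()
print("Rule updated:", update.json()["name"])
```

The detail we liked is that the correction is additive: the moderator doesn’t have to disable the whole rule to rescue one false positive, which keeps the safety net up while the mistake gets fixed.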

5. Enable Safe Customization and Exploration

Every online community is unique. Some servers are family-friendly, others edgy and playful. Discord smartly designed AutoMod to adapt to this variety. Moderators can safely test new moderation settings, like adjusting sensitivity, experimenting with custom rules, and changing filters, without risking major disruptions. AutoMod doesn’t feel rigid; it feels flexible and empowering. This flexibility encourages moderators to tailor moderation precisely to their community’s personality and values.
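For readers who want to see what a “custom rule” actually is, here’s a minimal, illustrative sketch of creating one through Discord’s Auto Moderation API, the same machinery the AutoMod settings screen drives. The token, IDs, and word list below are placeholders of our own, and the field values should be verified against Discord’s current API docs.

```python
import requests

API = "https://discord.com/api/v10"
BOT_TOKEN = "YOUR_BOT_TOKEN"                 # placeholder bot token
GUILD_ID = "123456789012345678"              # placeholder server ID
ALERT_CHANNEL_ID = "111111111111111111"      # placeholder mod-log channel ID

rule = {
    "name": "Community-specific word filter",
    "event_type": 1,        # 1 = MESSAGE_SEND
    "trigger_type": 1,      # 1 = KEYWORD (a custom word list, not a preset)
    "trigger_metadata": {
        "keyword_filter": ["buy followers", "*free nitro*"],  # illustrative terms only
    },
    "actions": [
        {"type": 1},        # 1 = BLOCK_MESSAGE: stop the message before anyone sees it
        {"type": 2,         # 2 = SEND_ALERT_MESSAGE: quietly notify the mod team
         "metadata": {"channel_id": ALERT_CHANNEL_ID}},
    ],
    "enabled": True,
    "exempt_roles": [],     # roles the rule should never apply to
    "exempt_channels": [],  # channels where the rule is switched off
}

resp = requests.post(
    f"{API}/guilds/{GUILD_ID}/auto-moderation/rules",
    headers={"Authorization": f"Bot {BOT_TOKEN}"},
    json=rule,
    timeout=10,
)
resp.raise_for_status()
print("Created rule:", resp.json()["id"])
```

The point isn’t the code itself; it’s that the rule structure keeps experimentation low-stakes. A rule can be created with exemptions, scoped away from certain channels, or deleted outright, all without touching the rest of the server’s setup.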

Why Does This Matter Beyond Discord?

AutoMod isn’t perfect, but it’s an instructive example of thoughtful AI moderation. Whether you’re designing, evaluating, or choosing AI tools, these principles provide a simple, powerful guide. AutoMod illustrates clearly what good AI design looks like and, just as clearly, where it can go wrong.

If you care about building responsible, ethical, and effective AI tools, Discord’s AutoMod provides valuable lessons that go far beyond a single platform.

Want to dive deeper into how we evaluated AutoMod, or use our method to assess your own AI products? Connect with me on LinkedIn or drop me a message. I'd love to chat more about thoughtful AI design.
