
Hate Speech Detector

A single-agent CrewAI pipeline that analyzes text for hate speech or offensive language and returns a classification with a confidence level and reasoning.

Python 3.12 · CrewAI · OpenRouter

/SYSTEM_INTERNALS

ARCHITECTURE

SYSTEM_ARCHITECTURE // CREWAI_SINGLE_AGENT_PIPELINE
INPUT
field: text (textarea)
Raw text string to analyze for hate speech or offensive language
CREW
agents: [hate_speech_detector]
tasks: [hate_speech_detection_task]
verbose: false
crew.kickoff(inputs={"text": text})
AGENT
role: Hate Speech Detector
goal: Analyze text and identify hate speech
backstory: "You are a Hate Speech Detector for Twitter who understands details very well..."
LLM
model: gpt-4o
provider: OpenRouter
TASK
description: 5-step analysis pipeline
1. Read text carefully
2. Identify targeting language
3. Look for threats / dehumanizing language
4. Evaluate context
5. Make objective classification
expected_output: structured analysis
STRUCTURED OUTPUT
classification: hate speech | no hate speech
confidence: high | medium | low
targeted_group: race, gender, ...
key_phrases: [extracted phrases]
reasoning: 2-3 sentence explanation
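The structured output above can be modeled as a plain data contract. A minimal stdlib sketch, assuming the agent emits its fields as `key: value` lines (the field names come from the schema above; the `DetectionResult` class and `parse_result` helper are illustrative assumptions, not part of the project):

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    # Fields mirror the STRUCTURED OUTPUT schema above.
    classification: str   # "hate speech" | "no hate speech"
    confidence: str       # "high" | "medium" | "low"
    targeted_group: str   # e.g. "race", "gender", or "none"
    key_phrases: list     # extracted phrases
    reasoning: str        # 2-3 sentence explanation

def parse_result(raw: str) -> DetectionResult:
    """Parse 'key: value' lines from the agent's raw output (assumed format)."""
    fields = {}
    for line in raw.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip().lower()] = value.strip()
    return DetectionResult(
        classification=fields.get("classification", "no hate speech"),
        confidence=fields.get("confidence", "low"),
        targeted_group=fields.get("targeted_group", "none"),
        key_phrases=[p.strip() for p in fields.get("key_phrases", "").split(",") if p.strip()],
        reasoning=fields.get("reasoning", ""),
    )
```

Defaulting missing fields (rather than raising) keeps the UI renderable even when the LLM omits a line of the schema.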

/BACKEND_CONTRACT

PLUGGABLE RUNTIME

This demo serves a hardcoded fixture today. Set NEXT_PUBLIC_USE_REAL_BACKEND=true and the same UI will POST to /api/beginner/hate-speech-detector on a real CrewAI service, with no UI change required.
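In real-backend mode the contract is a single JSON POST. A stdlib sketch of the request the UI would issue (the path comes from the contract above and the `text` key from the crew's input; the host and everything else are assumptions):

```python
import json
import urllib.request

def build_request(text: str) -> urllib.request.Request:
    """Build the POST sent when NEXT_PUBLIC_USE_REAL_BACKEND=true.
    localhost:3000 is a placeholder host for illustration."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        "http://localhost:3000/api/beginner/hate-speech-detector",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```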

/TYPED_INTERFACE

runProject(HATE_SPEECH, {
  text: string,
}) -> markdown
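The same interface can be sketched in fixture mode: a function that takes the `text` input and returns the markdown string the UI renders. Only the input/output shape comes from the contract above; the fixture body and function names below are invented for illustration:

```python
# Hypothetical fixture-mode implementation of the runProject contract:
# input is the `text` field, output is a markdown string for the UI.
FIXTURE_MARKDOWN = """\
## Analysis

- **classification:** no hate speech
- **confidence:** high
- **targeted_group:** none
- **key_phrases:** []
- **reasoning:** The text contains no targeting, threats, or dehumanizing language.
"""

def run_project(text: str, use_real_backend: bool = False) -> str:
    """Return markdown. Fixture mode echoes a canned analysis;
    real-backend mode would POST to /api/beginner/hate-speech-detector."""
    if not text.strip():
        raise ValueError("text is required")
    if use_real_backend:
        raise NotImplementedError("wire to the CrewAI service here")
    return FIXTURE_MARKDOWN
```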