This is a simplified guide to an AI model called Flux-Content-Filter maintained by Lucataco. If you like this kind of analysis, you should join AImodels.fyi or follow us on Twitter.
Model overview
The flux-content-filter provides content filtering capabilities for AI-generated images and text prompts. This model serves as a critical safety layer for image generation workflows, detecting NSFW content, copyright concerns, and public figure references. Created by lucataco, it wraps the Black Forest Labs FLUX content filter and uses the Pixtral-12B vision-language model for intelligent content analysis. Unlike generation models such as flux-dev or flux.1-turbo-alpha, this model focuses on content moderation rather than creation.
Model inputs and outputs
The model accepts both images and text prompts for analysis, offering flexibility for different content filtering scenarios. It provides comprehensive filtering results with confidence scores and detailed flagging information across multiple categories.
Inputs
- image: Input image file to analyze for NSFW content, copyright concerns, or public figures
- prompt: Text prompt to examine for copyright references or public figure mentions
- nsfw_threshold: Adjustable sensitivity threshold for NSFW detection (0.0 to 1.0, default 0.85)
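As a rough sketch, the inputs above could be assembled into a payload for the Replicate Python client. Note that the helper name `build_filter_input` and the exact model identifier string are assumptions for illustration, not verified details of the live model.

```python
# Hypothetical sketch: assemble the inputs listed above into a payload
# for flux-content-filter. Parameter names follow this guide's input list.
def build_filter_input(image_url=None, prompt=None, nsfw_threshold=0.85):
    """Build the input payload; image and prompt are both optional."""
    payload = {"nsfw_threshold": nsfw_threshold}
    if image_url is not None:
        payload["image"] = image_url
    if prompt is not None:
        payload["prompt"] = prompt
    return payload

payload = build_filter_input(
    prompt="A portrait of a famous actor",
    nsfw_threshold=0.9,
)
# With the Replicate client, this would then be passed along the lines of:
# output = replicate.run("lucataco/flux-content-filter", input=payload)
print(payload)
```

Because both `image` and `prompt` are optional, the same payload builder covers image-only, text-only, and combined filtering scenarios.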
Outputs
- image_flagged: Boolean indicating if the image contains problematic content
- text_flagged: Boolean showing if the text prompt raises concerns
- overall_flagged: Combined flag indicating any content issues
- nsfw_detected: Specific NSFW detection result
- copyright_concerns: Flag for copyrighted material detection
- public_figures: Identification of public figure references
- details: Additional information including confidence scores and error messages
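In a generation pipeline, these output fields would typically gate whether a request proceeds. The helper below is an illustrative assumption about how a caller might consume the fields listed above; it is not part of the model itself.

```python
def should_block(result):
    """Decide whether to block a request based on the filter's output.

    `result` is a dict mirroring the output fields described above
    (image_flagged, text_flagged, overall_flagged, etc.).
    """
    # overall_flagged already combines the image and text checks;
    # fall back to the individual flags if it is missing.
    if result.get("overall_flagged") is not None:
        return bool(result["overall_flagged"])
    return bool(result.get("image_flagged")) or bool(result.get("text_flagged"))

sample = {
    "image_flagged": False,
    "text_flagged": True,
    "overall_flagged": True,
    "nsfw_detected": False,
    "copyright_concerns": False,
    "public_figures": True,
    "details": {"confidence": 0.92},
}
print(should_block(sample))  # → True (the text prompt was flagged)
```

A caller could also inspect the individual flags, such as `public_figures` or `copyright_concerns`, to give the user a more specific rejection message.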
Capabilities
The model performs multi-modal content analysis, examining both images and text prompts against its NSFW, copyright, and public-figure categories.