🚀 Introduction
With the rise of AI, automatically analyzing images for violent content is now possible! I built a Violence Detection App using React.js, the APILayer Violence Detection API, and Imgbb to help users identify potentially harmful images before sharing them online.
🔗 Live Demo
🔗 GitHub Repo
🎯 How It Works
1️⃣ Upload an image (or use Imgbb to generate a URL; see the upload sketch after these steps).
2️⃣ Analyze the image using the APILayer Violence Detection API.
3️⃣ Get a detailed risk assessment based on AI analysis.
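If you're curious how step 1 works under the hood, here's a minimal sketch of an Imgbb upload. The env variable name REACT_APP_IMGBB_KEY is my placeholder, and you should double-check the response shape against Imgbb's docs:

// Minimal sketch: upload a File to Imgbb and get back a public URL.
// REACT_APP_IMGBB_KEY is an assumed variable name; the response shape
// follows Imgbb's documented { data: { url } } format.
async function uploadToImgbb(file) {
  const formData = new FormData();
  formData.append("image", file);

  const response = await fetch(
    `https://api.imgbb.com/1/upload?key=${process.env.REACT_APP_IMGBB_KEY}`,
    { method: "POST", body: formData }
  );
  if (!response.ok) throw new Error(`Upload failed: ${response.status}`);

  const json = await response.json();
  return json.data.url; // public URL to pass to the detection API
}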
💡 Risk Levels:
✅ Safe (Very Unlikely or Unlikely to contain violence).
⚠️ Needs Review (Possible violence detected).
🚨 Flagged (Likely or Highly Likely to contain violence).
// Fetch the violence analysis result for an image URL from APILayer
fetch(`https://api.apilayer.com/violence_detection/url?url=${encodeURIComponent(imageUrl)}`, {
  method: "GET",
  headers: {
    apikey: process.env.REACT_APP_API_KEY, // set in .env, never hard-code it
  },
})
  .then((response) => {
    if (!response.ok) throw new Error(`Request failed: ${response.status}`);
    return response.json();
  })
  .then((data) => console.log(data))
  .catch((error) => console.error("Analysis failed:", error));
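Once the response comes back, you can translate the API's likelihood label into the three risk levels above. The field name and label strings in this sketch are assumptions, so verify them against the data you see in the console.log:

// Map an APILayer likelihood label to one of the app's risk levels.
// The label values below are assumptions inferred from the levels listed above.
function toRiskLevel(likelihood) {
  switch (likelihood) {
    case "VERY_UNLIKELY":
    case "UNLIKELY":
      return "Safe";
    case "POSSIBLE":
      return "Needs Review";
    case "LIKELY":
    case "VERY_LIKELY":
      return "Flagged";
    default:
      return "Unknown";
  }
}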
🎨 Cool Features
✅ Broken border design around analysis steps.
✅ Animated "Go Back" button for a smooth user experience.
✅ Easy-to-use image upload system (Imgbb integration).
✅ Professional UI/UX with real-time analysis results.
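As a rough sketch of the "Go Back" button, here's one way to build it with react-router-dom's useNavigate, assuming the app uses React Router; the .go-back class is a placeholder for whatever CSS animation you prefer:

import { useNavigate } from "react-router-dom";

// Sketch of a "Go Back" button; the .go-back class would carry the
// hover/transition animation in your stylesheet.
function GoBackButton() {
  const navigate = useNavigate();
  return (
    <button className="go-back" onClick={() => navigate(-1)}>
      ← Go Back
    </button>
  );
}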
🔥 Building This Yourself?
🔹 Fork the GitHub repo, add your APILayer API key, and deploy it!
🔹 Feel free to improve or add features! Contributions welcome.
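For reference, the API key goes in a .env file at the project root. Create React App only exposes variables prefixed with REACT_APP_, which is why the code reads process.env.REACT_APP_API_KEY:

# .env (keep this out of git)
REACT_APP_API_KEY=your_apilayer_key_here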
🔥 Final Thoughts
This project can be useful for social media platforms, parental control apps, and content moderation tools. AI-powered safety measures can help prevent exposure to harmful content online.
💬 What do you think? Drop a comment if you have ideas for improvement! 🚀