1. Introduction
Road accidents claim over a million lives every year worldwide, with countless more left injured. Timely accident detection can drastically reduce response times for emergency services, potentially saving lives and minimizing damage.
In this article, I’ll walk you through how I built an AI-powered accident detection system — entirely using Java for preprocessing, cloud integration, and prediction calls — combined with Google Cloud Vertex AI for training and deploying a custom object detection model.
This work is not just a technical project; it has real-world societal impact, aligning with public safety priorities and contributing toward innovations that support national interests in road safety and AI development.
Architecture Diagram
GitHub Link:
https://github.com/lalamanil/AccidentDetectionModelJavaVertexAI
2. Technology
Here’s the stack I used and why:
- Java — one language for the whole pipeline: video preprocessing, cloud integration, and prediction calls.
- OpenCV (Java API) — extracting still frames from dashcam videos.
- makesense.ai — drawing bounding boxes and exporting annotations.
- Google Cloud Storage — hosting the image frames and the annotation CSV.
- Google Cloud Vertex AI — training and deploying the custom object detection model.
3. Dataset Preparation
3.1 Collecting Dashcam Accident Videos
The first step was collecting dashcam accident videos from publicly available datasets and open-license online sources. These videos were downloaded locally for preprocessing.
3.2 Extracting Frames from Videos
The model works on still images, so I needed to convert video footage into image frames.
Using OpenCV’s Java API, I wrote a utility class, ExtractFramesAsImagesFromVideo.java, to:
- Load the input video file.
- Iterate frame-by-frame.
- Save each frame as a .jpg file.
Include the dependency below in pom.xml:
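A minimal pom.xml entry for OpenCV's Java bindings might look like the following. I'm assuming the openpnp-packaged OpenCV build here, and the version is illustrative, so check Maven Central for the release that matches your setup.

```xml
<!-- OpenCV Java bindings (openpnp packaging, assumed); version is illustrative -->
<dependency>
    <groupId>org.openpnp</groupId>
    <artifactId>opencv</artifactId>
    <version>4.7.0-0</version>
</dependency>
```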
Below is the Java code to extract frames (images) from a given video.
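Here is a minimal sketch of what such a frame extractor can look like; the file paths and the openpnp native-library loader call are assumptions, so adapt them to your project.

```java
import java.io.File;

import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.videoio.VideoCapture;

public class ExtractFramesAsImagesFromVideo {

    public static void main(String[] args) {
        // Load the native OpenCV library (helper provided by the openpnp build)
        nu.pattern.OpenCV.loadLocally();

        String videoPath = "accident_video.mp4"; // input dashcam video (illustrative path)
        String outputDir = "frames";             // destination folder for .jpg frames
        new File(outputDir).mkdirs();

        VideoCapture capture = new VideoCapture(videoPath);
        if (!capture.isOpened()) {
            throw new IllegalStateException("Could not open video: " + videoPath);
        }

        Mat frame = new Mat();
        int frameIndex = 0;
        // Iterate frame-by-frame and write each frame out as a JPEG
        while (capture.read(frame)) {
            String fileName = String.format("%s/frame_%05d.jpg", outputDir, frameIndex++);
            Imgcodecs.imwrite(fileName, frame);
        }
        capture.release();
        System.out.println("Extracted " + frameIndex + " frames to " + outputDir);
    }
}
```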
NOTE: This process transformed each accident video into hundreds of image frames ready for annotation.
3.3 Annotating Images
With the extracted frames, I used makesense.ai to annotate:
Label 1: accident (frames showing accidents).
Label 2: normal (frames without accidents).
Annotations were created by drawing bounding boxes around accident scenes and exporting the result as a CSV file.
Below are some sample screenshots showing how to draw bounding boxes around images using the web tool https://www.makesense.ai/.
Load the image frames from your local machine. Here I have loaded 4 images for the demo and selected Object Detection.
Create the labels to assign to objects and click Start Project.
Now draw bounding boxes around the accident scenes.
Once the bounding boxes have been drawn over the accident scenes, click Actions to download the annotated CSV file for all images.
Select Export Annotations.
Select the Single CSV file checkbox.
The CSV file will be downloaded to your local machine.
NOTE: The exported CSV uses pixel-based coordinates, while Vertex AI requires normalized coordinates.
4. Converting Annotations to Vertex AI Format
Problem:
makesense.ai output → (x_min, y_min, width, height) in pixels.
Vertex AI expects → normalized coordinates between 0 and 1.
Solution:
I wrote ConvertMakeSenseToVertexAIAutoMLCSV.java to:
- Read the makesense.ai CSV file.
- Normalize the coordinates:
double xmin = x / imageWidth;
double ymin = y / imageHeight;
double xmax = (x + width) / imageWidth;
double ymax = (y + height) / imageHeight;
- Save the converted file in a Vertex AI–compatible format (a sketch of the full converter follows below).
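Here is a minimal sketch of what that converter can look like. It assumes the makesense.ai CSV column order (label, x, y, width, height, image name, image width, image height) and uses a placeholder GCS image prefix, so verify both against your own export; the output row follows the two-vertex bounding-box form of the Vertex AI import CSV.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class ConvertMakeSenseToVertexAIAutoMLCSV {

    public static void main(String[] args) throws IOException {
        String inputCsv = "labels_makesense.csv";            // makesense.ai export (placeholder name)
        String outputCsv = "vertex_ai_annotations.csv";      // Vertex AI import file
        String gcsImagePrefix = "gs://your-bucket/images/";  // placeholder GCS image folder

        try (BufferedReader reader = new BufferedReader(new FileReader(inputCsv));
             PrintWriter writer = new PrintWriter(new FileWriter(outputCsv))) {

            String line;
            while ((line = reader.readLine()) != null) {
                if (line.isBlank()) {
                    continue;
                }
                // Assumed makesense.ai columns: label,x,y,width,height,imageName,imageWidth,imageHeight
                String[] cols = line.split(",");
                String label = cols[0];
                double x = Double.parseDouble(cols[1]);
                double y = Double.parseDouble(cols[2]);
                double width = Double.parseDouble(cols[3]);
                double height = Double.parseDouble(cols[4]);
                String imageName = cols[5];
                double imageWidth = Double.parseDouble(cols[6]);
                double imageHeight = Double.parseDouble(cols[7]);

                // Normalize pixel coordinates to the 0..1 range expected by Vertex AI
                double xmin = x / imageWidth;
                double ymin = y / imageHeight;
                double xmax = (x + width) / imageWidth;
                double ymax = (y + height) / imageHeight;

                // Vertex AI object detection import row: GCS_PATH,LABEL,X_MIN,Y_MIN,,,X_MAX,Y_MAX,,
                writer.printf("%s%s,%s,%.4f,%.4f,,,%.4f,%.4f,,%n",
                        gcsImagePrefix, imageName, label, xmin, ymin, xmax, ymax);
            }
        }
    }
}
```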
Below is the Vertex AI–compatible annotated CSV file generated by the above script.
5. Uploading Data to Google Cloud Storage
With images and annotations ready, I created a Google Cloud Storage bucket and uploaded the images and the normalized annotation CSV to their respective folders within the bucket:
images/ → all .jpg frames.
annotations/ → normalized annotation CSV.
I wrote GCSStorageUtility.java to upload both the images and the CSV to GCS via the Java client libraries (a sketch follows at the end of this section).
Prerequisite:
Create a service account in the Google Cloud console and grant it the roles below.
Add the dependency below to pom.xml:
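A minimal pom.xml entry for the Cloud Storage Java client might look like this; the version shown is illustrative, so check Maven Central for the latest release.

```xml
<!-- Google Cloud Storage Java client; version is illustrative -->
<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-storage</artifactId>
    <version>2.30.1</version>
</dependency>
```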
Images are uploaded to the GCS bucket under images/.
Make sure to prefix each image path with the GCS image folder path, as shown in the screenshot below, before uploading the annotated CSV to the GCS bucket.
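Here is a minimal sketch of what GCSStorageUtility.java can look like; the bucket name and local paths are placeholders, and it assumes Application Default Credentials are configured via the service account key created above.

```java
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class GCSStorageUtility {

    public static void main(String[] args) throws Exception {
        String bucketName = "accident-detection-bucket";             // placeholder bucket name
        Path localImagesDir = Paths.get("frames");                    // extracted .jpg frames
        Path annotationCsv = Paths.get("vertex_ai_annotations.csv"); // normalized annotations

        // Uses Application Default Credentials (e.g. GOOGLE_APPLICATION_CREDENTIALS
        // pointing at the service account key created earlier).
        Storage storage = StorageOptions.getDefaultInstance().getService();

        // Upload every extracted frame under the images/ prefix
        try (DirectoryStream<Path> images = Files.newDirectoryStream(localImagesDir, "*.jpg")) {
            for (Path image : images) {
                BlobId blobId = BlobId.of(bucketName, "images/" + image.getFileName());
                BlobInfo blobInfo = BlobInfo.newBuilder(blobId).setContentType("image/jpeg").build();
                storage.create(blobInfo, Files.readAllBytes(image));
            }
        }

        // Upload the normalized annotation CSV under the annotations/ prefix
        BlobId csvId = BlobId.of(bucketName, "annotations/" + annotationCsv.getFileName());
        BlobInfo csvInfo = BlobInfo.newBuilder(csvId).setContentType("text/csv").build();
        storage.create(csvInfo, Files.readAllBytes(annotationCsv));

        System.out.println("Upload to gs://" + bucketName + " complete.");
    }
}
```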
6. Importing Data into Vertex AI
Go to Vertex AI → Datasets in the Google Cloud Console and create the dataset.
Import the images stored in Google Cloud Storage via the annotation CSV file stored in the GCS bucket.
Vertex AI automatically linked each annotation with its image. I verified that the images and bounding boxes were displayed correctly.
7. Training the Model
I trained a custom object detection model in Vertex AI:
Selected the dataset.
Chose the training budget in node hours based on the dataset size.
Training completed successfully, producing an evaluated model with precision and recall metrics.
8. Deploying the Model
Once training finished, I deployed the model to an endpoint, configured with 1 active node for real-time predictions. Deployment generates an endpoint ID.
Used the Vertex AI Console to upload test images.
Received bounding boxes, labels, and confidence scores in real time.
9. Integrating Predictions into Java
Using PredictionServiceClient, I integrated the deployed model into a Java application:
- Passed the image bytes to the Vertex AI endpoint.
- Parsed the prediction results.
- Extracted the bounding box coordinates, confidence score, and detected label.
Add the dependency below to pom.xml:
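A minimal pom.xml entry for the Vertex AI Java client might look like this; the version shown is illustrative, so check Maven Central for the latest release.

```xml
<!-- Vertex AI (AI Platform) Java client; version is illustrative -->
<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-aiplatform</artifactId>
    <version>3.32.0</version>
</dependency>
```

And here is a minimal sketch of the prediction call, following the standard Vertex AI Java client pattern for AutoML image object detection; the class name, project ID, region, endpoint ID, and image path are placeholders.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;
import java.util.Collections;

import com.google.cloud.aiplatform.v1.EndpointName;
import com.google.cloud.aiplatform.v1.PredictResponse;
import com.google.cloud.aiplatform.v1.PredictionServiceClient;
import com.google.cloud.aiplatform.v1.PredictionServiceSettings;
import com.google.cloud.aiplatform.v1.schema.predict.instance.ImageObjectDetectionPredictionInstance;
import com.google.cloud.aiplatform.v1.schema.predict.params.ImageObjectDetectionPredictionParams;
import com.google.cloud.aiplatform.v1.schema.predict.prediction.ImageObjectDetectionPredictionResult;
import com.google.cloud.aiplatform.util.ValueConverter;
import com.google.protobuf.Value;

public class AccidentDetectionPredictor {

    public static void main(String[] args) throws Exception {
        String project = "your-gcp-project"; // placeholder project ID
        String location = "us-central1";     // placeholder region
        String endpointId = "1234567890";    // placeholder endpoint ID
        String imagePath = "test_frame.jpg"; // placeholder test image

        PredictionServiceSettings settings = PredictionServiceSettings.newBuilder()
                .setEndpoint(location + "-aiplatform.googleapis.com:443")
                .build();

        try (PredictionServiceClient client = PredictionServiceClient.create(settings)) {
            EndpointName endpointName = EndpointName.of(project, location, endpointId);

            // Pass the image bytes (base64-encoded) to the deployed endpoint
            byte[] imageBytes = Files.readAllBytes(Paths.get(imagePath));
            String encodedImage = Base64.getEncoder().encodeToString(imageBytes);

            ImageObjectDetectionPredictionInstance instance =
                    ImageObjectDetectionPredictionInstance.newBuilder()
                            .setContent(encodedImage)
                            .build();

            ImageObjectDetectionPredictionParams params =
                    ImageObjectDetectionPredictionParams.newBuilder()
                            .setConfidenceThreshold(0.5f)
                            .setMaxPredictions(5)
                            .build();

            PredictResponse response = client.predict(
                    endpointName,
                    Collections.singletonList(ValueConverter.toValue(instance)),
                    ValueConverter.toValue(params));

            // Parse the prediction results: detected label, confidence score, bounding box
            for (Value prediction : response.getPredictionsList()) {
                ImageObjectDetectionPredictionResult result =
                        (ImageObjectDetectionPredictionResult) ValueConverter.fromValue(
                                ImageObjectDetectionPredictionResult.newBuilder(), prediction);
                for (int i = 0; i < result.getDisplayNamesCount(); i++) {
                    System.out.printf("Label: %s, Confidence: %.3f, BBox: %s%n",
                            result.getDisplayNames(i),
                            result.getConfidences(i),
                            result.getBboxes(i));
                }
            }
        }
    }
}
```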
10. Results
The model successfully detected accident frames with high confidence.
Bounding boxes matched expected accident locations.
Below are example before/after images with predicted bounding boxes.
11. Societal Impact and NIW Alignment
Road accidents are a pressing public safety concern. AI-powered detection systems like this can:
Enable faster emergency response.
Support traffic monitoring in smart cities.
Help in forensic analysis of accident causes.
This project demonstrates technical expertise in AI and cloud systems while delivering societal benefits — a combination directly aligned with the National Interest Waiver’s goals of fostering innovation that benefits the United States.
12. Future Enhancements
Integrate real-time streaming from live dashcams.
Expand dataset diversity for improved robustness.
Use GCP Pub/Sub for automated accident alerts.
Deploy on edge devices for in-vehicle detection.
13. Conclusion
From raw dashcam footage to a deployed AI model, this project shows how Java and Google Cloud Vertex AI can be combined to deliver an intelligent accident detection system.
It’s a clear example of applying AI for good — advancing both technological capability and public safety goals.