If you benchmark JSON Schema validators in Java, one question matters more than it first appears:
does the validator work directly on your Java object graph, or does it first force you through a JSON tree?
SJF4J is built around the first path.
That is why the recent independent benchmark work from Creek Service is interesting.
SJF4J's story is not just "we are fast."
It is: we avoid work that many Java validation pipelines still force you to do.
An independent signal worth paying attention to
Creek Service maintains an independent comparison of JVM JSON Schema validators (https://www.creekservice.org/json-schema-validation-comparison/).
What makes this benchmark especially useful is that it looks at validation from two angles:
- pure validation against the JSON Schema test suite
- serde-style workflows, where data is serialized, validated, read back, validated again, and deserialized
The first tells you whether a validator is strong at validation itself.
The second tells you how it behaves in something much closer to a real application pipeline.
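The serde-style loop can be sketched in plain Java. The codec below is a deliberately trivial key=value string rather than real JSON, and every name here (SerdeLoop, serialize, validate) is illustrative rather than any benchmark's actual harness; the point is only that a tree-bound validator has to parse before it can even start checking:

```java
import java.util.HashMap;
import java.util.Map;

// Toy serde loop matching the benchmark shape: serialize, validate,
// read back, validate again, deserialize. Note that validating the
// wire form requires a parse first - that is the hidden extra pass.
public class SerdeLoop {
    static String serialize(Map<String, String> m) {
        return m.entrySet().stream()
            .map(e -> e.getKey() + "=" + e.getValue())
            .sorted()
            .reduce((a, b) -> a + ";" + b).orElse("");
    }

    static Map<String, String> deserialize(String s) {
        Map<String, String> m = new HashMap<>();
        for (String pair : s.split(";")) {
            String[] kv = pair.split("=", 2);
            m.put(kv[0], kv[1]);
        }
        return m;
    }

    static boolean validate(Map<String, String> m) {
        return m.containsKey("id") && m.containsKey("email");
    }

    public static void main(String[] args) {
        Map<String, String> pojo = Map.of("id", "1", "email", "ada@example.org");
        String wire = serialize(pojo);                 // 1. serialize
        boolean outOk = validate(deserialize(wire));   // 2. validate (needs a parse!)
        Map<String, String> back = deserialize(wire);  // 3. read back
        boolean inOk = validate(back);                 // 4. validate again
        System.out.println(outOk && inOk && back.equals(pojo)); // 5. back to app shape
    }
}
```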
Why this is good news for SJF4J
Creek Service's results reinforce two things SJF4J has been designed around from the start.
1) SJF4J is fast at pure validation
First: SJF4J is simply strong at pure validation.
That matters because the baseline job of a validator is still the same:
if you already have the schema and already have the data, validation should be fast and boring.
Its pure-validation results alone put it in serious company.
2) SJF4J gets more interesting as the workflow gets more real
This is the part that makes SJF4J more than just another validator with a benchmark chart.
Most validators are still easiest to understand as JSON validators.
SJF4J is better understood as a structural data processor for Java object graphs that also includes JSON Schema validation.
It does not need your data converted into a JSON tree first. Instead, it can validate directly over:
- Map/List structures
- POJOs
- other compatible object graphs
So when your application is already working with Java objects, SJF4J does not need an extra parse/build pass just to begin validation.
In other words:
less conversion, less representation switching, less wasted work.
And that is exactly the kind of advantage that shows up in serde-style benchmarks.
Why this matters beyond benchmark screenshots
The real question is not just:
who wins a benchmark?
It is also:
who forces the least unnecessary work into my application?
In many Java systems, validation is not an isolated toy benchmark. It sits inside a larger flow:
- HTTP request handling
- event ingestion
- Kafka or queue consumers
- persistence pipelines
- DTO ↔ domain transformations
- configuration or document processing
If your validation layer keeps forcing this shape:
POJO -> JSON/tree -> validation -> POJO/domain
then the validator is only part of the cost story.
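To make "part of the cost story" concrete, here is a minimal plain-Java contrast. Neither method is SJF4J API and both names are made up for illustration; they answer the same question, but one materializes an intermediate node list first, which is exactly the extra pass the POJO -> tree -> validation shape forces:

```java
import java.util.List;
import java.util.Map;

// Illustration only: the "tree" path allocates an intermediate node
// list before checking anything, mirroring POJO -> tree -> validation
// pipelines. The "direct" path checks the graph it was handed.
public class PipelineShapes {
    static boolean validateViaTree(Map<String, Object> pojo) {
        List<Map.Entry<String, Object>> tree = pojo.entrySet().stream()
            .map(e -> Map.entry(e.getKey(), e.getValue()))
            .toList(); // extra representation, extra allocation
        return tree.stream().anyMatch(e -> e.getKey().equals("id"));
    }

    static boolean validateDirect(Map<String, Object> pojo) {
        return pojo.containsKey("id"); // no intermediate representation
    }

    public static void main(String[] args) {
        Map<String, Object> user = Map.of("id", 7, "email", "ada@example.org");
        // Same answer either way; only the amount of work differs.
        System.out.println(validateViaTree(user) == validateDirect(user));
    }
}
```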
SJF4J reduces that friction by keeping validation inside the same structural processing model used for:
- parsing
- navigating
- patching
- mapping
- validating
That gives you a simpler developer model and, very often, a faster runtime path.
A quick example
With SJF4J, schema validation can sit directly on top of a Java model:
@ValidJsonSchema("""
    {
      "type": "object",
      "required": ["id", "email"],
      "properties": {
        "id": { "type": "integer" },
        "email": { "type": "string", "format": "email" }
      }
    }
    """)
public class UserDto {
    public int id;
    public String email;
}
SchemaValidator validator = new SchemaValidator();
ValidationResult result = validator.validate(userDto);
Or you can compile a reusable schema plan explicitly and validate native Java data directly:
JsonSchema schema = JsonSchema.fromJson("""
    {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "age": { "type": "integer", "minimum": 0 }
      },
      "required": ["name"]
    }
    """);

SchemaPlan plan = schema.createPlan();

Map<String, Object> data = Map.of(
    "name", "Ada",
    "age", 18
);

boolean ok = plan.isValid(data);
The key point is not just the API.
The key point is this:
validation happens over the object graph you already have.
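The SchemaPlan calls above are from the SJF4J example; to make the underlying idea concrete, here is what enforcing that same schema directly on a Map looks like, hand-rolled in plain Java (the class and method names are illustrative, not SJF4J API):

```java
import java.util.Map;

// Hand-rolled equivalent of the schema above, checked directly on the
// Map: "name" must be present and a String; "age", if present, must be
// an integer >= 0. No JSON tree is ever built.
public class DirectValidation {
    static boolean isValid(Map<String, Object> data) {
        Object name = data.get("name");
        if (!(name instanceof String)) return false;   // required + type
        Object age = data.get("age");
        if (age == null) return true;                  // "age" is optional
        return age instanceof Integer i && i >= 0;     // type + minimum
    }

    public static void main(String[] args) {
        System.out.println(isValid(Map.of("name", "Ada", "age", 18)));
        System.out.println(isValid(Map.of("age", 18)));                // no name
        System.out.println(isValid(Map.of("name", "Ada", "age", -1))); // below minimum
    }
}
```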
SJF4J is not just a validator benchmark story
Another reason this matters: SJF4J is not a one-feature library.
It provides one structural model and one API family across:
- JSON / YAML / Properties parsing
- JSON Path navigation
- JSON Patch / Merge Patch processing
- JSON Schema validation
- object graph mapping
So if your application needs more than "just validate a schema", SJF4J can simplify more of the pipeline instead of becoming one more isolated dependency with one narrow responsibility.
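As one example of what Merge Patch processing over the same structural model can mean, here is a minimal RFC 7386 merge over native Maps in plain Java. SJF4J's actual patch API is not shown in this post, so treat this purely as an illustration of the semantics:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal RFC 7386 JSON Merge Patch over native Maps (illustration
// only). A null value removes a key; a nested map merges recursively;
// anything else replaces the target value outright.
public class MergePatch {
    @SuppressWarnings("unchecked")
    static Object merge(Object target, Object patch) {
        if (!(patch instanceof Map)) return patch;
        Map<String, Object> result = target instanceof Map
            ? new HashMap<>((Map<String, Object>) target)
            : new HashMap<>();
        ((Map<String, Object>) patch).forEach((k, v) -> {
            if (v == null) result.remove(k);
            else result.put(k, merge(result.get(k), v));
        });
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<>(Map.of("name", "Ada", "age", 18));
        Map<String, Object> patch = new HashMap<>();
        patch.put("age", 19);
        patch.put("name", null); // null removes the key per RFC 7386
        System.out.println(merge(doc, patch));
    }
}
```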
Where to look next
If you want to explore SJF4J further:
- GitHub: sjf4j-projects/sjf4j
- Docs: https://sjf4j.org
- Validation guide: https://sjf4j.org/docs/validating
- Benchmarks: https://sjf4j.org/docs/benchmarks
And if you want the benchmark context directly:
- Creek Service comparison: https://www.creekservice.org/json-schema-validation-comparison/
- Creek Service performance page: https://www.creekservice.org/json-schema-validation-comparison/performance
Final takeaway
Creek Service's independent benchmark is a good reminder that JSON Schema performance is not only about raw validation speed.
It is also about how much extra work your validation architecture forces you to do.
SJF4J performs well in pure validation.
But the more important point is this:
when your data is already in Java object graphs, validating those object graphs directly is often the better design.
That is the design SJF4J is built around.
And that may be exactly why it feels so fast in real Java workloads.

