Today, almost every popular programming language has a GraphQL implementation. The JS community has, of course, developed this area the most. And even though Java is in more of a catch-up position, things are not bleak at all: the available solutions can be used safely and are mostly time-tested and production-ready.
GraphQL-java
The first library to pay attention to is graphql-java. It is the only genuine GraphQL engine for Java, so whichever framework you employ, this library will still be used under the hood. The engine already covers data fetching, working with context, error handling, monitoring, query restriction, field visibility, and even a dataloader. As a result, you can either use it as is or move on to the frameworks built on top of it and see which works best for you. Graphql-java is open source, maintained by independent developers, and the most recent commit was only a few days ago; overall, the engine is actively developed.
However, despite all its advantages, think carefully about whether it is worth using directly. We don't. The library is low-level and flexible, and therefore verbose; the frameworks help to cope with that. The engine certainly can be used directly, it is just less convenient.
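For a sense of what "direct" use looks like, here is a minimal hello-world sketch in the spirit of the graphql-java documentation (the schema and field names are purely illustrative):
import graphql.ExecutionResult;
import graphql.GraphQL;
import graphql.schema.GraphQLSchema;
import graphql.schema.StaticDataFetcher;
import graphql.schema.idl.RuntimeWiring;
import graphql.schema.idl.SchemaGenerator;
import graphql.schema.idl.SchemaParser;
import graphql.schema.idl.TypeDefinitionRegistry;
public class HelloGraphQL {
    public static void main(String[] args) {
        String sdl = "type Query { hello: String }";
        // Parse the SDL and wire the "hello" field to a data fetcher by hand
        TypeDefinitionRegistry registry = new SchemaParser().parse(sdl);
        RuntimeWiring wiring = RuntimeWiring.newRuntimeWiring()
                .type("Query", builder -> builder.dataFetcher("hello", new StaticDataFetcher("world")))
                .build();
        GraphQLSchema schema = new SchemaGenerator().makeExecutableSchema(registry, wiring);
        GraphQL graphQL = GraphQL.newGraphQL(schema).build();
        ExecutionResult result = graphQL.execute("{ hello }");
        System.out.println(result.getData().toString()); // {hello=world}
    }
}
As you can see, every field has to be wired manually, which is exactly the verbosity the frameworks below try to hide.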
Besides this library, I've found three other frameworks that deserve consideration. Everything else is mostly very small libraries.
Schema-first vs Code-first
But first, let’s look at two key approaches to designing a graphql API on a backend. There are two opposing camps — schema-first and code-first solutions.
In the classic schema-first approach, we first describe the GraphQL schema and then use it in the code to implement the models and data fetchers. The advantage is that different people and even different departments can design and implement the schema: for example, analysts design it and developers implement it. It can also be convenient to write the schema, hand it to API consumers right away, and develop the backend in parallel. The disadvantage is that you have to maintain both the schema and the code: it takes a little more time when developing the API, and now there are two sources that must never conflict and must stay fully synchronized, which is an extra link that can break.
With the code-first approach, we write only the code, and the framework generates the schema itself based on annotations. Here there is only one source of truth, but you cannot produce a GraphQL schema without writing code.
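To make the difference concrete, here is a sketch of what code-first looks like with annotations (in the style of SPQR, which we will use later); the SDL in the comment is roughly what a generator would derive from it, and the Greeting example is made up for illustration:
import io.leangen.graphql.annotations.GraphQLQuery;
// Code-first: only this class exists in the project.
// A schema-first project would instead start from SDL like:
//
//   type Query {
//     greeting: String
//   }
//
// and then implement a data fetcher for it by hand.
public class GreetingResolver {
    // The framework turns this annotated method into the "greeting" field of Query
    @GraphQLQuery(name = "greeting")
    public String getGreeting() {
        return "Hello!";
    }
}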
Domain Graph Service
The first framework we will look at is DGS (Domain Graph Service). If you attended Paul Bakker's talk at JPoint 2021, you already know what I'm talking about.
It was originally created at Netflix in 2019 and open-sourced in 2020. It is a full-fledged framework: it helps serve GraphQL code, write unit tests, provides its own error handling, code generation for data fetchers based on the schema, and so on. It is a schema-first solution. And it is all production-ready; Netflix uses it extensively.
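To give a feel for it, a DGS data fetcher looks roughly like this; a minimal sketch, assuming the schema file already declares a candidates query (the type and names are mine, not Netflix's):
import com.netflix.graphql.dgs.DgsComponent;
import com.netflix.graphql.dgs.DgsQuery;
import java.util.List;
@DgsComponent
public class CandidatesDataFetcher {
    // Schema-first: the "candidates" field must already be declared in the .graphqls file;
    // DGS maps this method onto it by the method name.
    @DgsQuery
    public List<Candidate> candidates() {
        return List.of(new Candidate(1, "John", "Doe"));
    }
    // Illustrative DTO matching the Candidate type from the schema
    public record Candidate(int id, String firstName, String lastName) {}
}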
Still, we chose a different solution.
First, DGS is schema-first, and we wanted the code-first approach: it is easier to get started with, a little faster to develop, and there is no need for us to design a schema separately from the code.
Second, DGS uses Spring Boot. And that's fine! But we do not use Spring Boot inside the company: we have our own framework built on plain spring-core. That does not mean DGS cannot be brought up without Boot; we managed to start it, having first asked Paul whether running it without Boot is reasonable at all or whether the authors advise against it (it is reasonable). But to do so we had to dig into the framework's own code and find and declare by hand about a dozen undocumented and not always obvious beans, which may break in new versions of DGS. In short, not free to maintain.
And third, even though it is a full-fledged framework, you will still have to extend its support for unit tests, error handling, monitoring, and so on, simply because your project grows and the out-of-the-box solutions will not be enough.
Still, it is a very good framework, so we marked it with an asterisk for ourselves: if anything changes, we will come back to it.
DGS:
- Schema-first
- Open source from Netflix
- Built on Spring Boot
- Full-fledged framework
Java SPQR
The next library we will analyze is Java SPQR.
It is an open-source library proven over the years. It is also the only code-first solution, and, notably, not a full-fledged framework, which is quite nice. All this library does is implement the code-first approach and help a little with serving GraphQL code. That suited us completely, and we settled on it.
But despite our choice, it is hard to recommend it at the moment, because it has been abandoned: the last commit was more than a year ago, issues go unanswered, and there is no support.
Why does this matter? As an example, GraphQL supports inheritance, and in 2020 the GraphQL spec, and then graphql-java, gained the ability to work with multiple interface inheritance. It is now 2022, yet in SPQR you still cannot use this feature.
That said, the maintainer recently replied that there are plans to resume work on the project, which is encouraging.
Spring GraphQL
The last framework I want to talk about is Spring GraphQL.
It is quite fresh, released in July 2021; Josh Long talked about it at Joker in the fall of 2021. It also takes the schema-first approach, integrates with Spring (no surprise there), and largely mirrors DGS: it has its own error handlers, support for writing unit tests, and more convenient work with data fetchers. A minimal controller sketch is shown after the summary below.
Spring GraphQL:
- Schema-first
- Spring Integration
- Full-fledged framework
- Released recently
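For illustration only, not how we run it ourselves: an annotated controller in Spring GraphQL looks roughly like this, assuming the schema already declares a candidates query (the names are mine):
import org.springframework.graphql.data.method.annotation.QueryMapping;
import org.springframework.stereotype.Controller;
import java.util.List;
@Controller
public class CandidateGraphQLController {
    // Schema-first: "candidates" must be declared in the schema file;
    // Spring GraphQL maps this handler method onto that query field.
    @QueryMapping
    public List<Candidate> candidates() {
        return List.of(new Candidate(1, "John", "Doe"));
    }
    // Illustrative DTO matching the Candidate type from the schema
    public record Candidate(int id, String firstName, String lastName) {}
}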
So what does that look like?
Now let's build a simple GraphQL server. As the standard stack we will use Java and Spring, and for GraphQL we will use SPQR, which runs on the graphql-java engine.
GraphQL bean
First, let's create the main GraphQL bean that will execute all queries.
@Configuration
public class GraphQLConfig {

    private final CandidateResolver candidateResolver;
    private final ResumeResolver resumeResolver;

    public GraphQLConfig(CandidateResolver candidateResolver,
                         ResumeResolver resumeResolver) {
        this.candidateResolver = candidateResolver;
        this.resumeResolver = resumeResolver;
    }

    @Bean
    public GraphQLSchema getGraphQLSchema() {
        return new GraphQLSchemaGenerator()
                .withBasePackages("com.example.graphql.demo.models")
                .withOperationsFromSingletons(candidateResolver, resumeResolver)
                .generate();
    }

    @Bean
    public GraphQL getGraphQL(GraphQLSchema graphQLSchema) {
        return GraphQL.newGraphQL(graphQLSchema)
                .queryExecutionStrategy(new AsyncExecutionStrategy())
                .instrumentation(new CustomTracingInstrumentation())
                .build();
    }
}
To execute queries, it needs to know the GraphQLSchema. Since SPQR is a code-first solution, we use a schema generator that builds the schema from the models and resolvers found under the root package.
Next, we define the query execution strategy. By default every node of the graph is executed asynchronously; AsyncExecutionStrategy is responsible for this, and it can be replaced if needed.
After that we register the instrumentations (more on them separately) and build the bean.
Endpoint
We need to receive the request from somewhere, so let's create a regular POST method that takes a query. It will be the same for all GraphQL requests, unlike REST, where we would create a separate method for each request.
Then we pass the request to the GraphQL bean for execution.
@RestController
public class DemoController {

    private final GraphQL graphQL;

    @Autowired
    DemoController(GraphQL graphQL) {
        this.graphQL = graphQL;
    }

    @PostMapping(path = "graphql",
            consumes = MediaType.APPLICATION_JSON_VALUE,
            produces = MediaType.APPLICATION_JSON_VALUE)
    public ExecutionResult graphql(@RequestBody EntryPoint entryPoint) {
        ExecutionInput executionInput = ExecutionInput.newExecutionInput()
                .query(entryPoint.query)
                .build();
        return graphQL.execute(executionInput);
    }

    public static class EntryPoint {
        public String query;
    }
}
Entry points
We've described a schema and we know how to accept queries, but where do we describe the entry points into the graph? In GraphQL, data fetchers (or resolvers) are responsible for this: beans in which we describe the nodes of the graph.
@GraphQLQuery(name = "candidates")
public CompletableFuture<List<Candidate>> getCandidates() {
    return CompletableFuture.supplyAsync(candidateService::getCandidates);
}
In this case, we created an entry point candidates, which returns a list of Candidate models.
public class Candidate {
    private Integer id;
    private String firstName;
    private String lastName;
    private String email;
    private String phone;
    // getters and setters are omitted
}
It is precisely from the models used in the resolvers that SPQR builds the schema.
Of course, we can and should have as many such nodes as possible, intertwining with each other to form a graph. So let's create another node, resumes, and link it to candidates using @GraphQLContext.
@GraphQLQuery(name = "resumes")
public CompletableFuture<List<Resume>> getResumes(@GraphQLContext Candidate candidate) {
    return CompletableFuture.supplyAsync(() -> resumeService.getResumes(candidate));
}
public class Resume {
    private Integer id;
    private String lastExperience;
    private Salary salary;
    // getters and setters are omitted
}

public class Salary {
    private String currency;
    private Integer amount;
    // getters and setters are omitted
}
It works like this: only if you request resumes from candidates will this resolver be invoked.
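For example, a query like this touches only the candidates resolver; getResumes is never called because resumes is not requested:
{
  candidates {
    id,
    firstName
  }
}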
Instrumentation
Among other things, we will certainly want to monitor query execution: how long each resolver takes, how long the full request takes, what errors we catch. For this, when registering the GraphQL bean, you can specify Instrumentations, both default and custom.
Technically, an instrumentation is a class that implements the Instrumentation interface (in our case it extends SimpleInstrumentation, a plain stub so you don't have to implement every method). It defines methods that are called at particular stages of request execution: when the request has just started, when a resolver is called, when it has finished executing, and so on.
CustomTracingInstrumentation
public class CustomTracingInstrumentation extends SimpleInstrumentation {

    Logger logger = LoggerFactory.getLogger(CustomTracingInstrumentation.class);

    static class TracingState implements InstrumentationState {
        long startTime;
    }

    // Create a tracing context for a specific request
    @Override
    public InstrumentationState createState() {
        return new TracingState();
    }

    // Executed before each request. Initialize the tracing context for measuring execution time
    @Override
    public InstrumentationContext<ExecutionResult> beginExecution(InstrumentationExecutionParameters parameters) {
        TracingState tracingState = parameters.getInstrumentationState();
        tracingState.startTime = System.currentTimeMillis();
        return super.beginExecution(parameters);
    }

    // Executed when the request completes. totalTime measures the execution time of the whole request
    @Override
    public CompletableFuture<ExecutionResult> instrumentExecutionResult(ExecutionResult executionResult, InstrumentationExecutionParameters parameters) {
        TracingState tracingState = parameters.getInstrumentationState();
        long totalTime = System.currentTimeMillis() - tracingState.startTime;
        logger.info("Total execution time: {} ms", totalTime);
        return super.instrumentExecutionResult(executionResult, parameters);
    }

    // Executed on every DataFetcher/Resolver call. We use it to measure the execution time of each resolver
    @Override
    public DataFetcher<?> instrumentDataFetcher(DataFetcher<?> dataFetcher, InstrumentationFieldFetchParameters parameters) {
        // Since any field in the graph is potentially a resolver, keep only those that actually do something
        if (parameters.isTrivialDataFetcher()) {
            return dataFetcher;
        }
        return environment -> {
            long startTime = System.currentTimeMillis();
            Object result = dataFetcher.get(environment);
            // Since in our case all nodes are executed asynchronously, measure the time only for them
            if (result instanceof CompletableFuture) {
                ((CompletableFuture<?>) result).whenComplete((r, ex) -> {
                    long totalTime = System.currentTimeMillis() - startTime;
                    logger.info("Resolver {} took {} ms", findResolverTag(parameters), totalTime);
                });
            }
            return result;
        };
    }

    // Somewhat convoluted logic for getting the resolver name and its parent (to better see where the node was called from)
    private String findResolverTag(InstrumentationFieldFetchParameters parameters) {
        GraphQLOutputType type = parameters.getExecutionStepInfo().getParent().getType();
        GraphQLObjectType parent;
        if (type instanceof GraphQLNonNull) {
            parent = (GraphQLObjectType) ((GraphQLNonNull) type).getWrappedType();
        } else {
            parent = (GraphQLObjectType) type;
        }
        return parent.getName() + "." + parameters.getExecutionStepInfo().getPath().getSegmentName();
    }
}
In fact, instrumentation is a powerful mechanism that can be used for more than monitoring. For example, MaxQueryDepthInstrumentation, already implemented in graphql-java, measures the maximum depth of a query and cancels the query if it is exceeded, and with MaxQueryComplexityInstrumentation you can assign weights to specific nodes and control query complexity (there are nuances with it, though, which we will cover in a separate article).
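If we wanted to plug those limits in alongside our tracing, one way is to combine several instrumentations with graphql-java's ChainedInstrumentation when building the bean; a sketch only, and the depth and complexity limits here are arbitrary:
@Bean
public GraphQL getGraphQL(GraphQLSchema graphQLSchema) {
    // Combine several instrumentations: our tracing plus the built-in query limits
    Instrumentation instrumentation = new ChainedInstrumentation(List.of(
            new CustomTracingInstrumentation(),
            new MaxQueryDepthInstrumentation(10),        // reject queries deeper than 10 levels
            new MaxQueryComplexityInstrumentation(100)   // reject queries whose total weight exceeds 100
    ));
    return GraphQL.newGraphQL(graphQLSchema)
            .queryExecutionStrategy(new AsyncExecutionStrategy())
            .instrumentation(instrumentation)
            .build();
}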
This is enough to launch our service.
The request itself
{
  candidates {
    id,
    firstName,
    lastName,
    phone,
    email,
    resumes {
      id,
      lastExperience,
      salary {
        currency,
        amount
      }
    }
  }
}
The response comes back in the standard GraphQL JSON format.
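For our example models it would look roughly like this (the values are illustrative):
{
  "data": {
    "candidates": [
      {
        "id": 1,
        "firstName": "John",
        "lastName": "Doe",
        "phone": "+1-555-0100",
        "email": "john.doe@example.com",
        "resumes": [
          {
            "id": 10,
            "lastExperience": "Java developer",
            "salary": {
              "currency": "USD",
              "amount": 100000
            }
          }
        ]
      }
    ]
  }
}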
Conclusion
That is how things stand with GraphQL in the Java world. We looked at the different frameworks, weighed their advantages and disadvantages, and then implemented a simple GraphQL service in Java. I hope you found it helpful.