I like working for startups. The main reason is that you create new things, things that haven't been created before. The other reason is that you have a lot of freedom in your technical choices: you can pick the tools that are most appropriate for the situation (which is not always the case in big companies).
However, working for startups brings many challenges. The main one is that you want to move fast. In software, that often means reducing code quality. That will slow you down over time, but... you have to move fast.
Another challenge is that you don't know your domain, or the problem to solve, very well. You may set out to solve one problem, then discover that your users want to solve another problem with your product. Therefore, you might have to redesign things.
How can Amazon Q Developer help?
Amazon Q Developer is a GenAI coding assistant, provided by AWS, which generates code from a natural language description. It can also be used for code explanation, refactoring, optimization, security reviews, and more.
As an architect, I find it is also a very good "architecting" assistant. It can generate code from a diagram, but also a diagram from the code, which for me is a great new capability.
As of today, my intuition tells me it's a very powerful tool for increasing code quality and easing software design evolution.
That's what I try to demonstrate in this article.
The context
I recently worked for a very small startup, which needed to analyse videos, and especially the audio extracted from those videos.
This doesn't sound difficult today, but it's not trivial to implement, especially when you start from scratch and the application will be used by real users one week later!
By chance, the front end was already developed with a no-code solution; however, a complete backend had to be built on AWS.
The Architecture starting point
When you start a project, people often come with a design of the solution, and it's tempting to believe that if you follow that design, everything will work well.
That never happens in software!
Here is a simplified representation of the AWS solution that was proposed to me before I started the project.
It does not look very complicated, but I did not want to fall into the trap of designing too early, so I decided to start with a very simple AWS backend using serverless technologies, a lambda-lith, and the AWS CDK.
The Foundational Architectural Decisions
Why serverless?
AWS pioneered serverless technologies, and it's not only Lambda functions. For instance, Amazon S3 (the AWS object storage service) is considered serverless, because you do not manage any server.
Not only do you manage fewer servers, you also manage less availability, less security, less scalability, less networking... all of these become AWS's problems.
That makes it an ideal choice for startups which want to move fast.
Why a lambda-lith?
Building a distributed, event-based architecture can be hard, and it is also difficult to maintain, debug, and evolve.
I like the idea of "Monolith First", popularized by Martin Fowler, and I wanted to apply this to my serverless architecture, because I think it brings the same benefits.
So, I started by writing the algorithm in a single Lambda function, instead of multiple Lambdas.
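To make the idea concrete, here is a minimal sketch of what a lambda-lith looks like: the whole workflow lives in one handler that calls plain functions, instead of one Lambda per step. The function names and stubbed steps are my own illustration, not the project's real code.

```python
# Hypothetical lambda-lith: one handler, each workflow step is a plain
# function call. The steps are stubbed for illustration.

def extract_audio(video_key: str) -> str:
    # In a real project this would call a media service; here it is stubbed.
    return video_key.replace(".mp4", ".mp3")

def transcribe(audio_key: str) -> str:
    # Stub for a call to a transcription provider.
    return f"transcript-of-{audio_key}"

def analyse(transcript: str) -> dict:
    # Stub for the analysis step.
    return {"words": len(transcript.split("-")), "transcript": transcript}

def handler(event: dict, context=None) -> dict:
    """Single entry point: each step is just a function call, so the
    workflow is easy to trace and debug, and can be split into
    separate Lambdas later if distribution becomes necessary."""
    audio_key = extract_audio(event["video_key"])
    transcript = transcribe(audio_key)
    return analyse(transcript)
```

Splitting this into distributed Lambdas later is mostly a matter of moving each function behind its own handler, which is exactly the "Monolith First" bet.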
Why the AWS CDK?
The CDK (Cloud Development Kit) is a library that helps you build infrastructure as code with imperative, Turing-complete languages like Python or Java, instead of declarative languages like YAML or JSON.
That opens up more possibilities, and you can build things much faster and more safely, because CDK constructs encapsulate multiple low-level components and best practices.
Among the many CDK benefits: I no longer have the feeling that I spend more time wrestling with IAM policies to apply the least-privilege principle than building my architecture.
For example, when I write
...
s3_bucket.grant_read(my_lambda)
...
This simple line of code handles all the policy generation, with just enough privileges for the function to read the bucket. That saves a lot of time and brings peace of mind.
The quick and dirty phase
The main architecture constraint was the performance of the analysis. Therefore, I first needed to create a workflow that was realistic enough to prove the choices were right in terms of performance. I didn't want to discover after one week of work that the wrong choices had been made!
That's where the lambda-lith is interesting: you don't focus on distribution first, you just concentrate on, and measure, what matters to you. I know that distribution can improve performance, but do we need it now? I think the old adage "premature optimization is the root of all evil" is still relevant in the cloud!
This is where I started to use Q: I asked it to help me create my infrastructure with CDK v2.
@workspace generate CDK v2 template from the draw.io diagram
I chose to use a draw.io diagram for that, because it is less text to write and give to Amazon Q. More on that in my previous article: https://dev.to/welcloud-io/from-diagram-to-code-with-amazon-q-developer-2da4
Since I had a fair amount of algorithm to develop inside the lambda-lith, I asked Amazon Q to help me.
Then I tested the entire workflow and proved we were under the expected duration.
Rewrite almost all code!
When the quick and dirty phase was finished and I could prove that performance was not an issue with this architecture, I decided to rewrite most of my Lambda code!
Remember, I had produced quick and dirty code, and leaving that code as-is was risky. After just a few days, it would have become more and more difficult to maintain or change, because it's hard to code fast and keep quality high (at least it is for me).
But actually... I didn't rewrite the code myself... I asked Q to rewrite it!
I did it in 3 steps:
- Generate the sequence diagram from existing code
- Generate new code from the diagram
- Generate unit tests from code.
Generate the sequence diagram from existing code
First, I asked Q to create a sequence diagram from my quick & dirty code.
It was not perfect; I had to change a few things in the sequence, but not that many.
The diagram can primarily be used as documentation. But it can also be used to generate new things... like code and unit tests.
So, let's do it!
Generate new code from sequence diagram
It's not very complicated to do, you just ask Q:
Generate code from that mermaid diagram
[mermaid diagram as code]
The benefit for me is that Amazon Q is a better developer than I am, and I get better-written, better-organized code very quickly.
Generate unit tests from code
Unit tests are a simple concept: automated tests that you execute to make sure your code still works after you change it.
This is often something we wish we had. However, behind the apparent simplicity, it's not easy to write them, or to master techniques like TDD (Test-Driven Development). Therefore, these tests are often overlooked.
But with GenAI, you have no excuse. Generating them is easy with Q:
write unit tests for that code
And Q generated a lot of tests (4 files in total, ~250 lines of tests).
Then I executed the generated tests. Not every test passed right away, but it took less than half an hour to fix them and make them all pass!
Some tests are very interesting. For example, I had to poll a third-party provider, and Q generated a test for the timeout case: after a certain number of polls, you stop calling the provider and raise an error.
That's something I wouldn't have thought of, or at least it would have taken me more time to write. And it's important, because without a timeout mechanism you will consume lots of resources, probably for nothing.
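Here is a sketch of what that polling logic and its timeout test might look like. The function names, the `MAX_POLLS` constant, and the statuses are assumptions for illustration, not the tests Q actually generated for the project.

```python
# Illustrative polling helper with a timeout guard.
MAX_POLLS = 5

class PollingTimeoutError(Exception):
    """Raised when the provider never returns a result in time."""

def wait_for_result(fetch_status, max_polls: int = MAX_POLLS):
    """Poll a third-party provider until it reports 'done';
    give up after max_polls attempts instead of looping forever."""
    for _ in range(max_polls):
        if fetch_status() == "done":
            return "done"
    raise PollingTimeoutError(f"no result after {max_polls} polls")

# The kind of test Q generated: simulate a provider that never
# finishes, and check that we stop polling and raise.
def test_timeout_raises():
    calls = {"n": 0}
    def never_done():
        calls["n"] += 1
        return "in_progress"
    try:
        wait_for_result(never_done, max_polls=3)
        assert False, "expected a timeout"
    except PollingTimeoutError:
        assert calls["n"] == 3  # stopped after exactly 3 polls
```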
Another interesting test is the high-level test (the Lambda handler test), which executes my entire workflow to detect general errors.
I also manually completed this test suite with a simple end-to-end integration test, where I could put an audio file in my input bucket and verify the analysis in the output bucket, again with Amazon Q's help.
So, now, my architecture looks like this:
The feedback loop & code refactoring
With test harnesses, you can put in place a very short feedback loop: every change that breaks something in the workflow is detected immediately, and you can roll it back right away.
What's the advantage?
First, in the software development process, when you discover that something is broken a day, or even an hour, after the mistake was made, it's often much harder to fix than when it has just been introduced. Unit tests are a way to move faster.
Second, without test harnesses, even simple things like renaming objects in your code (which happens quite often when you don't know your domain well) can become very difficult, because you fear the application will break. So you don't take the risk, and you leave inconsistent names that make your production code harder to understand and maintain. Unit tests are a way to redesign things with more confidence.
Setting up your environment for a short feedback loop
One trick I use is that each time I save a file, my whole test suite is executed. With pytest, I just had to install this plugin:

pip install pytest-xdist

and run pytest with its -f (--looponfail) option, which re-runs the tests whenever a file in the project changes. So now, each time I modify a file, my tests run automatically, and each time I make a mistake I can roll back immediately.
Here is an example of my test suite running when everything is green:
Here is an example when it breaks, and how it spots the issue:
The feedback is very quick, and that's very comfortable.
Ask Amazon Q Inline chat to refactor your code
From my point of view, refactoring is at the heart of software development. It's very important, yet almost never practiced.
Refactoring is a technique where you improve the quality and evolvability of your code without changing its behavior.
Refactoring is not trivial, it takes time and requires reflection, but today you have a very good friend to help you out: Amazon Q Developer!
Q Developer has an impressive feature that I use to refactor my code: Q Inline Chat.
For example, I simply select the chunk of my code I want to refactor, then I press Ctrl-I, and type "refactor" in the dialog box:
It proposes a refactoring that I can accept or reject. In the screenshot below, what is in green is the new code, and what is in red is the code that will be removed!
That's what I like with refactoring, your code becomes much clearer, easier to understand and maintain.
Now that the code is refactored I can check if my tests are still passing.
If I am satisfied, I commit the update and even deploy it into production if I want.
Amazon Q plus test harnesses is a good combination for refactoring your code within minutes, but I also experimented with another technique using Q and tests that I found very interesting. Let me explain it.
Solve issues with Amazon Q test generation
During this project, I had an issue that was hard for me to fix.
I had two pieces of code that were supposed to do the same thing, but one was working and the other wasn't. And I couldn't see the differences.
This was a new piece of code that was not covered by the previous tests.
So, I first created a test for the function that was working using Q:
Then, I ran the test against my working code (the one I gave to Q) to make sure it was green.
Then, I replaced the working code in my function with the non-working code, and ran the test again.
The test was red, as expected, and it showed me a difference in an input data format! In fact, this input data format was slightly different between the two versions of the code.
Then I fixed it, and... here we are... this error will never happen again! Or at least, it will be detected very quickly.
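The technique above can be sketched in a few lines: pin down the working implementation with a test, then run that same test against the broken implementation to expose the difference. Both implementations and the "format" mismatch below are invented for illustration; the real project's bug was different in detail.

```python
# Two implementations that are "supposed to do the same thing".
def format_input_working(samples):
    # The good version: the shape the provider expects.
    return {"data": list(samples), "format": "pcm"}

def format_input_broken(samples):
    # The bad version: a subtly different input data format.
    return {"data": list(samples), "fmt": "pcm"}

def check_contract(format_input):
    """Characterization test, written against the working code."""
    payload = format_input([1, 2, 3])
    assert payload["data"] == [1, 2, 3]
    assert payload.get("format") == "pcm", f"unexpected shape: {payload}"

check_contract(format_input_working)  # green: contract captured

try:
    check_contract(format_input_broken)  # red: reveals the mismatch
    broken_passes = True
except AssertionError:
    broken_passes = False
```

Once the test pins down the contract, the assertion message points straight at what differs between the two versions.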
The final architecture
After a few iterations and many modifications (safeguarded by tests), I landed on a final architecture.
Here is the (too early) solution architecture that was proposed to me at first:
Here is what I really started with:
From there, I used Q to generate the infrastructure, the code and the unit tests. Then I refactored the code, solved issues, still using Q.
By the way, we discovered that the frontend could not extract audio as expected (something the 'too early' architecture had not anticipated).
So I introduced AWS Elemental MediaConvert (another serverless service) into the architecture, and here is what I ended up with:
This architecture was built on time; it was production-ready and has been used by real users!
That was a challenge I was able to overcome while keeping good quality and evolvability, thanks to my developing/architecting assistant. Maybe that will be the norm in the near future, but today... it blows my mind!
Conclusion
This project was ideal to demonstrate the benefits of Amazon Q Developer to kickstart new and innovative projects with good practices.
With Amazon Q Developer, I could rewrite dirty code, add unit tests, and put in place a short feedback loop. I also used it to refactor my code in order to keep quality high and allow evolutionary design. I don't think that would have been possible in such a short amount of time without it.
Of course, I don't think we can overcome every startup challenge with Amazon Q Developer; there are many other dimensions. I also wonder whether this approach applies every time, or to bigger projects. I don't have enough experience with it yet. But again, it looks very promising!