DEV Community

dllewellyn

Gen AI + Already proven practices = better Dev teams

Using Generative AI to change how we do product engineering

A failure of imagination

Since the Generative AI starting gun was fired, we've seen a lot of people using AI to do the things they already do, just a bit better. To be fair, the results are positive: The Pragmatic Engineer laid out some of the ways people are already using generative AI, especially GitHub Copilot, to make themselves more productive.

https://newsletter.pragmaticengineer.com/p/ai-tooling-2024

But what's missing is re-thinking the whole process from scratch.

If I had asked people what they wanted, they'd have said a faster horse - Henry Ford

It's hard to break out of the constraints that come from working in a particular environment, in a particular way, for a long time. Developers will look at how they can make themselves more productive and write higher-quality software; product managers will look at how they can research better or build better personas; business analysts will look at capturing requirements and writing better tickets; and designers will look at how they can build their high-fidelity designs better.

Without getting people to think beyond their current jobs, the plateau from AI is likely to arrive really quickly - it's probably not going to be trusted to build large, complex software any time soon, so all it can do is make existing work a bit faster (or maybe a lot faster).

There are probably a few killer ideas out there, and I'm sure we're going to see some revolutionary new approach soon - but in the meantime, there are ideas that we could and should already be adopting, which have already been proven to work, and which are much easier to adopt when you have AI to help.

There are also some practices that are currently acknowledged as things we shouldn't do - writing comments for every function is a good example - yet the desire to do them persists. There is value in them, and AI can likely make them easier, more effective and more palatable.

Things we should already be doing

Small teams

The problem

Brooks's law is an observation about software project management: "Adding manpower to a late software project makes it later." - The Mythical Man-Month

We see the concept of small teams over and over: it's a core part of Agile and DevOps, and 'Getting Real' even advocates a team as small as three for version 1.0. That small teams are needed isn't really controversial - the challenge comes when we add two other factors:

  • Teams should be mostly self-sufficient - Accelerate demonstrates that hand-offs to separate QA teams for testing are bad, and that architecture review boards are bad. The entire concept of DevOps rests on the idea that dev-to-ops hand-offs are bad.
  • We have a lot of job titles and specialisms - If we need a few cloud engineers for the infrastructure, an architect to design it, a UI/UX designer for the UI, a few frontend engineers for the frontend, a few backend engineers for the backend, a security person, a business analyst, a product manager, someone to look after delivery, a QA for testing, and maybe an automation engineer for the pipelines (and on it goes) - how can we ever include all those roles in one team?

The solution

The logical answer is to make your team a team of generalists.

Small teams need people who can wear different hats. You need designers who can write. You need programmers who understand design. - Getting Real, 37signals

By cross-training and growing engineering skills, generalists can do orders of magnitude more work than their specialist counterparts, and it also improves our overall flow of work by removing queues and wait times - The DevOps Handbook, Gene Kim et al.

So, the solution is easy - make everyone generalists, right? This is as much a cultural challenge as anything else; but from a purely skills-based point of view, there's a great opportunity to use AI.

AI as a solution

People might balk at the idea of asking AI to help find security issues, assess designs against user personas, look for edge cases when using API libraries, generate automated tests or help write IaC - but by focusing on 'knowing when to ask for help' rather than knowing how to do everything perfectly, you end up with much more efficient teams that are far more likely to focus on the right tasks at the right time.

This is obviously controversial - as is the basic idea of a full-stack engineer - but I would argue that with some discipline from individual engineers to make sure they understand what they've generated, and to get AI to double-check things, it's a perfectly safe and very productive way to work.
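One way to make 'knowing when to ask for help' concrete is to keep a small, shared library of specialist prompts any team member can reach for. This is just a sketch of the idea - the roles, wording and function names here are my own invention, not a fixed taxonomy:

```python
# A minimal sketch of the "know when to ask for help" idea: rather than
# mastering every specialism, a generalist team keeps role-specific prompt
# templates to fire at an AI assistant. Roles and wording are illustrative.

REVIEW_PROMPTS = {
    "security": (
        "Review this code for injection, authentication and "
        "secrets-handling issues:\n{code}"
    ),
    "testing": (
        "Suggest edge cases and property-based tests for this code:\n{code}"
    ),
    "iac": (
        "Check this infrastructure-as-code for missing tags, overly open "
        "network rules and drift risks:\n{code}"
    ),
}


def build_review_prompt(role: str, code: str) -> str:
    """Fill in the review template for the given specialist role."""
    if role not in REVIEW_PROMPTS:
        raise ValueError(f"no prompt for role {role!r}")
    return REVIEW_PROMPTS[role].format(code=code)
```

The point isn't the prompts themselves; it's that the team's accumulated 'how to ask' knowledge lives somewhere everyone can use it, instead of inside one specialist's head.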

One thing that AI can't help with, though (or maybe I'm just drawing a blank...), is the cultural change needed to make this happen.

Design to Dev hand-offs

The problem

Let's expand the generalists point to include not just engineers but other roles in the team. DevOps is heavily influenced by the idea of how difficult handing off from dev to ops is, but design to dev is hardly a smooth transition either - especially if weeks went into finalising the design and agreeing it with stakeholders.

The idea

There's already another school of thought on this:

In their book "Getting Real" (https://basecamp.com/gettingreal/06.3-from-idea-to-implementation) the 37signals team lay out their process for getting to new features:

Paper sketches > Create HTML screens > Code It

Throughout the book, they emphasise going straight from paper sketches to HTML - a sentiment that is repeated in Rework and is evident in their Hotwire suite of open-source tools.

Getting some HTML, CSS and JavaScript out to customers for them to try is a much truer test than 'clickable' prototypes. 37signals advocate having devs do a bit of design, and designers do bits of HTML - that can be a bit of a tall order, but...

AI as a solution

With some prompting, designers don't even need to learn much HTML/CSS. Some of my colleagues at Create Future (Angus Allan in particular) have produced some pretty impressive websites and games with just prompting - so, what if we skip the 'high fidelity' designs that get passed over to developers? AI makes it easy to change, too: you can sit with a customer, modify your page live, refresh - "How's that?" - much better to iterate in minutes than in weeks.

Even better, you can take your hand-drawn sketches, upload them to ChatGPT, and ask it to turn them into a website for you.
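The whole 'edit, refresh, "how's that?"' loop needs nothing more than a file and a static server. Here's a throwaway sketch of that workflow - the markup, file name and port are all made up for the example; in practice you'd paste in whatever the AI generated from the sketch:

```python
# Sketch of the paper-sketch-to-HTML iteration loop: keep the prototype as
# a plain HTML string (hand-written or pasted from an AI assistant), write
# it out, and serve the directory so the customer just hits refresh.
from http.server import HTTPServer, SimpleHTTPRequestHandler
from pathlib import Path

PROTOTYPE = """<!doctype html>
<html>
  <body>
    <h1>Sign-up sketch v1</h1>
    <form><input placeholder="email"><button>Notify me</button></form>
  </body>
</html>
"""


def write_prototype(path: str = "index.html") -> Path:
    """Write the current markup to disk; re-run after each tweak."""
    out = Path(path)
    out.write_text(PROTOTYPE)
    return out


if __name__ == "__main__":
    write_prototype()
    # Serves the current directory on http://localhost:8000 -
    # change PROTOTYPE, re-run, and ask the customer to refresh.
    HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()
```

No build step, no design tool, no hand-off - which is exactly the point 37signals are making.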

Picking the right tech-stack for the problem

The problem

In monolithic architectures and code-bases, we usually have to pick a single tech stack and accept that it's going to be the best fit for some things and not for others. If we're lucky, we optimised for the big problems - the best tech stack for the problems we face most often; if we're not, we're battling our setup to make it work.

The solution

With a system composed of multiple, collaborating microservices, we can decide to use different technologies inside each one. This allows us to pick the right tool for each job rather than having to select a more standardized, one-size-fits-all approach that often ends up being the lowest common denominator. - Sam Newman, Building Microservices, 2nd Edition

But what happens in reality is that teams tend to pick a particular stack and only work in that. Sometimes the driver is outside the team - it's reasonable to want engineers to move between teams without the programming language or tech being a barrier - but often it comes from the teams themselves: people just pick the tech they know.

AI as a solution

One of the benefits of microservices is allowing teams to pick the best tech for their specific problem. While it's difficult to break people out of the comfort of their tech stack, gen AI can at least help with the skills.

If you 'know enough' of a coding language to get things done, AI can help find issues in your code; if you don't know much about the language's ecosystem, it can help you research the libraries people use, assess your code against best practices, suggest how to convert from a language you know to one you don't, debug code, or explain how to debug it.

It can also help with picking up other people's projects - all the big AI coding assistants have an /explain option. Probably the biggest thing it can do, though, is help teams make the right decision in the first place: outlining your service and its functionality, and getting AI's suggestions for the best technology to use, can take away some of the habitual nature of tech selection that engineers often exhibit.

Challenging existing assumptions

As well as making some of the 'best practices' (or just alternative ways of working) more doable, gen AI can start to challenge some of our existing assumptions and make us more productive by reviving things we currently think of as a bit antiquated.

Don't write comments

Why am I so down on comments? Because they lie. Not always, and not intentionally, but too often. The older a comment is, the further it is from the code it describes - Clean Code, Uncle Bob

It's difficult to argue with the claim that comments don't get updated and reviewed the way regular code does - but one neat feature in IntelliJ's AI Assistant is generating doc comments from the function itself. What if, every time you saved a file, it checked whether your comment was still true and updated it if not?
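You can get surprisingly far towards 'check the comment on save' with plain static analysis, before any AI is involved. Here's a deliberately simple sketch that flags docstrings mentioning parameters the function no longer has - in a real setup, the flagged functions are what you'd hand to an AI assistant to rewrite. The `:param name:` convention is an assumption (it's the Sphinx field-list style):

```python
# Heuristic staleness check: parse the source with the ast module and flag
# any function whose ":param x:" docstring entries name parameters that no
# longer exist in its signature.
import ast
import re


def stale_docstrings(source: str) -> list[str]:
    """Return names of functions whose docstrings mention unknown params."""
    stale = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            doc = ast.get_docstring(node) or ""
            documented = set(re.findall(r":param (\w+):", doc))
            actual = {arg.arg for arg in node.args.args}
            if documented - actual:  # doc mentions a param the code lost
                stale.append(node.name)
    return stale
```

Wire something like this into a save hook or CI, and "comments lie" becomes "comments get caught lying" - with the AI step turning the catch into a fix.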

This is sort-of useful for most development - it means you can get good information about what a function does without reading it. It's really useful for public APIs, where you often do exactly that, and it's super useful for documenting UI components and other kinds of 'interface' with your team.

Functional specs are fantasies

Architectural diagrams, UIs in Figma, documentation, threat models, test cases, runbooks and application catalogues - they're all undoubtedly very useful, but even the most simple project has multiple versions of the truth. For a long while, the common approaches have been to assume nothing is true, to spend onerous amounts of time reconciling docs with reality, or to try (valiantly, but ultimately unsuccessfully) to generate them automatically from code.

We should revisit these things: we could scan Jira tickets, test tickets, videos of demos and designs, look for inaccuracies, and update them automatically with gen AI. The real benefit is that the people who need the docs (and they really do, sometimes) have them, and everyone can trust that what they're looking at is up to date.
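The first half of that pipeline - spotting where docs and reality disagree - doesn't even need AI; the AI's job is rewriting once drift is found. Here's an illustrative sketch that compares the services a runbook claims against the service directories actually in the repo. The `- service:` line format and the directory layout are assumptions invented for this example:

```python
# Drift detection sketch: diff the services a runbook names against the
# service directories that actually exist. The resulting report is what a
# gen AI step could use to rewrite the doc automatically.
import re
from pathlib import Path


def doc_drift(runbook_text: str, services_dir: Path) -> dict[str, set[str]]:
    """Return services missing from the doc, and services the doc invents."""
    documented = set(
        re.findall(r"^- service: (\S+)", runbook_text, re.MULTILINE)
    )
    actual = {p.name for p in services_dir.iterdir() if p.is_dir()}
    return {
        "undocumented": actual - documented,  # code exists, doc is silent
        "removed": documented - actual,       # doc mentions, code is gone
    }
```

Swap 'runbook vs directories' for 'threat model vs endpoints' or 'test plan vs test files' and the same shape applies: a cheap diff produces the work list; gen AI does the tedious rewriting.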

Putting it all together

So what do you have if you put it all together?

You have a smaller team, capable of doing more tasks within it - your designers upload their hand-drawn sketches to ChatGPT and turn them into HTML to test with end users, then use documentation automatically generated from components and OpenAPI specs to make it functional.

When they're done, the devs can quickly clean it up while the team converts the ideas into tickets without too much hassle.

The devs are programming in languages that are genuinely optimised for the jobs they're doing, and are living the DevOps dream - getting AI to help with tests, IaC and all the other things gen AI is good at. When they release the code, it updates architecture diagrams, flags anything that wasn't in the original designs or tickets, and highlights any missing test cases - and then all the supporting teams have their docs and user guides ready to roll.

It's probably not the end-state, but it's a good start.
