Stephen Sennett for AWS Community Builders

AI Coding Companions: My Experiences in 2023

One moment, it's like having your thoughts magically transcribed into code right before your eyes. The next, it's like a monkey with a typewriter belting out the whimsical rhymes of Dr. Seuss. Welcome to the wild and surprising world of AI Coding Companion tools!

Over the last year, I've been using Amazon CodeWhisperer, GitHub Copilot, and ChatGPT in various ways to make my hacky coding easier and more effective. I've condensed those experiences into some key reflections on what has definitely made my life easier.

✅ Writing Boilerplate Code

One of the things AI solutions excel at can best be described as "making the boring stuff go faster". We already know how to write this kind of code, but instead of going through the vendor docs or Stack Overflow, we can skip a few steps.

This is great for things like API calls (like writing out an S3 object upload), sketching out some logic flows, and a lot of the pro forma code that isn't necessarily challenging, but just takes time.

Python code written in Visual Studio Code, showing an S3 PutObject call suggested by CodeWhisperer autocomplete
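For illustration, the suggestion for that kind of prompt tends to land somewhere around this (a minimal boto3 sketch, not an actual CodeWhisperer output; the bucket and file names are placeholders):

```python
import boto3

# Upload a local file to S3 - the kind of boilerplate these tools
# will happily autocomplete from a one-line comment prompt
s3 = boto3.client("s3")

def upload_file(file_path, bucket, key):
    with open(file_path, "rb") as f:
        s3.put_object(Bucket=bucket, Key=key, Body=f)

upload_file("report.csv", "my-example-bucket", "uploads/report.csv")
```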

These suggestions can include errors which may not be immediately obvious. That isn't much of a problem when you're familiar with the topic, but when you're not...

❌ Handling Unfamiliar Concepts

Nobody is an expert at everything, but when writing code, you should definitely have some understanding of what you're trying to accomplish. AI tools let us shortcut this a bit, but they also magnify the problems.

Recently I needed to build something with Python and FFmpeg. And while FFmpeg is amazingly powerful, it can also be incredibly complicated.

Attempt at using CodeWhisperer to generate an action with FFmpeg

Trying to troubleshoot these errors was next to impossible without knowing more about how FFmpeg actually worked, so I ended up going back and writing everything by hand.
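For a sense of what "by hand" looked like, here's a minimal sketch of driving FFmpeg from Python via subprocess (illustrative placeholders, not my actual code, and it assumes the ffmpeg binary is on your PATH):

```python
import subprocess

# Trim the first ten seconds off a clip and re-encode it: simple,
# explicit, and much easier to debug when FFmpeg complains
command = [
    "ffmpeg",
    "-ss", "00:00:10",   # seek ten seconds into the source
    "-i", "input.mp4",   # source file
    "-c:v", "libx264",   # re-encode video with H.264
    "-c:a", "copy",      # pass the audio through untouched
    "output.mp4",
]
subprocess.run(command, check=True)
```

That said, even this process can be made easier.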

🧠 Troubleshooting Code Issues

Writing code can be hard, and especially when dealing with new languages, you can just get things wrong. Surprisingly, one of the best tools to help with this has been ChatGPT.

Because of the extensive capabilities of the GPT-4 model and the gargantuan corpus that powers it, ChatGPT is actually able to help pinpoint why some chunks of code didn't work as expected. Case in point: it was very helpful when I was trying to get some Flutter/Dart code working in an experience I posted about on LinkedIn.

Screenshot of a ChatGPT session asking why some code was not working, a snippet of some Dart code, and the insightful response returned by ChatGPT
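To give a flavour of what that looks like (an illustrative Python example, not the actual Dart snippet from that session), this is the classic kind of subtle bug these models are good at pinpointing:

```python
# Bug: the default list is created once at definition time and
# shared across every call to the function
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

print(add_tag("a"))  # ['a']
print(add_tag("b"))  # ['a', 'b'] - the list persisted between calls!

# The usual fix, which a model will typically spell out for you:
# use None as a sentinel and create a fresh list inside the function
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```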

Uploading a code snippet from a personal side project to ChatGPT isn't much of an issue, but it does speak to the concerns many people have around one major topic...

🔐 Security Implications

Uploading highly sensitive code into an unknown API that is given full access to your hard work is unnerving, and does come with risks, especially in a corporate context.

Amazon CodeWhisperer, GitHub Copilot, and ChatGPT can all be configured to prevent them from learning from your inputs (check the links for more info). You're probably already trusting these organizations to store your data anyway (especially AWS and GitHub/Microsoft); the real danger lies in your code being served up to other customers.

Checkbox in the Amazon CodeWhisperer settings in Visual Studio Code that prevents it from learning from your code

Whether your organization deems the use of AI coding tools acceptable depends on a lot of factors (legislative, compliance, regulatory, etc.), but if there's PII or commercially sensitive information, I'd err on the side of caution.

🖊 Documenting Code

This was a surprising little gimmick. We can use natural-language comments to prompt the tool to generate code, but we can also do the same in reverse.

By starting a comment above a block of code, you can prompt the model to write a description of what the function is doing within the context of the file.

Example showing how an autocomplete suggestion can document a function with a comment based on existing code
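To give a feel for it, here's a hypothetical example (the function is mine, and the suggested comment text is illustrative rather than an actual CodeWhisperer output):

```python
def calculate_total(items):
    subtotal = sum(item["price"] * item["quantity"] for item in items)
    if subtotal > 100:
        return round(subtotal * 0.9, 2)
    return subtotal

# Start typing a comment above the function and the tool may offer
# a completion along the lines of:
# "Calculates the order total from a list of items, applying a 10%
# discount to subtotals over 100"
```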

This... is actually pretty cool. It's not infallible, and so far these tools can only read within the context of the current file. But I could see this being useful for navigating an unfamiliar codebase.

😒 Fighting the UI

Integration with the IDE makes using some of these tools incredibly easy, but not always for the better. The autocomplete suggestions can be remarkably quick while writing code. Sometimes too quick.

Coding often requires a focused, uninterrupted thought process, and interruptions by autocomplete suggestions can disrupt that. Likewise, occasionally the multi-line suggestions will cause unexpected behaviour when navigating or manipulating the code you've already written.

Simpsons meme of Lenny

These tools are great, but often it can be useful just to hit the Pause button for a while. Though I hope future updates can make these experiences more seamless.

Conclusion

Generative AI-supported development is here to stay. It's an inescapable fact, and it will continue to change the nature of the industry. It's not perfect, and it can create its own headaches, but it's also very much worth exploring.

If you've tried some of these tools, share your thoughts in the comments! If not, I encourage you to give them a try.

Top comments (8)

Victor Dorneanu

I've used GitHub Copilot for a while, but I'm not yet convinced to pay that much for a service I'd only use once in a while. Tabnine, for example, is still OK for me, as it suggests contextual code. It's free and available for most IDEs.

I'm looking forward to Code Llama and how well it integrates with my IDE (Emacs).

Stephen Sennett

Fair enough too - I've found CodeWhisperer being free for individual use to be a deciding factor for keeping it and dropping Copilot. The differentiation was just meh 🤷‍♂️

That's cool - I didn't realize Code Llama had an Emacs version in the works! Definitely makes a difference bringing the tools to where you'd actually use them, rather than having to change your process.

Victor Dorneanu

"That's cool - I didn't realize Code Llama had an Emacs version in the works!"

This is the client: github.com/kurnevsky/llama-cpp.el

"Definitely makes a difference bringing the tools to where you'd actually use them, rather than having to change your process."

Indeed.

Darya Petrashka

Thanks for your post, Stephen!
I did notice as well that some AI outputs can be unpredictable 😳 It was kind of fun, especially when I tried the 'fix bug' brush from Copilot Labs. A piece of code could change entirely 😆

Stephen Sennett

Oh wow - I hadn't tried that feature... Wonder how it works - trying to solve a bug from the code alone, without seeing the output, would be a fruitless endeavour. Unless it also reads the output from the execution, as well as the code? Interesting...

Sometimes it definitely gets easier just to hit "Pause" and do it ye olde fashioned way!

Darya Petrashka

I doubt that it evaluates the output, given the ridiculous suggestions it can make. 😳

Daniel German Rivera

Good post! I have been testing Amazon CodeWhisperer (for Golang), and it sometimes adds more complexity. I am not a developer, but just using the AI tool will not give you everything; you still need basic logic and coding knowledge.
CodeWhisperer has been good with the AWS SDK, but maybe not as good as some people think; the AI tool will not do almost everything for you.
Also, I have been thinking about the security implications, and whether all companies would agree to the use of this kind of tool.

Stephen Sennett

That's a big part I think a lot of people miss - you can't rely on the tool to do everything. At the end of the day, it's just an LLM that's stringing together "things that sound right". Pretty good for some things, atrocious for others.

I've actually found the opposite with the AWS SDK. Most people I've talked to assume that's all it can do, and to be fair, even AWS mostly uses it that way in their own examples, but I'm pretty surprised how well it does with other libraries that have nothing to do with AWS. In fact, that was a finding I maybe should have discussed more. For context, most of my work is in Python, and it wouldn't surprise me if the experience varies by language, since it's a totally different corpus.

The security implications are 100% a problem for enterprise adoption, both in the case of corporate data being used to train models for public use, and of corporate data being sent externally for inference. Like most LLM technologies, private endpoints will make a difference, but that's gonna come down to time, money, and whether there's a market for them.

Appreciate your thoughts.
