I have split my development workflow into three phases and will explain each of them in detail:
Development
This is the primary and most important phase for any developer. This can be further categorized into bugs and features, but for simplicity, let us stick to a common development workflow.
AI needs sufficient, precise context to produce the best results. In our case, we must provide the feature specification or the bug details (ideally with reproduction steps and collected logs) and feed these details to the assistant. Brownie points if you attach only the relevant functions instead of the entire codebase; focused context yields more fruitful results.
For feature implementation especially, sharing a reference (ideally a similar flow in the same codebase) gives the assistant something concrete to follow instead of hallucinating its own implementation.
Sharing an example prompt that I used for a small feature implementation:
```
Add count with filtering operations to the QdrantDocumentStore

`count_documents_by_filter`: count documents matching a filter
`get_metadata_fields_info`: get metadata field names and their types
`get_metadata_field_min_max`: get min/max values for numeric/date fields
`count_unique_metadata_by_filter`: count unique values per metadata field with filtering
`get_metadata_field_unique_values`: get paginated unique values for a metadata field   <<<<< Detailed explanation about each function

Both sync and async versions. Also, add integration tests for all new operations (sync and async)   <<<<< Testing

Check `class WeaviateDocumentStore()` for reference   <<<<< Provide sample reference
```
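To make the request concrete, here is a minimal sketch of the shape such an API could take. This is a hypothetical in-memory stand-in, not the real QdrantDocumentStore implementation (which would query the Qdrant backend); only two of the five operations are sketched.

```python
from typing import Any, Dict, List

# Hypothetical in-memory stand-in for a document store; the real
# QdrantDocumentStore methods would query the Qdrant backend instead.
class InMemoryDocumentStore:
    def __init__(self, documents: List[Dict[str, Any]]):
        self._documents = documents

    def count_documents_by_filter(self, filters: Dict[str, Any]) -> int:
        # Count documents whose metadata matches every key/value in `filters`.
        return sum(
            1
            for doc in self._documents
            if all(doc.get("meta", {}).get(k) == v for k, v in filters.items())
        )

    def get_metadata_fields_info(self) -> Dict[str, str]:
        # Map each metadata field name to the Python type name of its values.
        info: Dict[str, str] = {}
        for doc in self._documents:
            for key, value in doc.get("meta", {}).items():
                info.setdefault(key, type(value).__name__)
        return info

store = InMemoryDocumentStore(
    [
        {"content": "a", "meta": {"lang": "en", "year": 2023}},
        {"content": "b", "meta": {"lang": "en", "year": 2024}},
        {"content": "c", "meta": {"lang": "de", "year": 2024}},
    ]
)
print(store.count_documents_by_filter({"lang": "en"}))  # 2
print(store.get_metadata_fields_info())  # {'lang': 'str', 'year': 'int'}
```

Attaching a sketch like this (or the actual reference class) alongside the prompt narrows the design space the assistant has to guess at.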
One interesting thing I have encountered concerns code formatting and static type checking. Whatever model you choose, the output will follow the formatting conventions the model was trained on. The solution is to provide your pyproject.toml, which defines the ruff, lint, and static type checking options.
```
Use the following directions to format the code:

[tool.hatch.envs.default.scripts]
[tool.hatch.envs.test.scripts]
[tool.ruff.lint]
```
Best practice is to use this prompt after code generation, so that you preserve context and let the model focus on logic first rather than cosmetic changes.
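For illustration, this is the kind of pyproject.toml excerpt you might paste alongside that prompt. The section names follow the post; the specific keys and values here are invented examples, not the project's actual configuration:

```toml
# Illustrative excerpt only -- substitute your project's real settings.
[tool.ruff]
line-length = 120
target-version = "py38"

[tool.ruff.lint]
select = ["E", "F", "I", "UP"]  # pycodestyle, pyflakes, isort, pyupgrade

[tool.hatch.envs.default.scripts]
format = "ruff format ."
lint = "ruff check . --fix"
```

Giving the model the literal config removes any ambiguity about line length, import ordering, or lint rules it should conform to.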
Documentation
This phase is the easiest and can save a ton of your time if utilized properly. Instead of writing from scratch, you can ask the assistant to generate:
- Docstrings
- API documentation
- Usage examples
- Release notes
```
Write a changelog entry for this feature.

Feature: metadata filtering operations in QdrantDocumentStore

Include:
- summary
- new APIs added
- backward compatibility notes
- sample minimal usage
```
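The docstring side works the same way. As a sketch, here is the shape of docstring an assistant might generate for one of these operations; the function name is taken from the prompt above, but the signature and body are hypothetical stubs for illustration:

```python
from typing import Any, Dict

# Hypothetical stub -- only the generated docstring shape matters here.
def count_documents_by_filter(filters: Dict[str, Any]) -> int:
    """Count documents whose metadata matches the given filter.

    :param filters: Mapping of metadata field names to required values,
        e.g. ``{"lang": "en"}``.
    :returns: Number of documents matching every filter condition.

    Usage example::

        store.count_documents_by_filter({"lang": "en"})
    """
    raise NotImplementedError  # illustrative stub, no real backend here

# The generated docstring is immediately inspectable and testable.
print(count_documents_by_filter.__doc__.splitlines()[0])
# Count documents whose metadata matches the given filter.
```

Because docstrings are plain strings, you can even lint or spot-check the generated documentation programmatically before committing it.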
This approach ensures that documentation stays consistent, structured, and updated alongside code changes. Most importantly, you can generate documentation for an older codebase, which is a golden asset, not just for new or updated code.
Testing
Testing is another area where AI assistants excel and can push your code to its limits. Instead of manually writing test suites and cases, you can ask the assistant to generate:
- Unit tests
- Integration tests
- Edge cases
- Mock APIs
- Sync/Async-based testing
Sample prompt:
```
Write test cases for the APIs:
count_documents_by_filter
get_metadata_fields_info
get_metadata_field_min_max
count_unique_metadata_by_filter
get_metadata_field_unique_values

- cover both sync and async versions
- include realistic metadata examples
- validate correct filtering behavior
```
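For a sense of what that prompt might yield, here is a minimal sketch of sync and async test cases. The `StubStore` below is a hypothetical in-memory stand-in so the example is runnable here; real integration tests would exercise an actual Qdrant instance:

```python
import asyncio
from typing import Any, Dict, List

# Hypothetical stub standing in for the real document store so the test
# shape is self-contained; real integration tests would hit Qdrant.
class StubStore:
    def __init__(self, documents: List[Dict[str, Any]]):
        self._docs = documents

    def count_documents_by_filter(self, filters: Dict[str, Any]) -> int:
        return sum(
            1
            for d in self._docs
            if all(d["meta"].get(k) == v for k, v in filters.items())
        )

    async def count_documents_by_filter_async(self, filters: Dict[str, Any]) -> int:
        # In this stub the async variant just delegates to the sync logic.
        return self.count_documents_by_filter(filters)

# Realistic metadata examples, as the prompt requests.
DOCS = [
    {"meta": {"lang": "en", "year": 2023}},
    {"meta": {"lang": "en", "year": 2024}},
    {"meta": {"lang": "de", "year": 2024}},
]

def test_count_sync() -> None:
    store = StubStore(DOCS)
    assert store.count_documents_by_filter({"lang": "en"}) == 2
    assert store.count_documents_by_filter({"lang": "en", "year": 2024}) == 1

def test_count_async() -> None:
    store = StubStore(DOCS)
    assert asyncio.run(store.count_documents_by_filter_async({"year": 2024})) == 2

test_count_sync()
test_count_async()
print("all tests passed")
```

In practice you would let pytest discover the `test_*` functions rather than calling them directly; the direct calls here just keep the sketch runnable on its own.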
Using AI for testing ensures:
- better code coverage even before you run coverage tools such as Codecov
- faster test case generation
- fewer overlooked edge cases
The prompts above show the kind of request-response exchange I had with the assistant while adding support for this small feature.
Final thoughts
Always remember: you are the reviewer of the assistant's output. You can depend on AI, but you shouldn't become dependent on it.