DEV Community

Ashok Nagaraj


Auto-instrumentation: No-code approach to instrument Python

In the previous post we instrumented a Python application with OTel by hand, emitting metrics and traces along the way. While that works, it can still be intimidating to an instrumentation newbie.
To lower this barrier and allow a quick start, some languages offer auto-instrumentation support: you only need to install a few libraries to get going.

Our sample web-app

import datetime
import flask

######################
## initialization
######################
app = flask.Flask(__name__)
start = datetime.datetime.now()

######################
## routes
######################
@app.route('/', methods=['GET'])
def root():
  return flask.jsonify({'message': 'flask app root/'})

@app.route('/healthz', methods=['GET'])
def healthz():
  now = datetime.datetime.now()
  return flask.jsonify({'message': f'up and running since {(now - start)}'})

######################
if __name__ == '__main__':
######################
  app.run(debug=True, host='0.0.0.0', port=5000)


Install necessary libraries

$ pip install flask
$ pip install opentelemetry-distro opentelemetry-instrumentation-flask

Note: a corner case made me install opentelemetry-api as well, though it is not needed per the official documentation.

Initialize: opentelemetry-bootstrap detects the packages installed in your environment and installs the matching instrumentation libraries (run it with `-a requirements` instead to just print the list without installing)
$ opentelemetry-bootstrap -a install

Run your code. One caveat: Flask's debug reloader spawns a child process that can interfere with auto-instrumentation, so if no telemetry shows up, try disabling it with `flask run --no-reload`.

$ opentelemetry-instrument --traces_exporter console --metrics_exporter console flask run
 * Serving Flask app 'app'
 * Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5000
 * Running on http://192.168.1.7:5000
Press CTRL+C to quit
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 121-673-590
127.0.0.1 - - [17/Oct/2022 07:21:05] "GET /healthz HTTP/1.1" 200 -
{
    "name": "/healthz",
    "context": {
        "trace_id": "0xd0850752865577d2d8cd11aaef169574",
        "span_id": "0x29c8ad5fd974de41",
        "trace_state": "[]"
    },
    "kind": "SpanKind.SERVER",
    "parent_id": null,
    "start_time": "2022-10-17T01:52:45.522806Z",
    "end_time": "2022-10-17T01:52:45.523615Z",
    "status": {
        "status_code": "UNSET"
    },
    "attributes": {
        "http.method": "GET",
        "http.server_name": "127.0.0.1",
        "http.scheme": "http",
        "net.host.port": 5000,
        "http.host": "localhost:5000",
        "http.target": "/healthz",
        "net.peer.ip": "127.0.0.1",
        "http.user_agent": "curl/7.79.1",
        "net.peer.port": 55838,
        "http.flavor": "1.1",
        "http.route": "/healthz",
        "http.status_code": 200
    },
    "events": [],
    "links": [],
    "resource": {
        "attributes": {
            "telemetry.sdk.language": "python",
            "telemetry.sdk.name": "opentelemetry",
            "telemetry.sdk.version": "1.13.0",
            "telemetry.auto.version": "0.34b0",
            "service.name": "unknown_service"
        },
        "schema_url": ""
    }
}
{"resource_metrics": [{"resource": {"attributes": {"telemetry.sdk.language": "python", "telemetry.sdk.name": "opentelemetry", "telemetry.sdk.version": "1.13.0", "telemetry.auto.version": "0.34b0", "service.name": "unknown_service"}, "schema_url": ""}, "scope_metrics": [], "schema_url": ""}]}

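Notice that the span above reports `service.name` as `unknown_service`, because we never named the service. `opentelemetry-instrument` also honors the standard OTel SDK environment variables, so the exporters and the service name can be configured without CLI flags (`flask-demo` is an arbitrary example name):

```shell
# Standard OTel SDK environment variables, equivalent to the CLI flags above;
# "flask-demo" is an arbitrary example name.
export OTEL_SERVICE_NAME=flask-demo
export OTEL_TRACES_EXPORTER=console
export OTEL_METRICS_EXPORTER=console
```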

As of this post, the most popular frameworks, including Django, FastAPI and Flask, have instrumentation libraries with support for HTTP context propagation.
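Under the hood, that propagation rides on the W3C Trace Context `traceparent` HTTP header. Here is a minimal, dependency-free sketch of its layout; the ids are copied from the console span printed above, and `parse_traceparent` is a hypothetical helper for illustration, not an OTel API:

```python
# The Flask/Django/FastAPI instrumentations read and write a "traceparent"
# header shaped as: version-trace_id-span_id-trace_flags (lowercase hex).

def parse_traceparent(header: str) -> dict:
    """Split a traceparent header value into its four fields."""
    version, trace_id, span_id, flags = header.split("-")
    assert len(trace_id) == 32 and len(span_id) == 16
    return {
        "version": version,
        "trace_id": trace_id,
        "span_id": span_id,
        "sampled": flags == "01",
    }

# The ids below match the console span printed above (0x prefix stripped).
header = "00-d0850752865577d2d8cd11aaef169574-29c8ad5fd974de41-01"
print(parse_traceparent(header)["trace_id"])
```

A downstream service that receives this header continues the same trace by using `trace_id` and treating `span_id` as the parent span.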

Code size implications

Auto-instrumentation does pull in some extra libraries. Here is the result in my case:

$ du -sh manual/venv/ auto/venv/
 29M    manual/venv/
 30M    auto/venv/
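If you want to reproduce this comparison without `du`, a rough Python equivalent that sums file sizes under a directory looks like this (the commented-out paths mirror my setup above):

```python
import os

def dir_size_bytes(root: str) -> int:
    """Rough equivalent of `du -s`: sum regular file sizes under root."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path):  # skip symlinks, like du does by default
                total += os.path.getsize(path)
    return total

# Paths from the comparison above:
# print(dir_size_bytes("manual/venv") / 2**20, "MiB")
# print(dir_size_bytes("auto/venv") / 2**20, "MiB")
```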

Always refer to the official documentation, which is kept up to date:
Official documentation
