Comprehend Medical detects and returns useful information in unstructured clinical text.
For example, if your doctor is taking notes, or if you have discharge summaries, test results, or case notes, Comprehend Medical will use natural language processing (NLP) to detect all the protected health information (PHI) within the document, through the DetectPHI API.
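To make this concrete, here is a minimal sketch of calling the DetectPHI API with boto3. The `client` parameter is my addition so the function can be exercised without real AWS credentials; the sample text is hypothetical.

```python
def detect_phi(text, client=None):
    """Detect protected health information (PHI) in clinical text.

    Calls the Comprehend Medical DetectPHI API. The client is
    injectable for testing; by default a real boto3 client is built
    (requires AWS credentials).
    """
    if client is None:
        import boto3  # only needed when no client is injected
        client = boto3.client("comprehendmedical")
    response = client.detect_phi(Text=text)
    # Each detected entity carries a type (NAME, AGE, DATE, ...),
    # the matched text, and a confidence score.
    return [(e["Type"], e["Text"], e["Score"]) for e in response["Entities"]]
```

You would call it as `detect_phi("Patient John Doe, age 45, was admitted on ...")` and get back a list of `(type, text, score)` tuples for each piece of PHI found.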
So from an architecture perspective, you would store whatever documents you have in Amazon S3 and then invoke the Comprehend Medical API. Or you could send the data through Kinesis Data Firehose and analyze it in real time.
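The S3 pattern above can be sketched as follows: fetch the document from S3, then run Comprehend Medical entity detection (here the DetectEntitiesV2 API) on its text. The bucket and key names are placeholders, and the injectable clients are my addition for testability.

```python
def analyze_clinical_note(bucket, key, s3=None, cm=None):
    """Fetch a clinical document from S3 and detect medical entities.

    Clients are injectable for testing; real boto3 clients are built
    by default (requires AWS credentials).
    """
    if s3 is None or cm is None:
        import boto3
        s3 = s3 or boto3.client("s3")
        cm = cm or boto3.client("comprehendmedical")
    # Read the stored document's text from S3.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    # DetectEntitiesV2 returns medications, conditions, PHI, and more.
    response = cm.detect_entities_v2(Text=body)
    return {e["Category"] for e in response["Entities"]}
```

For example, `analyze_clinical_note("my-clinical-bucket", "notes/visit-001.txt")` would return the set of entity categories (such as `MEDICATION` or `MEDICAL_CONDITION`) found in that note.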
Or you can use Amazon Transcribe to first transcribe the voice into text, as we saw before, and once it's in text form we can pass it to the Amazon Comprehend Medical service.
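The voice-to-text pipeline can be sketched like this: start an asynchronous Transcribe job, poll until it completes, fetch the transcript JSON, then run DetectPHI on the text. The audio URI and job name are placeholders, and the injectable clients and `fetch` hook are my additions so the flow can be tested offline.

```python
import json
import time
import urllib.request


def transcribe_then_detect_phi(audio_uri, job_name,
                               transcribe=None, cm=None, fetch=None):
    """Transcribe a clinical audio recording, then detect PHI in the text.

    Clients and the transcript fetcher are injectable for testing;
    real AWS clients are built by default (requires credentials).
    """
    if transcribe is None or cm is None:
        import boto3
        transcribe = transcribe or boto3.client("transcribe")
        cm = cm or boto3.client("comprehendmedical")
    fetch = fetch or (lambda url: urllib.request.urlopen(url).read())

    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": audio_uri},
        LanguageCode="en-US",
    )
    # Transcription is asynchronous, so poll until the job finishes.
    while True:
        job = transcribe.get_transcription_job(TranscriptionJobName=job_name)
        status = job["TranscriptionJob"]["TranscriptionJobStatus"]
        if status in ("COMPLETED", "FAILED"):
            break
        time.sleep(5)
    if status == "FAILED":
        raise RuntimeError("transcription job failed")

    # The transcript is delivered as a JSON file at a presigned URL.
    url = job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"]
    text = json.loads(fetch(url))["results"]["transcripts"][0]["transcript"]
    return cm.detect_phi(Text=text)["Entities"]
```

In practice you would point `audio_uri` at a recording in S3 (for example `s3://my-bucket/visit.wav`) and feed the returned PHI entities into whatever downstream redaction or compliance step you need.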