The official Flutter camera plugin is a useful tool for building camera scanning applications on Android, iOS, and web platforms. However, those who want to use it on Windows may be disappointed. Fortunately, there is a camera_windows plugin currently under development, located in the same repository as the Android, iOS, and web plugins. It's important to note that the Windows plugin is still in development and not yet ready for production use. It also lacks the most basic feature for scanning: image streaming. In this article, I will guide you through porting the camera plugin to support camera frame callbacks. By doing so, you will be able to leverage the camera stream for image processing, such as recognizing barcodes and QR codes in real time.
How the Flutter Windows Plugin Captures and Displays Camera Frames
Let's dive into the camera Windows plugin to gain a better understanding of how it captures camera frames in C++ and displays them in Flutter. You can find the source code for this plugin at https://github.com/flutter/packages/tree/main/packages/camera/camera_windows.
To develop a camera application in C++ for Windows, you need to use the Media Foundation API, which provides a set of COM interfaces for accessing media services.
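Before any Media Foundation object can be created, the framework itself has to be started up. As a minimal, self-contained sketch of that boilerplate (illustrative only, not the plugin's actual code), initialization and teardown look roughly like this:

#include <windows.h>
#include <mfapi.h>

#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "ole32.lib")

int main() {
  // Initialize COM for the current thread.
  HRESULT hr = CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
  if (FAILED(hr)) return -1;

  // Start up Media Foundation before calling any of its APIs.
  hr = MFStartup(MF_VERSION);
  if (SUCCEEDED(hr)) {
    // ... create the capture engine, enumerate devices, etc. ...
    MFShutdown();
  }

  CoUninitialize();
  return 0;
}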
The callback function for receiving camera frames
The IMFCaptureEngineOnSampleCallback interface is used to receive data from the capture engine. You can find it in the capture_engine_listener.h file.
class CaptureEngineListener : public IMFCaptureEngineOnSampleCallback,
                              public IMFCaptureEngineOnEventCallback {
 public:
  CaptureEngineListener(CaptureEngineObserver* observer) : observer_(observer) {
    assert(observer);
  }

  ~CaptureEngineListener() {}

  // Disallow copy and assign.
  CaptureEngineListener(const CaptureEngineListener&) = delete;
  CaptureEngineListener& operator=(const CaptureEngineListener&) = delete;

  // IUnknown
  STDMETHODIMP_(ULONG) AddRef();
  STDMETHODIMP_(ULONG) Release();
  STDMETHODIMP_(HRESULT) QueryInterface(const IID& riid, void** ppv);

  // IMFCaptureEngineOnEventCallback
  STDMETHODIMP OnEvent(IMFMediaEvent* pEvent);

  // IMFCaptureEngineOnSampleCallback
  STDMETHODIMP_(HRESULT) OnSample(IMFSample* pSample);

 private:
  CaptureEngineObserver* observer_;
  volatile ULONG ref_ = 0;
};
The OnSample method is called when a sample is ready. It is implemented in the capture_engine_listener.cpp file and lets us retrieve the image data from the IMFSample object.
HRESULT CaptureEngineListener::OnSample(IMFSample* sample) {
  HRESULT hr = S_OK;

  if (this->observer_ && sample) {
    LONGLONG raw_time_stamp = 0;
    sample->GetSampleTime(&raw_time_stamp);
    this->observer_->UpdateCaptureTime(
        static_cast<uint64_t>(raw_time_stamp / 10));

    if (!this->observer_->IsReadyForSample()) {
      return hr;
    }

    ComPtr<IMFMediaBuffer> buffer;
    hr = sample->ConvertToContiguousBuffer(&buffer);

    // Draw the frame.
    if (SUCCEEDED(hr) && buffer) {
      DWORD max_length = 0;
      DWORD current_length = 0;
      uint8_t* data;
      if (SUCCEEDED(buffer->Lock(&data, &max_length, &current_length))) {
        this->observer_->UpdateBuffer(data, current_length);
      }
      hr = buffer->Unlock();
    }
  }
  return hr;
}
Navigate to the capture_controller.cpp file, where the capture engine listener and the capture engine are initialized.
HRESULT CaptureControllerImpl::CreateCaptureEngine() {
  assert(!video_device_id_.empty());

  HRESULT hr = S_OK;
  ComPtr<IMFAttributes> attributes;

  if (!capture_engine_) {
    ComPtr<IMFCaptureEngineClassFactory> capture_engine_factory;

    hr = CoCreateInstance(CLSID_MFCaptureEngineClassFactory, nullptr,
                          CLSCTX_INPROC_SERVER,
                          IID_PPV_ARGS(&capture_engine_factory));
    if (FAILED(hr)) {
      return hr;
    }

    hr = capture_engine_factory->CreateInstance(CLSID_MFCaptureEngine,
                                                IID_PPV_ARGS(&capture_engine_));
    if (FAILED(hr)) {
      return hr;
    }
  }

  ...

  if (!capture_engine_callback_handler_) {
    capture_engine_callback_handler_ =
        ComPtr<CaptureEngineListener>(new CaptureEngineListener(this));
  }

  ...

  hr = capture_engine_->Initialize(capture_engine_callback_handler_.Get(),
                                   attributes.Get(), audio_source_.Get(),
                                   video_source_.Get());
  return hr;
}
Next, create a preview sink and pass the CaptureEngineListener instance to the sink.
HRESULT PreviewHandler::InitPreviewSink(
    IMFCaptureEngine* capture_engine, IMFMediaType* base_media_type,
    CaptureEngineListener* sample_callback) {
  ...
  ComPtr<IMFCaptureSink> capture_sink;

  hr = capture_engine->GetSink(MF_CAPTURE_ENGINE_SINK_TYPE_PREVIEW,
                               &capture_sink);
  if (FAILED(hr)) {
    return hr;
  }

  hr = capture_sink.As(&preview_sink_);
  if (FAILED(hr)) {
    preview_sink_ = nullptr;
    return hr;
  }

  ...

  hr = preview_sink_->SetSampleCallback(preview_sink_stream_index,
                                        sample_callback);
  if (FAILED(hr)) {
    preview_sink_ = nullptr;
    return hr;
  }

  return hr;
}
Finally, start the camera preview to receive camera frames.
HRESULT PreviewHandler::StartPreview(IMFCaptureEngine* capture_engine,
                                     IMFMediaType* base_media_type,
                                     CaptureEngineListener* sample_callback) {
  assert(capture_engine);
  assert(base_media_type);

  HRESULT hr =
      InitPreviewSink(capture_engine, base_media_type, sample_callback);
  if (FAILED(hr)) {
    return hr;
  }

  preview_state_ = PreviewState::kStarting;
  return capture_engine->StartPreview();
}
Using a texture to display the camera preview
To render the C++ image data, we must make use of flutter::TextureVariant, FlutterDesktopPixelBuffer, and TextureRegistrar.
const FlutterDesktopPixelBuffer* TextureHandler::ConvertPixelBufferForFlutter(
    size_t target_width, size_t target_height) {
  std::unique_lock<std::mutex> buffer_lock(buffer_mutex_);
  if (!TextureRegistered()) {
    return nullptr;
  }

  const uint32_t bytes_per_pixel = 4;
  const uint32_t pixels_total = preview_frame_width_ * preview_frame_height_;
  const uint32_t data_size = pixels_total * bytes_per_pixel;

  if (data_size > 0 && source_buffer_.size() == data_size) {
    if (dest_buffer_.size() != data_size) {
      dest_buffer_.resize(data_size);
    }

    MFVideoFormatRGB32Pixel* src =
        reinterpret_cast<MFVideoFormatRGB32Pixel*>(source_buffer_.data());
    FlutterDesktopPixel* dst =
        reinterpret_cast<FlutterDesktopPixel*>(dest_buffer_.data());

    for (uint32_t y = 0; y < preview_frame_height_; y++) {
      for (uint32_t x = 0; x < preview_frame_width_; x++) {
        uint32_t sp = (y * preview_frame_width_) + x;
        if (mirror_preview_) {
          // Mirror the preview horizontally.
          uint32_t tp =
              (y * preview_frame_width_) + ((preview_frame_width_ - 1) - x);
          dst[tp].r = src[sp].r;
          dst[tp].g = src[sp].g;
          dst[tp].b = src[sp].b;
          dst[tp].a = 255;
        } else {
          dst[sp].r = src[sp].r;
          dst[sp].g = src[sp].g;
          dst[sp].b = src[sp].b;
          dst[sp].a = 255;
        }
      }
    }

    if (!flutter_desktop_pixel_buffer_) {
      flutter_desktop_pixel_buffer_ =
          std::make_unique<FlutterDesktopPixelBuffer>();

      // Unlock the buffer mutex after the engine has copied the pixels.
      flutter_desktop_pixel_buffer_->release_callback =
          [](void* release_context) {
            auto mutex = reinterpret_cast<std::mutex*>(release_context);
            mutex->unlock();
          };
    }

    flutter_desktop_pixel_buffer_->buffer = dest_buffer_.data();
    flutter_desktop_pixel_buffer_->width = preview_frame_width_;
    flutter_desktop_pixel_buffer_->height = preview_frame_height_;

    // Releases the unique_lock; the mutex pointer is handed to the engine
    // as the release context and unlocked in the release callback above.
    flutter_desktop_pixel_buffer_->release_context = buffer_lock.release();

    return flutter_desktop_pixel_buffer_.get();
  }
  return nullptr;
}
int64_t TextureHandler::RegisterTexture() {
  if (!texture_registrar_) {
    return -1;
  }

  texture_ =
      std::make_unique<flutter::TextureVariant>(flutter::PixelBufferTexture(
          [this](size_t width,
                 size_t height) -> const FlutterDesktopPixelBuffer* {
            return this->ConvertPixelBufferForFlutter(width, height);
          }));

  texture_id_ = texture_registrar_->RegisterTexture(texture_.get());
  return texture_id_;
}
A flutter::PixelBufferTexture is constructed to create a new flutter::TextureVariant that uses a pixel buffer as its source data. PixelBufferTexture takes a lambda as its argument, which the Flutter engine calls whenever it needs the pixels as a FlutterDesktopPixelBuffer object. Here, ConvertPixelBufferForFlutter is the function responsible for creating and returning that FlutterDesktopPixelBuffer.
To show a live camera stream, we need to keep updating the image data and call MarkTextureFrameAvailable to notify the Flutter engine to redraw the texture.
void TextureHandler::OnBufferUpdated() {
  if (TextureRegistered()) {
    texture_registrar_->MarkTextureFrameAvailable(texture_id_);
  }
}

bool TextureHandler::UpdateBuffer(uint8_t* data, uint32_t data_length) {
  {
    const std::lock_guard<std::mutex> lock(buffer_mutex_);
    if (!TextureRegistered()) {
      return false;
    }

    if (source_buffer_.size() != data_length) {
      source_buffer_.resize(data_length);
    }
    std::copy(data, data + data_length, source_buffer_.data());
  }
  OnBufferUpdated();
  return true;
}
The ConvertPixelBufferForFlutter function will be called each time the texture needs to be updated with new source_buffer_ data.
Once a native texture is created, the texture ID is then passed as a camera ID to the Flutter layer.
void CameraImpl::OnCreateCaptureEngineSucceeded(int64_t texture_id) {
  // Use texture id as camera id.
  camera_id_ = texture_id;

  auto pending_result =
      GetPendingResultByType(PendingResultType::kCreateCamera);
  if (pending_result) {
    pending_result->Success(EncodableMap(
        {{EncodableValue("cameraId"), EncodableValue(texture_id)}}));
  }
}
In the Flutter app, we can create a Texture widget with the camera ID to display the camera preview.
@override
Widget buildPreview(int cameraId) {
  return Texture(textureId: cameraId);
}
How to Port Flutter Windows Camera Plugin to Support Image Streaming
At present, the camera frames delivered by the callback are only used for the preview display. If we wish to perform image processing on them, we must implement a new callback function to obtain frame copies.
Let's add a new virtual function OnStreamedFrameAvailable to the CaptureControllerListener class.
class CaptureControllerListener {
 public:
  virtual ~CaptureControllerListener() = default;
  ...
  virtual void OnStreamedFrameAvailable(uint8_t* data,
                                        uint32_t data_length) = 0;
};
Then declare the function in camera.h and implement it in camera.cpp.
constexpr char kStreamedFrameAvailable[] = "frame_available";

void CameraImpl::OnStreamedFrameAvailable(uint8_t* data,
                                          uint32_t data_length) {
  if (messenger_ && camera_id_ >= 0) {
    auto channel = GetMethodChannel();

    // Copy the frame so the Dart side receives its own buffer.
    std::vector<uint8_t> buffer(data, data + data_length);

    std::unique_ptr<EncodableValue> message_data =
        std::make_unique<EncodableValue>(EncodableMap(
            {{EncodableValue("bytes"), EncodableValue(std::move(buffer))}}));

    channel->InvokeMethod(kStreamedFrameAvailable, std::move(message_data));
  }
}
Afterwards, add Dart code to process the corresponding method channel call:
class FrameAvailabledEvent extends CameraEvent {
  const FrameAvailabledEvent(int cameraId, this.bytes) : super(cameraId);

  FrameAvailabledEvent.fromJson(Map<String, dynamic> json)
      : bytes = json['bytes'] as Uint8List,
        super(json['cameraId'] as int);

  final Uint8List bytes;

  Map<String, dynamic> toJson() =>
      <String, Object?>{'cameraId': cameraId, 'bytes': bytes};

  @override
  bool operator ==(Object other) =>
      identical(this, other) ||
      super == other &&
          other is FrameAvailabledEvent &&
          runtimeType == other.runtimeType &&
          bytes == other.bytes;

  @override
  int get hashCode => Object.hash(super.hashCode, bytes);
}
@visibleForTesting
Future<dynamic> handleCameraMethodCall(MethodCall call, int cameraId) async {
  switch (call.method) {
    ...
    case 'frame_available':
      final Map<String, Object?> arguments =
          (call.arguments as Map<Object?, Object?>).cast<String, Object?>();
      Uint8List bytes = arguments['bytes']! as Uint8List;
      cameraEventStreamController.add(
        FrameAvailabledEvent(cameraId, bytes),
      );
      break;
    default:
      throw UnimplementedError();
  }
}
To acquire a copy of the camera frames, we register the CaptureControllerListener with the TextureHandler object.
void SetCaptureControllerListener(
    CaptureControllerListener* capture_controller_listener) {
  capture_controller_listener_ = capture_controller_listener;
}

const FlutterDesktopPixelBuffer* TextureHandler::ConvertPixelBufferForFlutter(
    size_t target_width, size_t target_height) {
  ...
  // Hand the converted RGBA frame to the listener for streaming.
  if (capture_controller_listener_) {
    capture_controller_listener_->OnStreamedFrameAvailable(
        dest_buffer_.data(),
        preview_frame_width_ * preview_frame_height_ * 4);
  }

  flutter_desktop_pixel_buffer_->release_context = buffer_lock.release();

  return flutter_desktop_pixel_buffer_.get();
}
After returning the image data from C++ to Dart, you can write the following code to process it:
void _onFrameAvailable(FrameAvailabledEvent event) {
  if (mounted) {
    Map<String, dynamic> map = event.toJson();
    final Uint8List? data = map['bytes'] as Uint8List?;
    // image processing
  }
}

StreamSubscription<FrameAvailabledEvent>? _frameAvailableStreamSubscription;

_frameAvailableStreamSubscription?.cancel();
_frameAvailableStreamSubscription =
    (CameraPlatform.instance as CameraWindows)
        .onFrameAvailable(cameraId)
        .listen(_onFrameAvailable);
Note: To keep the main thread of a Flutter application responsive, move heavy computations to a separate worker thread. This can be achieved with either a Dart isolate or a native thread implemented in platform-specific code.
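As an illustration of the native-thread option, here is a minimal C++ sketch. It is not part of the plugin; FrameWorker, Enqueue, and ProcessFrame are hypothetical names. The capture callback hands a copy of the latest frame to a worker thread, which does the heavy processing off the UI and capture threads:

#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <thread>
#include <vector>

// A frame consumer that processes camera buffers off the capture thread.
class FrameWorker {
 public:
  FrameWorker() : running_(true), thread_([this] { Run(); }) {}

  ~FrameWorker() {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      running_ = false;
    }
    cv_.notify_one();
    thread_.join();
  }

  // Called from the capture callback; copies the frame and returns quickly.
  void Enqueue(const uint8_t* data, uint32_t length) {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      // Keep only the latest frame to avoid unbounded memory growth.
      latest_.assign(data, data + length);
      has_frame_ = true;
    }
    cv_.notify_one();
  }

 private:
  void Run() {
    std::vector<uint8_t> frame;
    while (true) {
      {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return has_frame_ || !running_; });
        if (!running_) return;
        frame.swap(latest_);
        has_frame_ = false;
      }
      // Heavy work (e.g. barcode decoding) happens here, off the UI thread.
      ProcessFrame(frame);
    }
  }

  void ProcessFrame(const std::vector<uint8_t>& frame) {
    // Placeholder for image processing.
  }

  std::mutex mutex_;
  std::condition_variable cv_;
  bool running_;
  bool has_frame_ = false;
  std::vector<uint8_t> latest_;
  std::thread thread_;
};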
Steps to Build a Windows Desktop Barcode and QR Code Scanner in Flutter
- Add the following dependencies to the pubspec.yaml file of your Flutter project.
flutter_barcode_sdk: ^2.2.2
camera_windows:
  git:
    url: https://github.com/yushulx/flutter_camera_windows.git
- Initialize the barcode reader with a trial license of Dynamsoft Barcode Reader SDK.
Future<void> initBarcodeSDK() async {
  _barcodeReader = FlutterBarcodeSdk();
  await _barcodeReader.setLicense(
      'DLS2eyJoYW5kc2hha2VDb2RlIjoiMjAwMDAxLTE2NDk4Mjk3OTI2MzUiLCJvcmdhbml6YXRpb25JRCI6IjIwMDAwMSIsInNlc3Npb25QYXNzd29yZCI6IndTcGR6Vm05WDJrcEQ5YUoifQ==');
  await _barcodeReader.init();
  await _barcodeReader.setBarcodeFormats(BarcodeFormat.ALL);
  int ret = await _barcodeReader.setParameters(Template.balanced);
  print('Set parameters: $ret');
}
- Get the camera list and open a camera.
List<CameraDescription> _cameras = <CameraDescription>[];
String _selectedItem = '';
final List<String> _cameraNames = [];
List<BarcodeResult>? _results;
Size? _previewSize;
int _cameraId = -1;
bool _initialized = false;
StreamSubscription<CameraErrorEvent>? _errorStreamSubscription;
StreamSubscription<CameraClosingEvent>? _cameraClosingStreamSubscription;
StreamSubscription<FrameAvailabledEvent>? _frameAvailableStreamSubscription;
bool _isScanAvailable = true;
ResolutionPreset _resolutionPreset = ResolutionPreset.veryHigh;
bool _loading = true;

Future<void> initCamera() async {
  try {
    _cameras = await CameraPlatform.instance.availableCameras();
    _cameraNames.clear();
    for (CameraDescription description in _cameras) {
      _cameraNames.add(description.name);
    }
    _selectedItem = _cameraNames[0];
  } on PlatformException catch (e) {
    debugPrint('Failed to get cameras: ${e.code}: ${e.message}');
  }

  toggleCamera(0);

  setState(() {
    _loading = false;
  });
}

Future<void> toggleCamera(int index) async {
  assert(!_initialized);

  if (_cameras.isEmpty) {
    return;
  }

  int cameraId = -1;
  try {
    final CameraDescription camera = _cameras[index];

    cameraId = await CameraPlatform.instance.createCamera(
      camera,
      _resolutionPreset,
    );

    _errorStreamSubscription?.cancel();
    _errorStreamSubscription = CameraPlatform.instance
        .onCameraError(cameraId)
        .listen(_onCameraError);

    _cameraClosingStreamSubscription?.cancel();
    _cameraClosingStreamSubscription = CameraPlatform.instance
        .onCameraClosing(cameraId)
        .listen(_onCameraClosing);

    _frameAvailableStreamSubscription?.cancel();
    _frameAvailableStreamSubscription =
        (CameraPlatform.instance as CameraWindows)
            .onFrameAvailable(cameraId)
            .listen(_onFrameAvailable);

    final Future<CameraInitializedEvent> initialized =
        CameraPlatform.instance.onCameraInitialized(cameraId).first;

    await CameraPlatform.instance.initializeCamera(
      cameraId,
    );

    final CameraInitializedEvent event = await initialized;
    _previewSize = Size(
      event.previewWidth,
      event.previewHeight,
    );

    if (mounted) {
      setState(() {
        _initialized = true;
        _cameraId = cameraId;
      });
    }
  } on CameraException catch (e) {
    try {
      if (cameraId >= 0) {
        await CameraPlatform.instance.dispose(cameraId);
      }
    } on CameraException catch (e) {
      debugPrint('Failed to dispose camera: ${e.code}: ${e.description}');
    }

    // Reset state.
    if (mounted) {
      setState(() {
        _initialized = false;
        _cameraId = -1;
        _previewSize = null;
      });
    }
  }
}
- Invoke the decodeImageBuffer method to read barcodes and QR codes from the camera preview image. The decodeImageBuffer method is asynchronous, implemented with a C++ thread, so you need to set a flag to ensure that it is not called repeatedly.
void _onFrameAvailable(FrameAvailabledEvent event) {
  if (mounted) {
    Map<String, dynamic> map = event.toJson();
    final Uint8List? data = map['bytes'] as Uint8List?;
    if (data != null) {
      if (!_isScanAvailable) {
        return;
      }

      _isScanAvailable = false;
      _barcodeReader
          .decodeImageBuffer(
              data,
              _previewSize!.width.toInt(),
              _previewSize!.height.toInt(),
              _previewSize!.width.toInt() * 4,
              ImagePixelFormat.IPF_ARGB_8888.index)
          .then((results) {
        _results = results;
        setState(() {});
        _isScanAvailable = true;
      }).catchError((error) {
        _isScanAvailable = true;
      });
    }
  }
}
- Build the app layout:
Widget _buildPreview() {
  return CameraPlatform.instance.buildPreview(_cameraId);
}

@override
Widget build(BuildContext context) {
  return WillPopScope(
      onWillPop: () async {
        return true;
      },
      child: Scaffold(
          appBar: AppBar(
            title: const Text('Scanner'),
          ),
          body: Center(
            child: Stack(
              children: <Widget>[
                SizedBox(
                    width: MediaQuery.of(context).size.width,
                    height: MediaQuery.of(context).size.height -
                        MediaQuery.of(context).padding.top,
                    child: FittedBox(
                      fit: BoxFit.contain,
                      child: Stack(
                        children: [
                          _cameraId < 0
                              ? Image.asset(
                                  'images/default.png',
                                )
                              : SizedBox(
                                  width: _previewSize == null
                                      ? 640
                                      : _previewSize!.width,
                                  height: _previewSize == null
                                      ? 480
                                      : _previewSize!.height,
                                  child: _buildPreview()),
                          Positioned(
                            top: 0.0,
                            right: 0.0,
                            bottom: 0.0,
                            left: 0.0,
                            child: _results == null || _results!.isEmpty
                                ? Container(
                                    color: Colors.black.withOpacity(0.1),
                                    child: const Center(
                                      child: Text(
                                        'No barcode detected',
                                        style: TextStyle(
                                          color: Colors.white,
                                          fontSize: 20.0,
                                          fontWeight: FontWeight.bold,
                                        ),
                                      ),
                                    ))
                                : createOverlay(_results!),
                          ),
                        ],
                      ),
                    )),
                ...
              ],
            ),
          )));
}
- Run the Windows desktop QR code scanner app:
flutter run -d windows
The full code is available on GitHub.
Known Issue
Don't use the barcode SDK plugin in an isolate; otherwise, you will encounter the following error:
[ERROR:flutter/runtime/dart_isolate.cc(1098)] Unhandled exception:
Binding has not yet been initialized.
The "instance" getter on the ServicesBinding binding mixin is only available once that binding has been initialized.
Typically, this is done by calling "WidgetsFlutterBinding.ensureInitialized()" or "runApp()" (the latter calls the former). Typically this call is done in the "void main()" method. The "ensureInitialized" method is idempotent; calling it multiple times is not harmful. After calling that method, the "instance" getter will return the binding.
In a test, one can call "TestWidgetsFlutterBinding.ensureInitialized()" as the first line in the test's "main()" method to initialize the binding.
If ServicesBinding is a custom binding mixin, there must also be a custom binding class, like WidgetsFlutterBinding, but that mixes in the selected binding, and that is the class that must be constructed before using the "instance" getter.
#0 BindingBase.checkInstance.<anonymous closure> (package:flutter/src/foundation/binding.dart:284:9)
#1 BindingBase.checkInstance (package:flutter/src/foundation/binding.dart:366:6)
#2 ServicesBinding.instance (package:flutter/src/services/binding.dart:54:54)
#3 MethodChannel.binaryMessenger (package:flutter/src/services/platform_channel.dart:254:45)
#4 MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:289:15)
#5 MethodChannel.invokeMethod (package:flutter/src/services/platform_channel.dart:472:12)
#6 FlutterBarcodeSdk.getParameters (package:flutter_barcode_sdk/flutter_barcode_sdk.dart:97:27)
#7 decodeTask (package:camera_windows_example/main.dart:50:41)
<asynchronous suspension>