Fast, native React Native document scanner for iOS and Android, using Apple VisionKit on iOS and Google ML Kit on Android. Features automatic document detection, edge and perspective correction, multi-page scanning, configurable image quality, optional Base64 output, and support for the React Native New Architecture (Fabric/TurboModules on iOS).
- iOS: Uses VisionKit framework and VNDocumentCameraViewController
- Android: Uses ML Kit Document Scanner API
| iOS Demo | Android Demo |
|---|---|
| ![]() | ![]() |
- Cross-platform support (iOS 13+ and Android API 21+)
- iOS: Full support for the new React Native architecture (Fabric/TurboModules)
- Automatic document detection and scanning
- Multi-page document scanning
- Configurable image quality
- Optional base64 encoding
- Platform parity - same API for both platforms
Keywords: React Native document scanner, VisionKit document scanner, ML Kit document scanner, scan documents React Native, edge detection, perspective correction, multi-page scanner
```sh
npm install @dariyd/react-native-document-scanner
```

or with yarn:

```sh
yarn add @dariyd/react-native-document-scanner
```

To install directly from GitHub:

```sh
npm install https://github.com/dariyd/react-native-document-scanner.git
```

or

```sh
yarn add https://github.com/dariyd/react-native-document-scanner.git
```

iOS: install the native pods:

```sh
cd ios && pod install
```

Android: No additional steps required. The ML Kit dependency will be automatically included.
Add the `NSCameraUsageDescription` key to your `Info.plist`:

```xml
<key>NSCameraUsageDescription</key>
<string>We need access to your camera to scan documents</string>
```

Add the camera permission to your `AndroidManifest.xml`:

```xml
<uses-permission android:name="android.permission.CAMERA" />
```

The module automatically requests camera permission when launching the scanner.
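If you prefer to prompt the user before opening the scanner (for example, to show your own rationale UI first), you can check the permission yourself with React Native's built-in `PermissionsAndroid` API. This is optional since the module already requests it automatically; the sketch below is an illustration only and uses the import path shown in the Usage section.

```ts
import { PermissionsAndroid, Platform } from 'react-native';
import { launchScanner } from 'react-native-document-scanner';

// Optional pre-check: request the camera permission up front on Android,
// then launch the scanner. On iOS the system shows its own prompt.
async function scanWithPermissionCheck() {
  if (Platform.OS === 'android') {
    const status = await PermissionsAndroid.request(
      PermissionsAndroid.PERMISSIONS.CAMERA,
    );
    if (status !== PermissionsAndroid.RESULTS.GRANTED) {
      console.warn('Camera permission denied');
      return;
    }
  }
  return launchScanner();
}
```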
This module requires React Native 0.77.3 or higher and supports the new architecture on iOS, while using the stable old architecture on Android.
iOS: Full support for Fabric and TurboModules - automatically detected and enabled when you enable new architecture in your project.
Android: Uses the stable bridge implementation for maximum compatibility. New architecture support is planned for a future release.
- React Native 0.77.3 or higher
- React 18.2.0 or higher
- iOS 13.0 or higher
- Android:
- Minimum SDK: API 21 (Android 5.0)
- Target SDK: API 35 (Android 15) - required by Google Play Store
- Compile SDK: API 35
✅ iOS: Fully supported - set `RCT_NEW_ARCH_ENABLED=1` in your Podfile or build settings
❌ Android: Not yet supported
- The module is implemented as a Java-only TurboModule on Android, which requires additional C++ bridging setup
- Keep `newArchEnabled=false` in your `gradle.properties` for now
- The module works perfectly with the old architecture on Android
The iOS implementation will automatically use Fabric/TurboModules when enabled, while Android will continue to use the stable bridge implementation.
```ts
import { launchScanner } from 'react-native-document-scanner';
// Basic usage
const result = await launchScanner();
// With options
const result = await launchScanner({
quality: 0.8,
includeBase64: false,
});
// With callback (optional)
launchScanner({ quality: 0.9 }, (result) => {
if (result.didCancel) {
console.log('User cancelled');
} else if (result.error) {
console.log('Error:', result.errorMessage);
} else {
console.log('Scanned images:', result.images);
}
});
```

```ts
import { launchScanner } from 'react-native-document-scanner';
```

Launch the scanner to scan documents.
See Options below for the available options.
The callback (or the returned promise) will be called with a response object; refer to The Response Object below.
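Since `launchScanner` also returns a promise (as shown in the basic usage above), the same response can be handled with async/await. A minimal sketch, assuming the fields listed in the tables below:

```ts
import { launchScanner } from 'react-native-document-scanner';

async function scanDocuments() {
  const result = await launchScanner({ quality: 0.8 });

  if (result.didCancel) {
    console.log('User cancelled the scan');
    return [];
  }
  if (result.error) {
    console.warn('Scanner error:', result.errorMessage);
    return [];
  }
  // Each entry describes one scanned page (uri, width, height, ...)
  return result.images ?? [];
}
```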
| Option | iOS | Android | Description |
|---|---|---|---|
| quality | ✅ | ✅ | Number between 0 and 1 for image quality (default: 1). Lower values reduce file size |
| includeBase64 | ✅ | ✅ | If `true`, includes a base64 string of the image (avoid on large image files due to the performance cost) |
| key | iOS | Android | Description |
|---|---|---|---|
| didCancel | ✅ | ✅ | `true` if the user cancelled the process |
| error | ✅ | ✅ | `true` if an error occurred |
| errorMessage | ✅ | ✅ | Description of the error; use it for debugging purposes only |
| images | ✅ | ✅ | Array of the scanned images, refer to Image Object |
| key | iOS | Android | Description |
|---|---|---|---|
| base64 | ✅ | ✅ | The base64 string of the image (if includeBase64 is true) |
| uri | ✅ | ✅ | The file URI in app-specific cache storage |
| width | ✅ | ✅ | Image width in pixels |
| height | ✅ | ✅ | Image height in pixels |
| fileSize | ✅ | ✅ | The file size in bytes |
| type | ✅ | ✅ | The file MIME type (e.g., "image/jpeg") |
| fileName | ✅ | ✅ | The file name |
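For example, the returned `uri`, `width`, and `height` can be fed straight into a React Native `Image`. A minimal sketch; the `ScannedImage` type below is a local description of the fields in the table above, not an export of the library:

```tsx
import React from 'react';
import { Image, ScrollView } from 'react-native';

// Shape implied by the Image Object table above (assumed, not exported by the library).
type ScannedImage = {
  uri: string;
  width: number;
  height: number;
  base64?: string;
  type?: string;
};

// Render each scanned page, preserving its aspect ratio.
export function ScannedPages({ images }: { images: ScannedImage[] }) {
  return (
    <ScrollView>
      {images.map((img, index) => (
        <Image
          key={index}
          source={{ uri: img.uri }}
          style={{ width: '100%', aspectRatio: img.width / img.height }}
          resizeMode="contain"
        />
      ))}
    </ScrollView>
  );
}
```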
While both platforms provide similar functionality, there are some minor differences:
- Uses native VisionKit framework
- Requires iOS 13.0 or higher
- Supports PNG format for quality = 1.0, JPEG for quality < 1.0
- Uses Google ML Kit Document Scanner
- Minimum SDK: API level 21 (Android 5.0)
- Target SDK: API level 35 (Android 15) - Google Play Store requirement
- Always outputs JPEG format
- Requires Google Play Services
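Because iOS can return PNG (at quality 1.0) while Android always returns JPEG, avoid hard-coding a file extension when saving or uploading a scan; derive it from the reported MIME type instead. A minimal sketch (the upload URL and form field name are placeholders, not part of the library):

```ts
// Derive the file extension from the MIME type reported in the result,
// since the format differs between platforms (PNG on iOS at quality 1.0, JPEG otherwise).
function extensionFor(mimeType?: string): string {
  return mimeType === 'image/png' ? 'png' : 'jpg';
}

async function uploadScan(image: { uri: string; type?: string; fileName?: string }) {
  const name = image.fileName ?? `scan.${extensionFor(image.type)}`;

  const form = new FormData();
  // React Native's fetch accepts { uri, type, name } file descriptors in FormData.
  form.append('document', {
    uri: image.uri,
    type: image.type ?? 'image/jpeg',
    name,
  } as any);

  // 'https://example.com/upload' is a placeholder endpoint.
  await fetch('https://example.com/upload', { method: 'POST', body: form });
}
```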
If you encounter issues with ML Kit on Android, ensure that:
- Google Play Services is installed on the device/emulator
- Your `compileSdkVersion` is 35 or higher
- Your `targetSdkVersion` is 35 (required by Google Play Store)
- Your `minSdkVersion` is 21 or higher
Ensure you've added the `NSCameraUsageDescription` key to your `Info.plist`.
Check the `example/` directory for a complete example app demonstrating the scanner.
- iOS implementation: react-native-image-picker
- Android ML Kit: Google ML Kit Document Scanner
MIT

