<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Amit Makhija</title>
    <description>The latest articles on DEV Community by Amit Makhija (@amit-makhija30).</description>
    <link>https://dev.to/amit-makhija30</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F88505%2F10f35db8-da60-48aa-af47-e2e4b620ae5d.jpeg</url>
      <title>DEV Community: Amit Makhija</title>
      <link>https://dev.to/amit-makhija30</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/amit-makhija30"/>
    <language>en</language>
    <item>
      <title>How to Integrate the Dual Camera Video Recording Feature in your iOS App?</title>
      <dc:creator>Amit Makhija</dc:creator>
      <pubDate>Fri, 20 Sep 2019 10:58:51 +0000</pubDate>
      <link>https://dev.to/amitspaceo/how-to-integrate-the-dual-camera-video-recording-feature-in-your-ios-app-1li3</link>
      <guid>https://dev.to/amitspaceo/how-to-integrate-the-dual-camera-video-recording-feature-in-your-ios-app-1li3</guid>
      <description>&lt;p&gt;&lt;em&gt;This blog is written with the help of Hitesh Trivedi, who has over 9 years of experience in the iOS app development. He has guided to develop over 100 iPhone apps with unique features and functionalities. He has special expertise in Swift and Objective-C.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Apple announced a new version of its OS (iOS 13), updates (for instance, Siri Voice Experiences), technologies, and features during the Apple Worldwide Developers Conference (WWDC) 2019. After getting a few queries from our clients about integrating dual recording in their iPhone apps, we decided to write this blog. It is a complete tutorial on how to integrate this feature seamlessly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you will learn in this tutorial&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to integrate the dual recording feature in an iOS app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are an iOS app development company that is always keen on learning and implementing the latest features. Our iOS developers are always experimenting with features to bring out the best in the apps we develop. In this tutorial, we have explained how to integrate the multi-camera video recording feature in an iPhone app.&lt;/p&gt;

&lt;p&gt;It’s been a little while since the WWDC happened in June this year. Just like every year, Apple made a lot of exciting announcements. A few of these announcements include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;iOS 13&lt;/li&gt;
&lt;li&gt;iPadOS&lt;/li&gt;
&lt;li&gt;macOS Catalina&lt;/li&gt;
&lt;li&gt;tvOS 13&lt;/li&gt;
&lt;li&gt;watchOS 6&lt;/li&gt;
&lt;li&gt;New Mac Pro and Pro Display XDR&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;iOS 13 has generated a lot of buzz recently, and rightly so. It is equipped with a lot of new and updated features and functionalities. Let us look at some of them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s new in iOS 13?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.spaceotechnologies.com/dark-mode-ios-13-tutorial/" rel="noopener noreferrer"&gt;Dark Mode&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Swipe Keyboard&lt;/li&gt;
&lt;li&gt;Revamped Reminder App&lt;/li&gt;
&lt;li&gt;ARKit 3 (AR Quick Look, People Occlusion, Motion Capture, Reality Composer, track multiple faces)&lt;/li&gt;
&lt;li&gt;Siri (Voice Experiences, Shortcuts app)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.spaceotechnologies.com/sign-in-with-apple-ios-tutorial/" rel="noopener noreferrer"&gt;Sign In with Apple&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Multiple UI Instances&lt;/li&gt;
&lt;li&gt;Camera Capture (record video from the front and back cameras simultaneously)&lt;/li&gt;
&lt;li&gt;PencilKit&lt;/li&gt;
&lt;li&gt;Combine framework&lt;/li&gt;
&lt;li&gt;Core Data (sync Core Data to CloudKit)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.spaceotechnologies.com/ios-background-task-framework-app-update/" rel="noopener noreferrer"&gt;Background Tasks&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this iOS tutorial, we will talk about the Camera Capture functionality, where we have used AVMultiCamPiP to capture and record from multiple cameras (front and back).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-camera recording using Camera Capture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before iOS 13, Apple did not allow recording videos using the front and back cameras simultaneously. Now, users can perform dual camera recording with the back and front cameras at the same time. This is done through Camera Capture.&lt;/p&gt;

&lt;p&gt;PiP in AVMultiCamPiP stands for ‘picture in picture’. With this functionality, one video output is shown in full screen while the second appears as a small overlay, and the user can switch focus between the two at any time.&lt;/p&gt;

&lt;p&gt;We created a project called “SODualCamera”, aiming to demonstrate the iOS 13 Camera Capture feature in Xcode 11. Let’s see the &lt;a href="https://www.spaceotechnologies.com/integrate-multi-camera-video-recording-ios/" rel="noopener noreferrer"&gt;step-by-step guide to integrate this feature&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps to integrate the multi-camera recording feature in an iOS app&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1) Create a new project using Xcode 11&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fjlqoiobik824rjy4cyf1.png" alt="Create a new project"&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2) Select “Single View App” in the iOS section and enter the project name. We have kept it as ‘SODualCamera’.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fnlb5wk7ja43jeoe0w422.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fnlb5wk7ja43jeoe0w422.png" alt="SelectSingle View App"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F0ga9b0fm0my7npnr5071.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F0ga9b0fm0my7npnr5071.png" alt="Enter project name"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) Go to the project folder and open the Main.storyboard file. Add a StackView as shown in the figure. Inside the StackView, add two UIViews, each with a label.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F2k1e5odxm9pcywqa3e3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F2k1e5odxm9pcywqa3e3k.png" alt="Add Stackview"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4) Go to the Xcode File menu, select New, then File, choose Swift File as shown in the figure, and click the Next button.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F4slf5hvh7lazkue7mjwa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F4slf5hvh7lazkue7mjwa.png" alt="Select Swift file from menu"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5) Now enter the file name as ViewPreview.swift.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F1cwhffk7t2t1u4m65mum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F1cwhffk7t2t1u4m65mum.png" alt="Enter file name"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6) Now open ViewPreview.swift and add the following code:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;
import UIKit
import AVFoundation
 
class ViewPreview: UIView {
    var videoPreviewLayer: AVCaptureVideoPreviewLayer {
        guard let layer = layer as? AVCaptureVideoPreviewLayer else {
            fatalError("Expected `AVCaptureVideoPreviewLayer` type for layer. Check ViewPreview.layerClass implementation.")
        }
        }
        
        layer.videoGravity = .resizeAspect
        return layer
    }
    
    override class var layerClass: AnyClass {
        return AVCaptureVideoPreviewLayer.self  
    }
}
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;7) Go to Main.storyboard and select the StackView. Under the StackView, select the UIView and set its custom class to “ViewPreview” as shown in the figure.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fraitc9pg7a25xyleiotx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fraitc9pg7a25xyleiotx.png" alt="select stackview"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Repeat the same process for the second view.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8) Create outlets for both views. We are going to use these views to preview the camera output.&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;
@IBOutlet weak var backPreview: ViewPreview!
@IBOutlet weak var frontPreview: ViewPreview!
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;9) As we are aiming to capture video, we need to import the AVFoundation framework.&lt;/strong&gt;&lt;br&gt;
Since we are saving the output video to the user’s photo library, we also need to import the Photos framework in ViewController.swift.&lt;/p&gt;

&lt;pre&gt;
import AVFoundation
import Photos
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;10) Create an AVCaptureMultiCamSession object to run the dual video session.&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt; var dualVideoSession = AVCaptureMultiCamSession() &lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;11) Declare an audio device input variable to record audio while the dual video session is running:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt; var audioDeviceInput: AVCaptureDeviceInput? &lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;12) Declare the following variables for the front and back camera inputs, outputs, and preview layers:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;
 var backDeviceInput:AVCaptureDeviceInput?
 var backVideoDataOutput = AVCaptureVideoDataOutput()
 var backViewLayer:AVCaptureVideoPreviewLayer?
 var backAudioDataOutput = AVCaptureAudioDataOutput()

 var frontDeviceInput:AVCaptureDeviceInput? 
 var frontVideoDataOutput = AVCaptureVideoDataOutput()
 var frontViewLayer:AVCaptureVideoPreviewLayer?
 var frontAudioDataOutput = AVCaptureAudioDataOutput()
&lt;/pre&gt;
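
&lt;p&gt;Note: the dispatch queues referenced in the following steps (dualVideoSessionQueue and dualVideoSessionOutputQueue) are used but never declared in this tutorial. A minimal declaration might look like this (the queue labels are our assumption):&lt;/p&gt;

&lt;pre&gt;
 // Serial queue for configuring and starting the session off the main thread
 let dualVideoSessionQueue = DispatchQueue(label: "dualVideoSessionQueue")

 // Serial queue on which all video/audio sample buffer callbacks are delivered
 let dualVideoSessionOutputQueue = DispatchQueue(label: "dualVideoSessionOutputQueue")
&lt;/pre&gt;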

&lt;p&gt;&lt;strong&gt;13) In viewDidAppear, add the following code to detect whether the app is running on the Simulator.&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;
override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        
        #if targetEnvironment(simulator)
          let alertController = UIAlertController(title: "SODualCamera", message: "Please run on physical device", preferredStyle: .alert)
          alertController.addAction(UIAlertAction(title: "OK",style: .cancel, handler: nil))
          self.present(alertController, animated: true, completion: nil)
          return
        #endif
    }
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;14) Create a setup method to configure the video sessions and manage user permissions. If the app is running on the Simulator, return immediately.&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;
#if targetEnvironment(simulator)
            return
        #endif
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;15) Now check for video recording permission using this code:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
    // The user has previously granted access to the camera.
    configureDualVideo()
    
case .notDetermined:
    AVCaptureDevice.requestAccess(for: .video, completionHandler: { granted in
        if granted {
            self.configureDualVideo()
        }
    })
    
default:
    // The user has previously denied access.
    DispatchQueue.main.async {
        let changePrivacySetting = "Device doesn't have permission to use the camera, please change privacy settings"
        let message = NSLocalizedString(changePrivacySetting, comment: "Alert message when the user has denied access to the camera")
        let alertController = UIAlertController(title: "Error", message: message, preferredStyle: .alert)
        
        alertController.addAction(UIAlertAction(title: "OK", style: .cancel, handler: nil))
        
        alertController.addAction(UIAlertAction(title: "Settings", style: .default, handler: { _ in
            if let settingsURL = URL(string: UIApplication.openSettingsURLString) {
                UIApplication.shared.open(settingsURL, options: [:], completionHandler: nil)
            }
        }))
        
        self.present(alertController, animated: true, completion: nil)
    }
}
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;16) After getting the user permission to record video, we configure the video session parameters.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, we need to check whether the device supports a multi-cam session.&lt;/p&gt;

&lt;pre&gt;
if !AVCaptureMultiCamSession.isMultiCamSupported {
            DispatchQueue.main.async {
               let alertController = UIAlertController(title: "Error", message: "This device does not support the multi-cam feature", preferredStyle: .alert)
               alertController.addAction(UIAlertAction(title: "OK",style: .cancel, handler: nil))
               self.present(alertController, animated: true, completion: nil)
            }
            return
        }
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;17) Now, we set up the back camera first.&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;
func setUpBackCamera() -&amp;gt; Bool {
        //start configuring dual video session
        dualVideoSession.beginConfiguration()
            defer {
                //save configuration setting
                dualVideoSession.commitConfiguration()
            }
 
                
            //search back camera
            guard let backCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else {
                print("no back camera")
                return false
            }
            
            // append back camera input to dual video session
            do {
                backDeviceInput = try AVCaptureDeviceInput(device: backCamera)
                
                guard let backInput = backDeviceInput,dualVideoSession.canAddInput(backInput) else {
                    print("no back camera device input")
                    return false
                }
                dualVideoSession.addInputWithNoConnections(backInput)
            } catch {
                print("no back camera device input: \(error)")
                return false
            }
            
            // search back video port
            guard let backDeviceInput = backDeviceInput,
                let backVideoPort = backDeviceInput.ports(for: .video, sourceDeviceType: backCamera.deviceType, sourceDevicePosition: backCamera.position).first else {
                print("no back camera input's video port")
                return false
            }
            
            // append back video output
            guard dualVideoSession.canAddOutput(backVideoDataOutput) else {
                print("no back camera output")
                return false
            }
            dualVideoSession.addOutputWithNoConnections(backVideoDataOutput)
            backVideoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
            backVideoDataOutput.setSampleBufferDelegate(self, queue: dualVideoSessionOutputQueue)
            
            // connect back output to dual video connection
            let backOutputConnection = AVCaptureConnection(inputPorts: [backVideoPort], output: backVideoDataOutput)
            guard dualVideoSession.canAddConnection(backOutputConnection) else {
                print("no connection to the back camera video data output")
                return false
            }
            dualVideoSession.addConnection(backOutputConnection)
            backOutputConnection.videoOrientation = .portrait
 
            // connect back input to back layer
            guard let backLayer = backViewLayer else {
                return false
            }
            let backConnection = AVCaptureConnection(inputPort: backVideoPort, videoPreviewLayer: backLayer)
            guard dualVideoSession.canAddConnection(backConnection) else {
                print("no connection to the back camera video preview layer")
                return false
            }
            dualVideoSession.addConnection(backConnection)
        
        return true
    }
&lt;/pre&gt;

&lt;p&gt;We have now successfully configured the back camera for a video session.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;18) We need to follow the same process for the front camera setup.&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;
    func setUpFrontCamera() -&amp;gt; Bool {
            
              //start configuring dual video session
            dualVideoSession.beginConfiguration()
            defer {
              //save configuration setting
                dualVideoSession.commitConfiguration()
            }
            
            //search front camera for dual video session
            guard let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) else {
                print("no front camera")
                return false
            }
            
            // append front camera input to dual video session
            do {
                frontDeviceInput = try AVCaptureDeviceInput(device: frontCamera)
                
                guard let frontInput = frontDeviceInput, dualVideoSession.canAddInput(frontInput) else {
                    print("no front camera input")
                    return false
                }
                dualVideoSession.addInputWithNoConnections(frontInput)
            } catch {
                print("no front input: \(error)")
                return false
            }
            
            // search front video port for dual video session
            guard let frontDeviceInput = frontDeviceInput,
                let frontVideoPort = frontDeviceInput.ports(for: .video, sourceDeviceType: frontCamera.deviceType, sourceDevicePosition: frontCamera.position).first else {
                print("no front camera device input's video port")
                return false
            }
            
            // append front video output to dual video session
            guard dualVideoSession.canAddOutput(frontVideoDataOutput) else {
                print("no front camera video output")
                return false
            }
            dualVideoSession.addOutputWithNoConnections(frontVideoDataOutput)
            frontVideoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
            frontVideoDataOutput.setSampleBufferDelegate(self, queue: dualVideoSessionOutputQueue)
            
            // connect front output to dual video session
            let frontOutputConnection = AVCaptureConnection(inputPorts: [frontVideoPort], output: frontVideoDataOutput)
            guard dualVideoSession.canAddConnection(frontOutputConnection) else {
                print("no connection to the front video output")
                return false
            }
            dualVideoSession.addConnection(frontOutputConnection)
            frontOutputConnection.videoOrientation = .portrait
            frontOutputConnection.automaticallyAdjustsVideoMirroring = false
            frontOutputConnection.isVideoMirrored = true
 
            // connect front input to front layer
            guard let frontLayer = frontViewLayer else {
                return false
            }
            let frontLayerConnection = AVCaptureConnection(inputPort: frontVideoPort, videoPreviewLayer: frontLayer)
            guard dualVideoSession.canAddConnection(frontLayerConnection) else {
                print("no connection to front layer")
                return false
            }
            dualVideoSession.addConnection(frontLayerConnection)
            frontLayerConnection.automaticallyAdjustsVideoMirroring = false
            frontLayerConnection.isVideoMirrored = true
            
            return true
    }
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;19) After setting up the front and back cameras, we need to configure the audio.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, we need to find the audio device input and add it to the session. Then we find the audio ports for the front and back cameras and connect them to the front and back audio data outputs respectively.&lt;/p&gt;

&lt;p&gt;The code looks like this:&lt;/p&gt;

&lt;pre&gt;
func setUpAudio() -&amp;gt; Bool {
         //start configuring dual video session
        dualVideoSession.beginConfiguration()
        defer {
            //save configuration setting
 
            dualVideoSession.commitConfiguration()
        }
        
        // search audio device for dual video session
        guard let audioDevice = AVCaptureDevice.default(for: .audio) else {
            print("no microphone")
            return false
        }
        
        // append audio to dual video session
        do {
            audioDeviceInput = try AVCaptureDeviceInput(device: audioDevice)
            
            guard let audioInput = audioDeviceInput,
                dualVideoSession.canAddInput(audioInput) else {
                    print("no audio input")
                    return false
            }
            dualVideoSession.addInputWithNoConnections(audioInput)
        } catch {
            print("no audio input: \(error)")
            return false
        }
        
        //search audio port back
        guard let audioInputPort = audioDeviceInput,
            let backAudioPort = audioInputPort.ports(for: .audio, sourceDeviceType: audioDevice.deviceType, sourceDevicePosition: .back).first else {
            print("no back audio port")
            return false
        }
        
        // search audio port front
        guard let frontAudioPort = audioInputPort.ports(for: .audio, sourceDeviceType: audioDevice.deviceType, sourceDevicePosition: .front).first else {
            print("no front audio port")
            return false
        }
        
        // append back output to dual video session
        guard dualVideoSession.canAddOutput(backAudioDataOutput) else {
            print("no back audio data output")
            return false
        }
        dualVideoSession.addOutputWithNoConnections(backAudioDataOutput)
        backAudioDataOutput.setSampleBufferDelegate(self, queue: dualVideoSessionOutputQueue)
        
        // append front output to dual video session
        guard dualVideoSession.canAddOutput(frontAudioDataOutput) else {
            print("no front audio data output")
            return false
        }
        dualVideoSession.addOutputWithNoConnections(frontAudioDataOutput)
        frontAudioDataOutput.setSampleBufferDelegate(self, queue: dualVideoSessionOutputQueue)
        
        // add back output to dual video session
        let backOutputConnection = AVCaptureConnection(inputPorts: [backAudioPort], output: backAudioDataOutput)
        guard dualVideoSession.canAddConnection(backOutputConnection) else {
            print("no back audio connection")
            return false
        }
        dualVideoSession.addConnection(backOutputConnection)
        
        // add front output to dual video session
        let frontOutputConnection = AVCaptureConnection(inputPorts: [frontAudioPort], output: frontAudioDataOutput)
        guard dualVideoSession.canAddConnection(frontOutputConnection) else {
            print("no front audio connection")
            return false
        }
        dualVideoSession.addConnection(frontOutputConnection)
        
        return true
    }
&lt;/pre&gt;

&lt;p&gt;Now we have successfully configured the front camera, back camera, and audio.&lt;/p&gt;
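
&lt;p&gt;The configureDualVideo() method called from the permission check in step 15 is not shown in full in this tutorial. One plausible sketch, assuming it simply runs the three setup functions on the session queue, is:&lt;/p&gt;

&lt;pre&gt;
 // Sketch only: ties together the setup functions defined above.
 func configureDualVideo() {
     dualVideoSessionQueue.async {
         guard self.setUpBackCamera() else { return }
         guard self.setUpFrontCamera() else { return }
         guard self.setUpAudio() else { return }
     }
 }
&lt;/pre&gt;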

&lt;p&gt;&lt;strong&gt;20) Now when we start the session, it will deliver output as CMSampleBuffer objects. We have to collect this sample buffer output and perform the required operations on it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For this, we need to set sample buffer delegates for the cameras and audio. We have already set up the delegates for the front camera, back camera, and audio using the following code:&lt;/p&gt;

&lt;pre&gt;
frontVideoDataOutput.setSampleBufferDelegate(self, queue: dualVideoSessionOutputQueue)

backVideoDataOutput.setSampleBufferDelegate(self, queue: dualVideoSessionOutputQueue)

frontAudioDataOutput.setSampleBufferDelegate(self, queue: dualVideoSessionOutputQueue)

backAudioDataOutput.setSampleBufferDelegate(self, queue: dualVideoSessionOutputQueue)
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;21) To handle runtime errors and interruptions for the session, we need to add observers as follows:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;
NotificationCenter.default.addObserver(self, selector: #selector(sessionRuntimeError), name: .AVCaptureSessionRuntimeError,object: dualVideoSession)
        
        NotificationCenter.default.addObserver(self, selector: #selector(sessionWasInterrupted), name: .AVCaptureSessionWasInterrupted, object: dualVideoSession)
        
        NotificationCenter.default.addObserver(self, selector: #selector(sessionInterruptionEnded), name: .AVCaptureSessionInterruptionEnded, object: dualVideoSession)

&lt;/pre&gt;
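
&lt;p&gt;The selectors referenced above (sessionRuntimeError, sessionWasInterrupted, sessionInterruptionEnded) must exist in the view controller but are not shown in this tutorial. A minimal sketch of these handlers (the bodies here are placeholders, not the tutorial’s actual implementation) is:&lt;/p&gt;

&lt;pre&gt;
 // Sketch: minimal notification handlers for the observers registered above.
 @objc func sessionRuntimeError(notification: NSNotification) {
     guard let error = notification.userInfo?[AVCaptureSessionErrorKey] as? AVError else { return }
     print("Capture session runtime error: \(error)")
 }

 @objc func sessionWasInterrupted(notification: NSNotification) {
     print("Capture session was interrupted")
 }

 @objc func sessionInterruptionEnded(notification: NSNotification) {
     print("Capture session interruption ended")
 }
&lt;/pre&gt;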

&lt;p&gt;&lt;strong&gt;22) Now we are ready to launch the session. We use a dedicated queue for the session, because running it on the main thread causes performance issues such as lagging and memory leaks.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Start the session using this code:&lt;/p&gt;

&lt;pre&gt;
dualVideoSessionQueue.async {
          self.dualVideoSession.startRunning()
      }
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;23) Now we will add gesture recognizers for starting and stopping the recording in the setup method, as follows:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;
func addGestures(){
   
        let tapSingle = UITapGestureRecognizer(target: self, action: #selector(self.handleSingleTap(_:)))
        tapSingle.numberOfTapsRequired = 1
        self.view.addGestureRecognizer(tapSingle)
        
 
        let tapDouble = UITapGestureRecognizer(target: self, action: #selector(self.handleDoubleTap(_:)))
        tapDouble.numberOfTapsRequired = 2
        self.view.addGestureRecognizer(tapDouble)
        
        tapSingle.require(toFail: tapDouble)
        // differentiate single-tap and double-tap recognition when the user performs both gestures
 
}
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;24) In Apple’s Capture demo, the AVCapture delegate methods are used to fetch the front and back video.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here, we are not going to use this sample output delegate method. Instead, we are going to use ReplayKit: we record the screen to generate the output video.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;25) Import the ReplayKit framework as follows:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt; import ReplayKit &lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;26) Get the shared RPScreenRecorder object, which we will use to start and stop screen recording.&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt; let screenRecorder = RPScreenRecorder.shared() &lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;27) We will track the recording status using an isRecording variable.&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt; var isRecording = false &lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;28) When the user taps the screen once, screen recording starts; when the user double-taps, screen recording stops and the output video is saved to the user&#8217;s photo library.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We use a custom class, AssetWriter.swift, to append the video and audio output into a single output video:&lt;/p&gt;

&lt;pre&gt;
import UIKit
import Foundation
import AVFoundation
import ReplayKit
import Photos
 
extension UIApplication {
 
    class func getTopViewController(base: UIViewController? = UIApplication.shared.keyWindow?.rootViewController) -&amp;gt; UIViewController? {
 
        if let nav = base as? UINavigationController {
            return getTopViewController(base: nav.visibleViewController)
 
        } else if let tab = base as? UITabBarController, let selected = tab.selectedViewController {
            return getTopViewController(base: selected)
 
        } else if let presented = base?.presentedViewController {
            return getTopViewController(base: presented)
        }
        return base
    }
}
 
class AssetWriter {
    private var assetWriter: AVAssetWriter?
    private var videoInput: AVAssetWriterInput?
    private var audioInput: AVAssetWriterInput?
    private let fileName: String
    
    let writeQueue = DispatchQueue(label: "writeQueue")
    
    init(fileName: String) {
        self.fileName = fileName
    }
    
    private var videoDirectoryPath: String {
        let dir = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
        return dir + "/Videos"
    }
    
    private var filePath: String {
        return videoDirectoryPath + "/\(fileName)"
    }
    
    private func setupWriter(buffer: CMSampleBuffer) {
        if FileManager.default.fileExists(atPath: videoDirectoryPath) {
            do {
                try FileManager.default.removeItem(atPath: videoDirectoryPath)
            } catch {
                print("fail to removeItem")
            }
        }
        do {
            try FileManager.default.createDirectory(atPath: videoDirectoryPath, withIntermediateDirectories: true, attributes: nil)
        } catch {
            print("fail to createDirectory")
        }
        
        self.assetWriter = try? AVAssetWriter(outputURL: URL(fileURLWithPath: filePath), fileType: AVFileType.mov)
        
        let writerOutputSettings = [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: UIScreen.main.bounds.width,
            AVVideoHeightKey: UIScreen.main.bounds.height,
            ] as [String : Any]
        
        self.videoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: writerOutputSettings)
        self.videoInput?.expectsMediaDataInRealTime = true
        
        guard let format = CMSampleBufferGetFormatDescription(buffer),
            let stream = CMAudioFormatDescriptionGetStreamBasicDescription(format) else {
                print("fail to setup audioInput")
                return
        }
        
        let audioOutputSettings = [
            AVFormatIDKey : kAudioFormatMPEG4AAC,
            AVNumberOfChannelsKey : stream.pointee.mChannelsPerFrame,
            AVSampleRateKey : stream.pointee.mSampleRate,
            AVEncoderBitRateKey : 64000
            ] as [String : Any]
        
        self.audioInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings: audioOutputSettings)
        self.audioInput?.expectsMediaDataInRealTime = true
        
        // Avoid force-unwrapping the writer: add each input only when it can be added.
        if let assetWriter = self.assetWriter {
            if let videoInput = self.videoInput, assetWriter.canAdd(videoInput) {
                assetWriter.add(videoInput)
            }
            if let audioInput = self.audioInput, assetWriter.canAdd(audioInput) {
                assetWriter.add(audioInput)
            }
        }
    }
    
    public func write(buffer: CMSampleBuffer, bufferType: RPSampleBufferType) {
        writeQueue.sync {
            if assetWriter == nil {
                if bufferType == .audioApp {
                    setupWriter(buffer: buffer)
                }
            }
            
            if assetWriter == nil {
                return
            }
            
            if self.assetWriter?.status == .unknown {
                print("Start writing")
                let startTime = CMSampleBufferGetPresentationTimeStamp(buffer)
                self.assetWriter?.startWriting()
                self.assetWriter?.startSession(atSourceTime: startTime)
            }
            if self.assetWriter?.status == .failed {
                print("assetWriter status: failed error: \(String(describing: self.assetWriter?.error))")
                return
            }
            
            if CMSampleBufferDataIsReady(buffer) == true {
                if bufferType == .video {
                    if let videoInput = self.videoInput, videoInput.isReadyForMoreMediaData {
                        videoInput.append(buffer)
                    }
                } else if bufferType == .audioApp {
                    if let audioInput = self.audioInput, audioInput.isReadyForMoreMediaData {
                        audioInput.append(buffer)
                    }
                }
            }
        }
    }
    
    public func finishWriting() {
        writeQueue.sync {
            self.assetWriter?.finishWriting(completionHandler: {
                print("finishWriting")
                PHPhotoLibrary.shared().performChanges({
                    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: URL(fileURLWithPath: self.filePath))
                }) { saved, error in
                    if saved {
                        let alertController = UIAlertController(title: "Your video was successfully saved", message: nil, preferredStyle: .alert)
                        let defaultAction = UIAlertAction(title: "OK", style: .default, handler: nil)
                        alertController.addAction(defaultAction)
                        if let topVC = UIApplication.getTopViewController() {
                            topVC.present(alertController, animated: true, completion: nil)
                        }
                    }
                }
            })
        }
    }
}
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;29) Now we can move to the final stage of our demo.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the user taps the screen, we need to start the screen recording.&lt;/p&gt;

&lt;p&gt;Create the startCapture function as follows:&lt;/p&gt;

&lt;pre&gt;
func startCapture() {
       screenRecorder.startCapture(handler: { (buffer, bufferType, err) in
            self.isRecording = true
            // Use optional chaining rather than force-unwrapping the writer.
            self.assetWriter?.write(buffer: buffer, bufferType: bufferType)
        }, completionHandler: {
            if let error = $0 {
                print(error)
            }
        })
    }
&lt;/pre&gt;

&lt;p&gt;The screen recorder&#8217;s capture handler returns a sample buffer and its buffer type, which we pass to the AssetWriter. The AssetWriter writes each buffer to the output file URL that the writer was initialized with.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;30) We create an AssetWriter object, assetWriter, and initialize it as follows:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;
let outputFileName = NSUUID().uuidString + ".mp4"
   
assetWriter = AssetWriter(fileName: outputFileName) 
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;31) When the user double-taps the screen, screen recording stops. We use the screen recorder&#8217;s stopCapture method as follows:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;
func stopCapture() {
        screenRecorder.stopCapture {
            self.isRecording = false
            if let err = $0 {
                print(err)
            }
            self.assetWriter?.finishWriting()
        }
    }
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;32) Once we stop writing buffers into the assetWriter, we need to tell it to finish writing and generate the output video file.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AssetWriter then finishes writing and asks the user for permission to store the video in the photo library; if the user grants permission, the video is saved there.&lt;/p&gt;

&lt;p&gt;AssetWriter has a finishWriting function for doing this, shown earlier in the AssetWriter class.&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Note: Some developers may wonder why we used startCapture and stopCapture when RPScreenRecorder also provides startRecording and stopRecording.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Why is that?&lt;/p&gt;

&lt;p&gt;Here&#8217;s an image that answers it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fpt59wf844o9n75w3cspg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fpt59wf844o9n75w3cspg.png" alt="Recording"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;startRecording starts recording the app display. When we use the startRecording method, we must end the recording by calling the stopRecording method of RPScreenRecorder.&lt;/p&gt;

&lt;p&gt;When an AVCaptureSession is running and we want to record the screen at the same time, we need to use the startCapture method.&lt;/p&gt;

&lt;p&gt;startCapture starts screen and audio capture, and a recording initiated with startCapture must end with stopCapture.&lt;/p&gt;
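For contrast, here is a minimal sketch of the simpler startRecording / stopRecording flow, in which ReplayKit records the app display itself and hands back its own preview controller for trimming and saving. The function names below are illustrative only:

```swift
import ReplayKit
import UIKit

// Sketch: the simpler startRecording / stopRecording flow.
// ReplayKit records the app display itself; no AVCaptureSession
// or custom AssetWriter is involved.
func startSimpleRecording() {
    RPScreenRecorder.shared().startRecording { error in
        if let error = error {
            print("Could not start recording: \(error)")
        }
    }
}

func stopSimpleRecording(presenter: UIViewController) {
    RPScreenRecorder.shared().stopRecording { previewController, error in
        if let error = error {
            print("Could not stop recording: \(error)")
        }
        // ReplayKit supplies its own preview UI for editing and saving.
        if let preview = previewController {
            presenter.present(preview, animated: true, completion: nil)
        }
    }
}
```

Because our demo already runs its own AVCaptureSession and writes output with AssetWriter, startCapture is the right choice here.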

&lt;h1&gt;
  
  
  Summing up
&lt;/h1&gt;

&lt;p&gt;We hope that you found this iPhone tutorial useful and that your concepts about iPhone dual camera recording are clear. You can find the source code of this simultaneous front-and-back camera recording illustration on &lt;a href="https://github.com/spaceotech/SODualCamera" rel="noopener noreferrer"&gt;Github&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ios13</category>
      <category>iostutorial</category>
      <category>dualcamerarecording</category>
    </item>
    <item>
      <title>iOS Tutorial: How to customize dark mode in iOS app with iOS 13</title>
      <dc:creator>Amit Makhija</dc:creator>
      <pubDate>Tue, 30 Jul 2019 11:16:22 +0000</pubDate>
      <link>https://dev.to/amit-makhija30/ios-tutorial-how-to-customize-dark-mode-in-ios-app-with-ios-13-57a8</link>
      <guid>https://dev.to/amit-makhija30/ios-tutorial-how-to-customize-dark-mode-in-ios-app-with-ios-13-57a8</guid>
      <description>&lt;p&gt;&lt;em&gt;Apple Worldwide Developers Conference (WWDC), 2019 brought a lot of new features, technologies and a new version of iOS (iOS 13) as well. It was organized from 3rd to 7th June 2019. During this developers’ conference, it was announced that Dark Mode will be available on iOS 13 and later versions.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you will learn in this blog:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What iOS Dark Mode is and what it does&lt;/li&gt;
&lt;li&gt;The benefits of using Dark Mode&lt;/li&gt;
&lt;li&gt;Important considerations while implementing Dark Mode in an app&lt;/li&gt;
&lt;li&gt;Customizing the color scheme for your iOS application for different themes – light and dark mode.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What is iOS Dark Mode? What does it do?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Dark Mode is a dark system-wide appearance that uses a darker color palette for all screens, views, menus, and controls. Basically, like Android Dark Mode, it changes a bright theme to a darker one. It also maintains vibrancy and contrast to make foreground content stand out against a darker background.&lt;/p&gt;

&lt;h1&gt;
  
  
  Benefits of Dark Mode
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;It is most helpful when the surroundings are dark, as it puts less strain on the eyes than light mode. &lt;/li&gt;
&lt;li&gt;You can choose Dark Mode as your default interface style.&lt;/li&gt;
&lt;li&gt;You can also use Settings to make your device switch to Dark Mode automatically when ambient light is low.&lt;/li&gt;
&lt;li&gt;Dark Mode supports all accessibility features.&lt;/li&gt;
&lt;li&gt;Using Dark Mode on OLED iPhone models like iPhone X, XS, XS Max may also improve battery life. This is because black pixels in an OLED panel actually switch off and conserve battery power.&lt;/li&gt;
&lt;li&gt;Bonus: Dark Mode looks really cool&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As announced by Apple during the conference, Apple’s preinstalled apps like Notes, News, TV, Music, Reminders, Mail, and others will already have it.&lt;/p&gt;

&lt;p&gt;However, developers will be able to use Apple’s new tools to add dark mode to their apps. Being an iPhone app development company, we had a lot of our clients wanting to integrate this feature to their iOS apps. &lt;/p&gt;

&lt;p&gt;Our proficient &lt;a href="https://www.spaceotechnologies.com/hire-iphone-developer/"&gt;iPhone app developers&lt;/a&gt; decided to help other developers by explaining the feature with a simple example. Before we see how to add this feature, let us know which are the important considerations that one should keep in mind during feature integration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Considerations for &lt;a href="https://www.spaceotechnologies.com/dark-mode-ios-13-tutorial/"&gt;iPhone Dark Mode&lt;/a&gt; while implementing in an app&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Put the focus on content:&lt;/strong&gt; Make sure that content stands out and surrounding UI recedes into the background. The purpose of the Dark Mode is to put the focus on the content areas of the interface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test for both light and dark appearances:&lt;/strong&gt; Check how the interface looks in both appearances. You may need to adjust designs to accommodate each one, as a design that works well in one appearance might not work in the other.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test with changes in contrast and transparency:&lt;/strong&gt; You must test the content when Increase Contrast and Reduce Transparency is turned on. Make sure that content is distinct in Dark Mode when you adjust contrast and transparency settings.&lt;/p&gt;
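The considerations above can also be checked in code. As a minimal sketch (iOS 13+), a view can react to appearance and contrast changes through its trait collection; the border styling below is purely illustrative:

```swift
import UIKit

// Sketch: reacting to appearance and contrast changes (iOS 13+).
class ThemedCardView: UIView {
    override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) {
        super.traitCollectionDidChange(previousTraitCollection)

        // Adjust decoration that does not adapt automatically,
        // such as CGColor values set on layers.
        if traitCollection.userInterfaceStyle == .dark {
            layer.borderColor = UIColor.white.cgColor
        } else {
            layer.borderColor = UIColor.black.cgColor
        }

        // Respect the Increase Contrast accessibility setting.
        layer.borderWidth = (traitCollection.accessibilityContrast == .high) ? 2 : 1
    }
}
```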

&lt;p&gt;Now that you are clear about considerations, let’s customize the color scheme for your iOS application for different themes.&lt;/p&gt;

&lt;h1&gt;
  
  
  How to customize the color scheme for your iOS application for different themes – light and dark mode.
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create a new project in Xcode&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8Rduu0bL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/7vdf8jb1z0537q83dwce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8Rduu0bL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/7vdf8jb1z0537q83dwce.png" alt="Create new project" width="622" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Add a new color set in assets.xcassets folder as shown in the images&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2AEFWkV5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/y527gofru79e006us9gx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2AEFWkV5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/y527gofru79e006us9gx.png" alt="add a new color set" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1MmBnIey--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/vgdi68pzn9yu8ysapz0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1MmBnIey--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/vgdi68pzn9yu8ysapz0r.png" alt="New Color Set" width="499" height="672"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cPXOkYWW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/mig0u781clqlghdcjooi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cPXOkYWW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/mig0u781clqlghdcjooi.png" alt="Colorset Properties" width="258" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eco8CNYD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/c4gsalfnwtq006g5l0fo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eco8CNYD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/c4gsalfnwtq006g5l0fo.png" alt="Select any color" width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Go to Storyboard and drag &amp;amp; drop controls that you would like to add to your project.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nlu38mQz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/79znq9aiauxs5jal7omc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nlu38mQz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/79znq9aiauxs5jal7omc.png" alt="Goto storyboard" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Go to the view controller and create outlet(s) for UI Controls.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ovX0Wsm_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/tsxt8gicla2srhru1mow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ovX0Wsm_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/tsxt8gicla2srhru1mow.png" alt="create outlets" width="800" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Create employee struct and create an array of employee struct&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--anDwvY3V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/nnbeefh660fma9j3fyfo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--anDwvY3V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/nnbeefh660fma9j3fyfo.png" alt="create an array of employee struct" width="800" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Append the data in arrEmployee to bind with UITableView&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mVB-8Gha--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/zvcxixt104wuu4b8yv3d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mVB-8Gha--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/zvcxixt104wuu4b8yv3d.png" alt="Append the data in arrEmployee" width="688" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7: Assign data source for the table view&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aTGsrLZv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/ghiub37ta89zm4f3ts5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aTGsrLZv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/ghiub37ta89zm4f3ts5a.png" alt="Assign data source for the table view" width="800" height="73"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8: Create a UITableViewCell class for employee&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dvzzpWWx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/chu8r5ovn60qyt02jkdd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dvzzpWWx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/chu8r5ovn60qyt02jkdd.png" alt="Create a UITableViewCell" width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 9: Implement the UITableView data source method&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zlHTPZEx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/78ykqez42mcnrzpjoxna.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zlHTPZEx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/78ykqez42mcnrzpjoxna.png" alt="Implement the UITableView data source" width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 10: Create a switch event to set the theme (light or dark) for the UILabel&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VDppsDcS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/05g9wtyqhzd8t1kfr4dl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VDppsDcS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/05g9wtyqhzd8t1kfr4dl.png" alt="Create switch event" width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 11: Set the default behavior for controls in viewDidLoad&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i-mQlKR---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/4x5gniun1zf43ddqug69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i-mQlKR---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/4x5gniun1zf43ddqug69.png" alt="Set default behavior" width="800" height="76"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 12: Compare your assets folder with the images below&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xhc-qEuh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/4jxpxqgj0tz3n1i20uf8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xhc-qEuh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/4jxpxqgj0tz3n1i20uf8.png" alt="compare assets folder with images" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--97H0egU5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/a7rrh8kwf0rhg7nq0io0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--97H0egU5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/a7rrh8kwf0rhg7nq0io0.png" alt="compare assets folder with images" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s all folks!&lt;/p&gt;

&lt;h1&gt;
  
  
  Outcome
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qEwuMMjq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/q96ruxk9p042l7w2hpm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qEwuMMjq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/q96ruxk9p042l7w2hpm2.png" alt="Emp Details - lite mode" width="800" height="1731"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ls3b-bwI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/hxkonoh38p6k4ybezaub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ls3b-bwI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/hxkonoh38p6k4ybezaub.png" alt="Emp Details - dark mode" width="800" height="1731"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Concluding notes
&lt;/h1&gt;

&lt;p&gt;We hope that you found this iPhone tutorial useful and your concepts about iPhone Dark Mode are clear. You can find the source code of this iOS 13 Dark Mode illustration on &lt;a href="https://github.com/spaceotech/SODarkModeDemo"&gt;github&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here, in this iPhone app tutorial, we learned how to customize an iOS app&#8217;s theme for both the dark and light appearances.&lt;/p&gt;

&lt;p&gt;If you have any suggestions or queries in this tutorial or any questions regarding &lt;a href="https://www.spaceotechnologies.com/iphone-app-development/"&gt;iPhone app development&lt;/a&gt;, we are all ears.&lt;/p&gt;

</description>
      <category>iphoneappdevelopment</category>
      <category>darkmode</category>
      <category>iostutorial</category>
      <category>ios13</category>
    </item>
    <item>
      <title>How to use BackgroundTasks framework to keep your iOS app content up to date?</title>
      <dc:creator>Amit Makhija</dc:creator>
      <pubDate>Fri, 19 Jul 2019 10:06:53 +0000</pubDate>
      <link>https://dev.to/amit-makhija30/how-to-use-backgroundtasks-framework-to-keep-your-ios-app-content-up-to-date-4he6</link>
      <guid>https://dev.to/amit-makhija30/how-to-use-backgroundtasks-framework-to-keep-your-ios-app-content-up-to-date-4he6</guid>
      <description>&lt;p&gt;&lt;em&gt;In this technical blog, we are going to understand how to schedule iOS background tasks. This iOS tutorial is for those who want to implement the background task scheduler in their latest iOS app development project.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you will learn:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In this iOS tutorial, you’re going to see an example of using the new BackgroundTasks framework for fetching images in the background, while the phone is in idle mode.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Last month at the WWDC Developer Conference 2019, Apple released iOS 13 with a huge list of new features and functionalities. It was the second such major release in a row and one of the most important revisions of an operating system that is used on more than 1 billion iOS devices worldwide.&lt;/p&gt;

&lt;p&gt;The most significant thing Apple has brought is the continuation of the optimization trend launched with iOS 12, now making iOS 13 faster &amp;amp; more efficient. App update times have improved, app launch has become 2x quicker, and app download sizes have been reduced by half (50%).&lt;/p&gt;

&lt;p&gt;Let’s talk about some new features and functionalities of iOS 13 and then move on to understand one of the most important functionalities- &lt;strong&gt;BackgroundTasks framework.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s a list of some of the important features brought in iOS 13:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dark Mode&lt;/li&gt;
&lt;li&gt;Revamped Photos app&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.spaceotechnologies.com/reduce-android-app-size-tutorial/" rel="noopener noreferrer"&gt;Sign In with Apple&lt;/a&gt; &lt;strong&gt;(which we already covered in detail with illustration)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;HomeKit Secure Video&lt;/li&gt;
&lt;li&gt;Name and image in Messages&lt;/li&gt;
&lt;li&gt;Swiping keyboard&lt;/li&gt;
&lt;li&gt;Multi-user HomePod&lt;/li&gt;
&lt;li&gt;All-new Reminders app&lt;/li&gt;
&lt;li&gt;Memoji and stickers&lt;/li&gt;
&lt;li&gt;Smarter, smoother Siri voice assistance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In terms of technical functionalities, Apple has introduced:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Advances in contextual action menus in iOS, macOS and iPadOS&lt;/li&gt;
&lt;li&gt;UIWindowScene API and multitasking in iPadOS&lt;/li&gt;
&lt;li&gt;AI/ML functionalities like Image and Speech saliency, Word embeddings, Sound analysis, Text catalog and recognition, Image similarity &amp;amp; classification, On-device speech, Face capture quality, Sentiment classification&lt;/li&gt;
&lt;li&gt;Conversational shortcuts in Siri for apps&lt;/li&gt;
&lt;li&gt;New BackgroundTasks Framework&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this iOS app tutorial, we will talk about the &lt;strong&gt;“BackgroundTasks Framework”.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BackgroundTasks Framework&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This new framework is used for tasks like cleaning a database, updating a machine learning model, updating an app&#8217;s displayed data, and other deferrable work that is better done in the background. It makes efficient use of processing time and power, running such tasks when the device is idle.&lt;/p&gt;

&lt;p&gt;BackgroundTasks Framework has two main task requests under BGTaskScheduler:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;BGAppRefreshTaskRequest:&lt;/strong&gt; This is a request to launch an app in the background to execute a short refresh task.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;BGProcessingTaskRequest:&lt;/strong&gt; This is a request to launch an app in the background and execute a process that takes a longer time to complete.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;BackgroundTasks can be used to perform various activities like database cleaning, uploading pictures to a server, syncing pictures in other devices, and many more.&lt;/p&gt;
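The two request types above fit into a register-then-submit flow with BGTaskScheduler. As a minimal sketch (iOS 13+), using an app refresh task; the identifier string is an assumption for illustration and must also be declared in the app's Info.plist under the permitted background task scheduler identifiers:

```swift
import BackgroundTasks

// The identifier below is a hypothetical example.
let refreshTaskID = "com.example.app.refresh"

// 1) Register a launch handler once, early during app launch.
func registerBackgroundTasks() {
    BGTaskScheduler.shared.register(forTaskWithIdentifier: refreshTaskID,
                                    using: nil) { task in
        handleAppRefresh(task: task as! BGAppRefreshTask)
    }
}

// 2) Submit a request so the system can launch the app in the background.
func scheduleAppRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: refreshTaskID)
    // Ask the system not to start the task for at least 15 minutes.
    request.earliestBeginDate = Date(timeIntervalSinceNow: 15 * 60)
    do {
        try BGTaskScheduler.shared.submit(request)
    } catch {
        print("Could not schedule app refresh: \(error)")
    }
}

// 3) Do the short refresh work, reschedule, and report completion.
func handleAppRefresh(task: BGAppRefreshTask) {
    scheduleAppRefresh()
    task.expirationHandler = {
        // Cancel any in-flight work if the system's time budget runs out.
    }
    task.setTaskCompleted(success: true)
}
```

A BGProcessingTaskRequest is submitted the same way, but additionally lets you require external power or network connectivity for longer-running work.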

&lt;p&gt;Our expert &lt;a href="https://www.spaceotechnologies.com/hire-iphone-developer/" rel="noopener noreferrer"&gt;iOS developers&lt;/a&gt; at Space-O Technologies received a lot of queries from our clients and other developers when BackgroundTasks Framework was introduced. Our developers were happy to help them and decided to come up with a small iOS demo about iOS background image fetching, to explain a few things.&lt;/p&gt;

&lt;p&gt;In this iOS tutorial, we will take the example of a background task that fetches the latest count of images added to the photo gallery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fetching Image Count While Processing Task In Background&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Implementing BackgroundTasks Framework in your project&lt;/p&gt;

&lt;p&gt;1) Create a new project using Xcode 11.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FN9ELrpj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/g5if3gz7m61v9geqrr3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FN9ELrpj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/g5if3gz7m61v9geqrr3v.png" alt="Create new project" width="559" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2) Select “Single View App” in the iOS section and enter the project name. (We have kept the project name as “SOBackgroundTask”).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftx1kg6jgmu6kdkb0lpi2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftx1kg6jgmu6kdkb0lpi2.png" alt="Single View App" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5bkdocrta1ux7o00u1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5bkdocrta1ux7o00u1d.png" alt="Assign Project name" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3) Go to the SOBackgroundTask target, click on “Signing &amp;amp; Capabilities”, then click on “+ Capability”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63evmuebcz2y4q1ijbt5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63evmuebcz2y4q1ijbt5.png" alt="Set SoBackgroundTask Target" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4) Double-click on “Background Modes”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6u9n9qhmj8oyrfz7pf5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6u9n9qhmj8oyrfz7pf5c.png" alt="Set Background Mode" width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5) Select “Background fetch” and “Background processing” from the list of background modes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv794ppz8v1h64qltard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv794ppz8v1h64qltard.png" alt="Select Background Mode options" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6) Add the “BGTaskSchedulerPermittedIdentifiers” key to Info.plist with an array of task identifiers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flq6yosus6tc80zf5ivlp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flq6yosus6tc80zf5ivlp.png" alt="Add BGTaskSchedulerPermittedIdentifiers Key" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;
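
&lt;p&gt;In source form, the Info.plist entry looks like this (the identifiers here match the ones used in the demo code; your own identifiers will differ):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;key&amp;gt;BGTaskSchedulerPermittedIdentifiers&amp;lt;/key&amp;gt;
&amp;lt;array&amp;gt;
    &amp;lt;string&amp;gt;com.SO.imagefetcher&amp;lt;/string&amp;gt;
    &amp;lt;string&amp;gt;com.SO.apprefresh&amp;lt;/string&amp;gt;
&amp;lt;/array&amp;gt;&lt;/code&gt;&lt;/pre&gt;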

&lt;p&gt;&lt;strong&gt;Note: The system only runs tasks whose identifiers appear in this whitelist.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;7) Add import BackgroundTasks at the top of AppDelegate.swift.&lt;/p&gt;

&lt;p&gt;8) Create a registerBackgroundTaks() method (using the same identifiers we added to Info.plist) and call it from application(_:didFinishLaunchingWithOptions:).&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -&amp;gt; Bool {
    registerBackgroundTaks()
    return true
}&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;//MARK: Register BackGround Tasks
private func registerBackgroundTaks() {

    BGTaskScheduler.shared.register(forTaskWithIdentifier: "com.SO.imagefetcher", using: nil) { task in
        // This task is cast to a processing request (BGProcessingTask)
        self.scheduleLocalNotification()
        self.handleImageFetcherTask(task: task as! BGProcessingTask)
    }

    BGTaskScheduler.shared.register(forTaskWithIdentifier: "com.SO.apprefresh", using: nil) { task in
        // This task is cast to an app refresh request (BGAppRefreshTask)
        self.scheduleLocalNotification()
        self.handleAppRefreshTask(task: task as! BGAppRefreshTask)
    }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;9) Create scheduleImagefetcher() and scheduleAppRefresh() methods to schedule fetching images from the gallery and refreshing the app once the image fetch completes. Both are called from applicationDidEnterBackground(_:).&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;func applicationDidEnterBackground(_ application: UIApplication) {
    scheduleAppRefresh()
    scheduleImagefetcher()
}&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;func scheduleImagefetcher() {
    let request = BGProcessingTaskRequest(identifier: "com.SO.imagefetcher")
    request.requiresNetworkConnectivity = false // Set to true if your task needs the network. Defaults to false.
    request.requiresExternalPower = false // If true, the task runs only while the device is connected to external power.
    request.earliestBeginDate = Date(timeIntervalSinceNow: 1 * 60) // Fetch the image count after 1 minute.
    // Note: earliestBeginDate should not be set too far into the future.
    do {
        try BGTaskScheduler.shared.submit(request)
    } catch {
        print("Could not schedule image fetch: \(error)")
    }
}&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;func scheduleAppRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: "com.SO.apprefresh")
    request.earliestBeginDate = Date(timeIntervalSinceNow: 2 * 60) // Refresh the app after 2 minutes.
    // Note: earliestBeginDate should not be set too far into the future.
    do {
        try BGTaskScheduler.shared.submit(request)
    } catch {
        print("Could not schedule app refresh: \(error)")
    }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Note: Cancel any pending background tasks before scheduling new ones; otherwise submitting a request can fail with error code 2.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To cancel all pending background tasks, we call the method below before scheduling a new task.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;func cancelAllPendingBGTask() {
    BGTaskScheduler.shared.cancelAllTaskRequests()
}&lt;/code&gt;&lt;/pre&gt;
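
&lt;p&gt;The handleImageFetcherTask(task:) and handleAppRefreshTask(task:) methods registered earlier are not listed in this article; a minimal sketch of the refresh handler might look like this (the actual work performed inside it is up to you):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;//MARK: Handle BackGround Task
private func handleAppRefreshTask(task: BGAppRefreshTask) {
    // Provide an expiration handler so we can clean up if the
    // system reclaims the background time before we finish.
    task.expirationHandler = {
        task.setTaskCompleted(success: false)
    }
    // A submitted task runs only once, so reschedule before doing the work.
    scheduleAppRefresh()
    // ...perform the short refresh work here...
    task.setTaskCompleted(success: true)
}&lt;/code&gt;&lt;/pre&gt;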

&lt;p&gt;Note: The time limit for a background refresh task is still about 30 seconds; that policy has not changed. Processing tasks can be granted more time when the device is idle.&lt;/p&gt;

&lt;h1&gt;Let’s Sum Up!&lt;/h1&gt;

&lt;p&gt;We hope this iOS tutorial has helped you understand how the BackgroundTasks framework works. You can get the source code from the &lt;a href="https://github.com/spaceotech/BackgroundTask" rel="noopener noreferrer"&gt;GitHub demo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here we took only one example, fetching images in the background, but there are various other tasks for which you can use this framework.&lt;/p&gt;

&lt;p&gt;Let us know if you have any suggestions or queries about this tutorial, or any questions regarding iPhone app development. We are one of the &lt;a href="https://www.spaceotechnologies.com/iphone-app-development/" rel="noopener noreferrer"&gt;leading iPhone app development companies&lt;/a&gt; and have already developed over 2,500 iOS apps.&lt;/p&gt;

&lt;p&gt;So, if you have an iPhone app idea or want to create a mobile application on the iOS platform with advanced features and functionalities like this one, discuss it with us through our contact form. We are all ears!&lt;/p&gt;

</description>
      <category>ios</category>
      <category>tutorial</category>
      <category>backgroundtask</category>
      <category>taskscheduler</category>
    </item>
    <item>
      <title>Android Tutorial: How to reduce Android app size?</title>
      <dc:creator>Amit Makhija</dc:creator>
      <pubDate>Thu, 11 Jul 2019 05:24:24 +0000</pubDate>
      <link>https://dev.to/amit-makhija30/how-to-reduce-android-app-size-android-tutorial-578i</link>
      <guid>https://dev.to/amit-makhija30/how-to-reduce-android-app-size-android-tutorial-578i</guid>
      <description>&lt;p&gt;&lt;em&gt;Do you know about 70% of people in emerging markets consider the size of an app before downloading it because they are concerned about data cost and phone storage space.? Are you facing trouble while uploading your apps because of their large size? Here’s a tutorial explaining how you can reduce android app size at the developer’s level as well as how Android App Bundle makes this task easy.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What will you learn?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;How to reduce android app size at the developer’s level by,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Removing unused resources, code, and classes and reusing resources wherever possible.&lt;/li&gt;
&lt;li&gt;Reducing the size of images&lt;/li&gt;
&lt;li&gt;Reducing the size of native and Java codebase&lt;/li&gt;
&lt;li&gt;A brief explanation of the role of Android app bundle in reducing app size&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the application is large, people would rather not download it at all. Let’s imagine that this message pops up when a user installs your app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1w7xba2bolqmgiowuhl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1w7xba2bolqmgiowuhl.jpg" alt="Insufficient Storage" width="480" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The user thinks they’re probably better off without the app. Who would go through all the trouble of finding and deleting unused files and apps for just one app? Even if they have enough storage, a slow connection can make the download take so long that they abandon it midway.&lt;/p&gt;

&lt;p&gt;This is why it is extremely important to develop an app with high performance, quality design and seamless user experience that also doesn’t take up too much space. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The question is – how?&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Android developers at Space-O Technologies receive a lot of queries about how they build effective small-sized apps, and this inspired us to write a practical Android tutorial on &lt;a href="https://www.spaceotechnologies.com/reduce-android-app-size-tutorial/" rel="noopener noreferrer"&gt;how to reduce Android app size&lt;/a&gt;. We will first get to the methods that a developer should take while developing an app and then go on to understand about Android App Bundle.&lt;/p&gt;

&lt;p&gt;Before reducing app size, let’s analyze app size using APK analyzer and understand the APK structure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2py32600s788uwsxj6e8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2py32600s788uwsxj6e8.png" alt="Analyze app size" width="800" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Directories&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;META-INF/ - Contains the CERT.SF and CERT.RSA signature files, as well as the MANIFEST.MF manifest file.&lt;br&gt;
assets/ - Contains the app’s assets, which the app can retrieve using an AssetManager object.&lt;br&gt;
res/ - Contains resources that aren’t compiled into resources.arsc.&lt;br&gt;
lib/ - Contains compiled code that is specific to a processor’s software layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;resources.arsc - It contains compiled resources. All the language strings and styles, layout files and images are included in this. Basically, it contains the XML content from all configurations of the res/values/ folder.&lt;/p&gt;

&lt;p&gt;classes.dex - It contains the classes compiled in the DEX file format understood by the Dalvik/ART virtual machine. You may have multiple .dex files if the number of methods exceeds the 64K limit.&lt;/p&gt;

&lt;p&gt;AndroidManifest.xml - This is the core Android manifest file, which lists the name, version, access rights, and referenced library files of the app. It uses Android’s binary XML format.&lt;/p&gt;

&lt;h1&gt;How to reduce Android app size or APK size&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;1. Remove unused resources, code, and classes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The lint tool detects resources in the res/ folder that your code doesn’t reference. Note that it only flags potentially unused resources, printing a message like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;res/layout/preferences.xml: Warning: The resource R.layout.preferences appears
    to be unused [UnusedResources]&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Enable resource shrinking and shrink code using ProGuard. By setting minifyEnabled to true, ProGuard removes all the unused methods and slims down the classes.dex file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxhggkkrgcidzahwahzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxhggkkrgcidzahwahzv.png" alt="Enable resource shrinking" width="766" height="148"&gt;&lt;/a&gt;&lt;/p&gt;
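
&lt;p&gt;The settings in the screenshot above correspond to a build.gradle snippet along these lines (the file names are the Android defaults; adjust to your project):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;android {
    buildTypes {
        release {
            minifyEnabled true      // Shrink code with ProGuard
            shrinkResources true    // Remove unused resources
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'),
                    'proguard-rules.pro'
        }
    }
}&lt;/code&gt;&lt;/pre&gt;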

&lt;p&gt;You can also reuse resources. Instead of using different resources, utilize one resource in different manners. Here is an example from the Android Developers site,&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="utf-8"?&amp;gt;
&amp;lt;rotate xmlns:android="http://schemas.android.com/apk/res/android"
    android:drawable="@drawable/ic_thumb_up"
    android:pivotX="50%"
    android:pivotY="50%"
    android:fromDegrees="180" /&amp;gt;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If your application requires only one language, for example English, then you should use resConfig. Enabling resConfig will include only English resources in your APK. Here is how you can do it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjeoncwy3c22ms6h9wrd6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjeoncwy3c22ms6h9wrd6.png" alt="Enabling resConfig" width="736" height="194"&gt;&lt;/a&gt;&lt;/p&gt;
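
&lt;p&gt;The screenshot shows a build.gradle entry along these lines:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;android {
    defaultConfig {
        // Package only English resources into the APK
        resConfigs "en"
    }
}&lt;/code&gt;&lt;/pre&gt;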

&lt;p&gt;&lt;strong&gt;2. Reduce the size of images&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This can be done in many different ways, but we will discuss only a few of the main ones.&lt;/p&gt;

&lt;p&gt;a) You could use drawable shapes like this&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxk4226abexdbr1l5z2sw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxk4226abexdbr1l5z2sw.png" alt="use drawable shapes" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;b) Use the .webp image format: Convert .jpg and .png images to .webp format. There is little visible loss in image quality, but the size is considerably reduced.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmda63g7e0htqbiymdz50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmda63g7e0htqbiymdz50.png" alt="Use webp image format" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;c) Compress PNG and JPG files&lt;/p&gt;

&lt;p&gt;To compress PNG files, you may use tools like pngcrush, pngquant, or zopflipng. &lt;/p&gt;

&lt;p&gt;To compress JPG files, tools like packJPG and guetzli are very effective.&lt;/p&gt;

&lt;p&gt;d) Use vector graphics for simple images, as they take up very little space&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hiJRRHpI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/kdzmcvrbodirr6mq8b8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hiJRRHpI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/kdzmcvrbodirr6mq8b8p.png" alt="Use vector graphics" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;e) Use 9-patch images to save a lot of space. The Draw 9-patch tool is used to create bitmap images that automatically resize to accommodate the contents of the view and the size of the screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftaw4wahrgistym8ysnhc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftaw4wahrgistym8ysnhc.png" alt="Use Nine-Patch images" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Reduce the size of native and Java codebase&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This can be done in several ways,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove unnecessary code that is generated automatically.&lt;/li&gt;
&lt;li&gt;Avoid enumerations:  Use the @IntDef annotation and ProGuard to strip enumerations out and convert them to integers.&lt;/li&gt;
&lt;li&gt;Reduce the size of native binaries: Two ways to do this&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;– Remove debug symbols: Use the arm-eabi-strip tool, provided in the Android NDK, to remove unnecessary debug symbols. Always publish your app in the release version.&lt;/p&gt;

&lt;p&gt;– Avoid extracting native libraries: Setting android:extractNativeLibs="false" prevents PackageManager from copying .so files from the APK to the filesystem during installation. This also makes updates to your app smaller.&lt;/p&gt;
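
&lt;p&gt;This attribute goes on the application element of AndroidManifest.xml (the package name here is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.myapp"&amp;gt;
    &amp;lt;application android:extractNativeLibs="false"&amp;gt;
        ...
    &amp;lt;/application&amp;gt;
&amp;lt;/manifest&amp;gt;&lt;/code&gt;&lt;/pre&gt;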

&lt;p&gt;That covers what a developer can do at their own level, but there is a lot more in store for &lt;a href="https://www.spaceo.ca/android-app-development-company/" rel="noopener noreferrer"&gt;Android app developers&lt;/a&gt;. We’re talking about the Android App Bundle format. What’s that?&lt;/p&gt;

&lt;p&gt;As features multiplied, Android app sizes kept increasing, which led to a decrease in the number of installs. This is why Google introduced the Android App Bundle.&lt;/p&gt;

&lt;p&gt;So what is the Android App Bundle (.aab) format, and why use it?&lt;/p&gt;

&lt;p&gt;Initially, developers could publish their apps in the .apk format in two ways,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Either as a single file with all the code and resources for the different device configurations that your app supports.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Or as multiple APKs, each individually handling different device configurations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Publishing an app in the .aab format is the third option, provided by Android Studio. The Android App Bundle is the new publishing format for Android apps, and it is the best format if you want to reduce Android app size.&lt;/p&gt;

&lt;h1&gt;Why use Android App Bundle?&lt;/h1&gt;

&lt;p&gt;The Android App Bundle serves only the code and resources a user needs to run your app on their specific device. Using this format you can see size savings of around 35% compared to a universal APK, and you don’t need to rely on incomplete solutions like multi-APK, saving you a lot of time and effort.&lt;/p&gt;

&lt;p&gt;The app bundle uses the Dynamic Delivery serving model to reduce Android app size. App size is reduced in two main ways,&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Using configuration APKs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Upload your app in the .aab format. Google Play uses it to generate two kinds of APKs: a single base APK and multiple configuration APKs. The base APK contains all the code and resources required to deliver your app’s base functionality; a user receives this APK first, and every subsequent APK depends on it. It is generated from your project’s app (base) module.&lt;/p&gt;

&lt;p&gt;After this, Google Play generates configuration APKs. Each time the app is downloaded to a device, the device’s configuration is checked and the Dynamic Delivery model comes into play: only the configuration APK matching that device’s individual configuration is downloaded.&lt;/p&gt;

&lt;p&gt;Thus only limited space is required for the app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fms5rndd1g72u25fz7cfs.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fms5rndd1g72u25fz7cfs.gif" alt="Dynamic Delivery model" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Using on-demand features&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you wish to create a modular app, like a gaming app, then this one is for you. If your application has features that many of your users don’t need, they can download only the features they require. You can split such features into dynamic feature modules, which decreases the size of your APK and makes it a dynamic Android app.&lt;/p&gt;

&lt;p&gt;When a user requires a feature from one of these modules, Dynamic Delivery serves an on-demand dynamic feature APK containing the code and resources needed to run that specific feature.&lt;/p&gt;

&lt;h1&gt;Let’s Sum Up!&lt;/h1&gt;

&lt;p&gt;In this Android tutorial, we learned how to,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove unused resources, code, and classes, and reuse resources wherever possible.&lt;/li&gt;
&lt;li&gt;Reduce the size of images using drawable shapes, vector images, or 9-patch images, by converting to the WebP format, or by compressing PNG and JPG files&lt;/li&gt;
&lt;li&gt;Reduce the size of native and Java codebase&lt;/li&gt;
&lt;li&gt;Use Android app bundle&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We hope that this tutorial helps you reduce your app size and that you consider developing small-sized, feature-rich apps.&lt;/p&gt;

&lt;p&gt;If you know any other simpler or faster way to reduce Android app size, please share it in the comment section. &lt;/p&gt;

</description>
      <category>android</category>
      <category>tutorial</category>
      <category>androidapp</category>
      <category>appsize</category>
    </item>
  </channel>
</rss>
