Audio and Video Features

Last Updated on: 2024-06-04 10:13:21

The IPC SDK provides a range of audio and video capabilities in addition to live video streaming and playing back footage on an SD card.

Local recording

During live video streaming or record playback, the ongoing videos can be recorded on a mobile phone.

API description

Records videos and saves them to the album of a mobile phone.

- (void)startRecord;

API description

Saves video records to a specified path.

- (void)startRecordWithFilePath:(NSString *)filePath;

Parameters

Parameter Description
filePath The file path to save the recording. The video is saved as an MP4 file, and the path must use the .mp4 suffix.
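
For reference, the following is a minimal Swift sketch of recording to a custom path. It assumes the Swift-bridged name startRecord(withFilePath:) of the Objective-C method above; the path built here is only an example.

func startRecordToCustomPath() {
    // Recording is only meaningful during live video streaming or playback.
    guard self.isPreviewing || self.isPlaybacking, !self.isRecording else { return }
    // Example target path in the app sandbox; any writable path ending in .mp4 works.
    let fileName = "record-\(Int(Date().timeIntervalSince1970)).mp4"
    let filePath = (NSTemporaryDirectory() as NSString).appendingPathComponent(fileName)
    // Assumed Swift-bridged name of -startRecordWithFilePath:.
    self.camera.startRecord(withFilePath: filePath)
    self.isRecording = true
}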

API description

Stops recording and saves the records.

- (void)stopRecord;

API description

The delegate callback to invoke after video recording is started.

- (void)cameraDidStartRecord:(id<ThingSmartCameraType>)camera;

API description

The delegate callback to invoke after video recording is stopped and records are saved.

- (void)cameraDidStopRecord:(id<ThingSmartCameraType>)camera;

The call to start or stop recording might fail. In this case, the delegate method -(void)camera:didOccurredErrorAtStep:specificErrorCode: is invoked to return a specific error code.

Example

Objective-C:

- (void)startRecord {
    if (self.isRecording) {
        return;
    }
    // Video recording can be enabled only during live video streaming or playback.
    if (self.previewing || self.playbacking) {
        [self.camera startRecord];
        self.recording = YES;
    }
}

- (void)stopRecord {
    if (self.isRecording) {
        [self.camera stopRecord];
        self.recording = NO;
    }
}

- (void)cameraDidStartRecord:(id<ThingSmartCameraType>)camera {
    // Video recording has started. Update the UI here.
}

- (void)cameraDidStopRecord:(id<ThingSmartCameraType>)camera {
    // Video recording has stopped and the file has been saved. Update the UI here.
}

- (void)camera:(id<ThingSmartCameraType>)camera didOccurredErrorAtStep:(ThingCameraErrorCode)errStepCode specificErrorCode:(NSInteger)errorCode {
    // Failed to start or stop video recording.
    if (errStepCode == Thing_ERROR_RECORD_FAILED) {
        self.recording = NO;
    }
}

Swift:

func startRecord() {
    if self.isRecording {
        return
    }
    // Video recording can be enabled only during live video streaming or playback.
    guard self.isPreviewing || self.isPlaybacking else {
        return
    }
    self.camera.startRecord()
    self.isRecording = true
}

func stopRecord() {
    guard self.isRecording else {
        return
    }
    self.camera.stopRecord()
    self.isRecording = false
}

func cameraDidStartRecord(_ camera: ThingSmartCameraType!) {
    // Video recording has started. Update the UI here.
}

func cameraDidStopRecord(_ camera: ThingSmartCameraType!) {
    // Video recording has stopped and the file has been saved. Update the UI here.
}

func camera(_ camera: ThingSmartCameraType!, didOccurredErrorAtStep errStepCode: ThingCameraErrorCode, specificErrorCode errorCode: Int) {
    // Failed to start or stop video recording.
    if errStepCode == Thing_ERROR_RECORD_FAILED {
        self.isRecording = false
    }
}
  • During video recording, do not switch between video definition modes or change the audio channel switch and live video talk settings. Otherwise, the recording might fail. We recommend that you disable the controls for video definition, the audio channel, and live video talk during recording.
  • The video length might be less than the recording duration in certain conditions. For example, the video keeps freezing, or video frames are saved starting from the key frame next to the start point. The recording duration is provided for reference only.

Video screenshots

During live video streaming or record playback, users can capture screenshots of video views. The IPC SDK provides three methods to capture screenshots. The following two methods are provided by the ThingSmartCameraType object:

API description

Captures a video screenshot and saves it to the album of a mobile phone.

- (UIImage *)snapShoot;

Return value

Type Description
UIImage The UIImage object of a video screenshot. If nil is returned, the system failed to save the screenshot.

API description

Captures a video screenshot and saves it to a specific directory. This API method is not supported by P2P 1.0 devices.

- (UIImage *)snapShootSavedAtPath:(NSString *)filePath thumbnilPath:(NSString *)thumbnilPath;

Parameters

Parameter Description
filePath The path to save the screenshot.
thumbnilPath The path to save the thumbnail. Set the value to nil if this parameter is not required.

Return value

Type Description
UIImage The UIImage object of a video screenshot. If nil is returned, the system failed to save the screenshot.
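
For reference, the following is a minimal Swift sketch of saving a screenshot to a custom path. It assumes the Swift-bridged name snapShootSaved(atPath:thumbnilPath:) of the Objective-C method above; the path is only an example.

func snapShootToCustomPath() {
    guard self.isPreviewing || self.isPlaybacking else { return }
    // Example target path in the app sandbox.
    let filePath = (NSTemporaryDirectory() as NSString).appendingPathComponent("snapshot.jpg")
    // Assumed Swift-bridged name of -snapShootSavedAtPath:thumbnilPath:.
    // Pass nil for the thumbnail path if a thumbnail is not required.
    if let image = self.camera.snapShootSaved(atPath: filePath, thumbnilPath: nil) {
        // The screenshot was saved to filePath; image is the returned UIImage object.
        print("Saved screenshot with size \(image.size)")
    }
}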

API description

The ThingSmartVideoType object that renders video views also provides a screenshot API method. Only the UIImage object is returned; the screenshot is not saved automatically.

- (UIImage *)screenshot;

Return value

Type Description
UIImage The UIImage object of a video screenshot. If nil is returned, the system failed to capture the screenshot.
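
For reference, the following is a minimal Swift sketch of capturing the rendered view. It assumes the rendering view is held by your view controller as videoView, an object that conforms to ThingSmartVideoType; this property name is only an example.

func captureRenderedView() {
    // The screenshot is returned but not saved automatically.
    guard let image = self.videoView.screenshot() else { return }
    // Persist it yourself if needed, for example to the album (requires album permission).
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}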

Example

Objective-C:

- (void)snapShoot {
    // Screenshots can be captured only during video playing.
    if (self.previewing || self.playbacking) {
        if ([self.camera snapShoot]) {
            // Saves screenshots to the album of a mobile phone.
        }
    }
}

Swift:

func snapShoot() {
    guard self.isPreviewing || self.isPlaybacking else {
        return;
    }
    if let _ = self.camera.snapShoot() {
        // Saves screenshots to the album of a mobile phone.
    }
}

To enable video recording or screenshot capturing, the app must be granted access to the album of the mobile phone. Otherwise, the app might crash.

Audio settings

During live video streaming or record playback, the audio channel can be enabled or disabled. By default, it is disabled.

API description

Enables or disables the audio channel.

- (void)enableMute:(BOOL)mute forPlayMode:(ThingSmartCameraPlayMode)playMode;

Parameters

Parameter Description
mute Specifies whether to mute videos. Valid values:
  • YES: mutes videos.
  • NO: unmutes videos.
playMode The current playing mode.

API description

The delegate callback of the audio status.

- (void)camera:(id<ThingSmartCameraType>)camera didReceiveMuteState:(BOOL)isMute playMode:(ThingSmartCameraPlayMode)playMode;

Parameters

Parameter Description
camera The Camera object that enables or disables the audio channel.
isMute Indicates whether videos are muted.
playMode The current playing mode.

API description

Switches between the speaker and earpiece modes. If the return value is not 0, the switching operation failed. This API method is not supported by P2P 1.0 devices.

- (int)enableSpeaker:(BOOL)enabled;

Parameters

Parameter Description
enabled
  • YES: switches to the speaker mode.
  • NO: switches to the earpiece mode.

API description

Returns the current audio playing mode. This API method is not supported by P2P 1.0 devices. Valid values:

  • YES: the speaker mode.
  • NO: the earpiece mode.

- (BOOL)speakerEnabled;
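
For reference, the following is a minimal Swift sketch of toggling between the speaker and earpiece modes. It assumes the Swift-bridged names enableSpeaker(_:) and speakerEnabled() of the Objective-C methods above.

func toggleSpeaker() {
    // Switch to the mode opposite to the current one.
    let useSpeaker = !self.camera.speakerEnabled()
    let result = self.camera.enableSpeaker(useSpeaker)
    if result != 0 {
        // A non-zero return value means the switching operation failed.
    }
}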

After switching between live video streaming and video playback, the IPC SDK does not restore the audio status that was set in the previous playing mode. For example,

  • The audio channel is enabled during live video streaming. After the playing mode is switched to video playback, the video is still audible.
  • The audio channel is then disabled during playback. After the playing mode is switched back to live video streaming, the video is inaudible.

Therefore, after switching between the playing modes, set the audio status as expected.
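
The following Swift sketch shows one way to re-apply the expected audio status after a mode switch, using the enableMute(_:for:) call shown above; the switching logic itself is omitted.

// Call this after switching from live video streaming to playback, or the other way around.
func applyAudioState(muted: Bool, playMode: ThingSmartCameraPlayMode) {
    // The SDK does not restore the previous mode's audio status,
    // so explicitly set the expected state for the current mode.
    self.camera.enableMute(muted, for: playMode)
}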

Video bitrate

The live video bitrate can be obtained after video streaming starts.

API description

Returns the live video bitrate in Kbps.

- (double)getVideoBitRateKBPS;
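
For reference, the following is a minimal Swift sketch of reading the bitrate, assuming the Swift-bridged name getVideoBitRateKBPS() of the method above.

func logCurrentBitrate() {
    // Only meaningful after video streaming has started.
    guard self.isPreviewing else { return }
    let kbps = self.camera.getVideoBitRateKBPS()
    print("Current bitrate: \(kbps) Kbps")
}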

Live video talk

After a P2P connection is created, the live video talk feature can be enabled to talk to an IP camera (IPC). Before the talk, the app must be granted access to the microphone of the mobile phone.

Determine support for video talk

If the device is equipped with a speaker, it supports video talk. If it is equipped with both a speaker and a pickup, the video stream is audible and two-way talk is supported. You can query device capabilities to check whether the device has a speaker or a pickup.

Two-way talk

During live video streaming, the audio channel can be enabled. In this case, the audible sound is the human voice and ambient sound collected by the IPC in real time. If the audio channel from the app to the IPC is also enabled, two-way talk is implemented.

Certain IPCs might not have speakers or pickups. Such cameras do not support two-way talk.

One-way talk

One-way talk control is implemented in your app: mute the video when one-way talk is enabled, and unmute it when one-way talk is disabled.

API description

Enables the audio channel from the app to the IPC.

- (void)startTalk;

API description

Disables the audio channel from the app to the IPC.

- (void)stopTalk;

API description

The delegate callback to invoke after the audio channel from the app to the IPC is enabled.

- (void)cameraDidBeginTalk:(id<ThingSmartCameraType>)camera;

API description

The delegate callback to invoke after the audio channel from the app to the IPC is disabled.

- (void)cameraDidStopTalk:(id<ThingSmartCameraType>)camera;

Example

The following one-way talk example shows how to combine the audio switch with the live video talk API methods.

Objective-C:

- (void)startTalk {
    [self.camera startTalk];
    // Mutes the video if it is not already muted.
    if (!self.isMuted) {
        [self.camera enableMute:YES forPlayMode:ThingSmartCameraPlayModePreview];
    }
}

- (void)stopTalk {
    [self.camera stopTalk];
}

- (void)cameraDidBeginTalk:(id<ThingSmartCameraType>)camera {
    // The talk has started.
}

- (void)cameraDidStopTalk:(id<ThingSmartCameraType>)camera {
    // The talk has stopped.
    // Unmutes the video if it was muted.
    if (self.isMuted) {
        [self.camera enableMute:NO forPlayMode:ThingSmartCameraPlayModePreview];
    }
}

- (void)camera:(id<ThingSmartCameraType>)camera didReceiveMuteState:(BOOL)isMute playMode:(ThingSmartCameraPlayMode)playMode {
    // Receives the audio status changes and updates the UI.
    self.isMuted = isMute;
}

- (void)camera:(id<ThingSmartCameraType>)camera didOccurredErrorAtStep:(ThingCameraErrorCode)errStepCode specificErrorCode:(NSInteger)errorCode {
    if (errStepCode == Thing_ERROR_START_TALK_FAILED) {
        // Failed to start the talk. Unmute the video if it was muted.
        if (self.isMuted) {
            [self.camera enableMute:NO forPlayMode:ThingSmartCameraPlayModePreview];
        }
    } else if (errStepCode == Thing_ERROR_ENABLE_MUTE_FAILED) {
        // Failed to change the mute state.
    }
}

Swift:

// In the live video streaming mode.
func startTalk() {
    self.camera.startTalk()
    // Mutes the video if it is not already muted.
    if !self.isMuted {
        self.camera.enableMute(true, for: .preview)
    }
}

func stopTalk() {
    self.camera.stopTalk()
}

func cameraDidBeginTalk(_ camera: ThingSmartCameraType!) {
    // The talk has started.
}

func cameraDidStopTalk(_ camera: ThingSmartCameraType!) {
    // The talk has stopped.
    // Unmutes the video if it was muted.
    if self.isMuted {
        self.camera.enableMute(false, for: .preview)
    }
}

func camera(_ camera: ThingSmartCameraType!, didReceiveMuteState isMute: Bool, playMode: ThingSmartCameraPlayMode) {
    self.isMuted = isMute
    // Receives the audio status changes and updates the UI.
}

func camera(_ camera: ThingSmartCameraType!, didOccurredErrorAtStep errStepCode: ThingCameraErrorCode, specificErrorCode errorCode: Int) {
    if errStepCode == Thing_ERROR_START_TALK_FAILED {
        // Failed to start the talk. Unmute the video if it was muted.
        if self.isMuted {
            self.camera.enableMute(false, for: .preview)
        }
    } else if errStepCode == Thing_ERROR_ENABLE_MUTE_FAILED {
        // Failed to change the mute state.
    }
}

Switch between video definition modes

Users can switch between definition modes only during live video streaming. Currently, high definition (HD) and standard definition (SD) modes are available. A few IPCs support only one of the modes, and footage stored on the SD card is recorded in only one definition mode.

API description

Returns the definition mode of the current video images through a delegate callback.

- (void)getHD;

API description

Switches between the video definition modes. Valid values:

  • YES: switches to the HD mode.
  • NO: switches to the SD mode.

- (void)enableHD:(BOOL)hd;

Parameters

Parameter Description
hd Specifies whether to enable the high-definition (HD) video quality. Valid values:
  • YES: HD mode.
  • NO: SD mode.

API description

The delegate callback to invoke after the video definition mode is changed.

- (void)camera:(id<ThingSmartCameraType>)camera didReceiveDefinitionState:(BOOL)isHd;

Parameters

Parameter Description
camera The Camera object that switches between the definition modes.
isHd Indicates whether the HD mode is enabled. Valid values:
  • YES: HD mode.
  • NO: SD mode.

The preceding API methods are supported by P2P 1.0 devices, but they are deprecated as of v3.20.0. Use the following three API methods instead. Note that they are not supported by P2P 1.0 devices.

API description

Returns the video definition.

- (void)getDefinition;

API description

Sets the video definition.

- (void)setDefinition:(ThingSmartCameraDefinition)definition;

API description

The delegate callback to invoke after the video definition mode is changed.

- (void)camera:(id<ThingSmartCameraType>)camera definitionChanged:(ThingSmartCameraDefinition)definition;

Enum values of ThingSmartCameraDefinition

Value Description
ThingSmartCameraDefinitionProflow Bandwidth efficient streaming
ThingSmartCameraDefinitionStandard SD
ThingSmartCameraDefinitionHigh HD
ThingSmartCameraDefinitionSuper Ultra HD
ThingSmartCameraDefinitionSSuper Super ultra HD

The supported definition modes vary by device. Currently, generic devices support only SD and HD.

Example

Objective-C:

- (void)changeHD {
    ThingSmartCameraDefinition definition = self.HD ? ThingSmartCameraDefinitionStandard : ThingSmartCameraDefinitionHigh;
    [self.camera setDefinition:definition];
}

// The delegate callback to invoke after the video resolution is changed. It is also executed when live video streaming or video playback is started.
- (void)camera:(id<ThingSmartCameraType>)camera resolutionDidChangeWidth:(NSInteger)width height:(NSInteger)height {
    // Queries the current definition mode.
    [self.camera getDefinition];
}

// The delegate callback to invoke when the video definition mode is changed.
- (void)camera:(id<ThingSmartCameraType>)camera definitionChanged:(ThingSmartCameraDefinition)definition {
    self.HD = definition >= ThingSmartCameraDefinitionHigh;
}

- (void)camera:(id<ThingSmartCameraType>)camera didOccurredErrorAtStep:(ThingCameraErrorCode)errStepCode specificErrorCode:(NSInteger)errorCode {
    if (errStepCode == Thing_ERROR_ENABLE_HD_FAILED) {
        // Failed to switch between the video definition modes.
    }
}

Swift:

func changeHD() {
    let definition = self.isHD ? ThingSmartCameraDefinition.standard : ThingSmartCameraDefinition.high
    self.camera.setDefinition(definition)
}

// The delegate callback to invoke after the video resolution is changed. It is also executed when live video streaming or video playback is started.
func camera(_ camera: ThingSmartCameraType!, resolutionDidChangeWidth width: Int, height: Int) {
    // Queries the current definition mode.
    self.camera.getDefinition()
}

func camera(_ camera: ThingSmartCameraType!, definitionChanged definition: ThingSmartCameraDefinition) {
    self.isHD = definition.rawValue >= ThingSmartCameraDefinition.high.rawValue
}

func camera(_ camera: ThingSmartCameraType!, didOccurredErrorAtStep errStepCode: ThingCameraErrorCode, specificErrorCode errorCode: Int) {
    if errStepCode == Thing_ERROR_ENABLE_HD_FAILED {
        // Failed to switch between the video definition modes.
    }
}

Raw stream data

The IPC SDK provides a delegate callback that returns raw stream data, including the YUV data of video frames. YUV 420SP is used as the color encoding format, which corresponds to kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange on iOS.

API description

The delegate callback to invoke when raw stream data of video frames is received.

- (void)camera:(id<ThingSmartCameraType>)camera thing_didReceiveVideoFrame:(CMSampleBufferRef)sampleBuffer frameInfo:(ThingSmartVideoFrameInfo)frameInfo;

Parameters

Parameter Description
camera The Camera object that receives video data.
sampleBuffer The YUV data of video frames.
frameInfo The information about video frames.

ThingSmartVideoFrameInfo structure

Field Type Description
nWidth int The width of video images.
nHeight int The height of video images.
nFrameRate int The frame rate of the video.
nTimeStamp unsigned long long The timestamp of a video frame.
nDuration unsigned long long The total duration of the video attached to an alert. Unit: milliseconds.
nProgress unsigned long long The time point of a video frame in the video attached to an alert. Unit: milliseconds.

You can render video images with your own method, or further process them. For this purpose, set the autoRender property of the ThingSmartCameraType object to NO and implement the delegate method. This way, the IPC SDK does not automatically render video images.

You can cast sampleBuffer to CVPixelBufferRef. To process video frame data asynchronously, remember to retain the pixel buffer. Otherwise, the video frame data is released after the delegate method returns, which causes a dangling pointer during asynchronous processing.

Example

Objective-C:

- (void)camera:(id<ThingSmartCameraType>)camera thing_didReceiveVideoFrame:(CMSampleBufferRef)sampleBuffer frameInfo:(ThingSmartVideoFrameInfo)frameInfo {
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)sampleBuffer;
    // Retains pixelBuffer to avoid unexpected release.
    CVPixelBufferRetain(pixelBuffer);
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Processes and renders pixelBuffer.
        // ...
        // Releases pixelBuffer.
        CVPixelBufferRelease(pixelBuffer);
    });
}

Swift:

func camera(_ camera: ThingSmartCameraType!, thing_didReceiveVideoFrame sampleBuffer: CMSampleBuffer!, frameInfo: ThingSmartVideoFrameInfo) {
    // Processes and renders video data.
}

Intelligent video analytics

If the IPC detects an object in motion during live video streaming, the intelligent video analytics (IVA) feature allows the object to be framed automatically in white on the view.

To achieve this, the IVA feature must first be enabled on the IPC. The IPC then reports the coordinates of the object along with the video frames. You can use the data point (DP) ID 198 (ipc_object_outline) to enable this feature. For more information about device control API methods, see Device Control.

After IVA is enabled on the IPC, this feature must also be enabled in the IPC SDK during live video streaming. This allows the SDK to frame the object in white on the view based on the received coordinates of the object.

The details of the IVA DPs are not described in this topic. Follow the respective convention on the device.

Enable IVA

API description

Enables or disables the IVA feature. This API method is usually called after startPreview for video previewing or setDefinition: for definition settings.

- (void)setOutLineEnable:(BOOL)enable;

Parameters

Parameter Description
enable Specifies whether to enable IVA.
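
For reference, the following is a minimal Swift sketch of enabling IVA once live video streaming has started. It assumes the Swift-bridged name setOutLineEnable(_:) of the method above.

// Call this after startPreview succeeds, or after setDefinition: as noted above.
func enableObjectOutline() {
    // Lets the SDK draw frames based on the object coordinates reported over SEI.
    self.camera.setOutLineEnable(true)
}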

Configure IVA properties

You can configure the properties of IVA to control its style, such as the frame color, brush width, and flash frequency.

API description

Defines the properties of IVA as a JSON string in the specified format, based on the Supplemental Enhancement Information (SEI) reported by the device.

- (int)setSmartRectFeatures:(NSString *)features;

Parameters

The parameter features is a JSON string in the following format:

{
    "SmartRectFeature":[
        {
            "type":0,
            "index":0,
            "brushWidth":1,
            "flashFps":{
                "drawKeepFrames":2,
                "stopKeepFrames":2
            },
            "rgb":0xFF0000,
            "shape":0
        },
        {
            "type":0,
            "index":1,
            "brushWidth":2,
            "flashFps":{
                "drawKeepFrames":3,
                "stopKeepFrames":2
            },
            "rgb":0x00FF00,
            "shape":1
        }
    ]
}

Parameter Type Description
SmartRectFeature Array (Required) The identifier in the fixed format of arrays to represent multiple frame settings.
type Int The type of frame. Valid values:
  • 0: smart bounding box (default)
  • 1: line crossing detection
index Int The index of the frame, corresponding to each id in od of SEI.
shape Int The shape of the rectangular frame. Valid values:
  • 0: closed rectangle
  • 1: four corners only
rgb Int The color of the rectangular frame, represented by the red-green-blue (RGB) color model. Value range: 0x000000 to 0xFFFFFF. Default value: 0xFC4747.
brushWidth Int The brush width of the rectangular frame. Valid values:
  • 0: thin
  • 1: medium
  • 2: thick
flashFps Object The flash frequency of the rectangular frame. Valid values:
  • drawKeepFrames: the number of frames with drawing on.
  • stopKeepFrames: the number of frames with drawing off.
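
For reference, the following is a minimal Swift sketch of building the features JSON string and applying it. It assumes the Swift-bridged name setSmartRectFeatures(_:) of the method above; the property values are sample values only.

func applyOutlineStyle() {
    // One sample frame setting: a thin, red, closed rectangle.
    let features: [String: Any] = [
        "SmartRectFeature": [[
            "type": 0,
            "index": 0,
            "brushWidth": 0,
            "flashFps": ["drawKeepFrames": 2, "stopKeepFrames": 2],
            "rgb": 0xFF0000,
            "shape": 0
        ]]
    ]
    guard let data = try? JSONSerialization.data(withJSONObject: features),
          let json = String(data: data, encoding: .utf8) else { return }
    let result = self.camera.setSmartRectFeatures(json)
    if result != 0 {
        // A non-zero return value is assumed to indicate failure.
    }
}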

Define SEI protocol

This protocol governs communication with the device. The IPC SDK parses the data carried over SEI and draws the IVA frames at the reported positions with the specified properties.

{
    "AGTX":{
        "time":6885,
        "chn":0,
        "key":1,
        "iva":{
            "od":[
                {
                    "obj":{
                        "id":0,
                        "type":1,
                        "twinkle":1,
                        "rect":[
                            0,0,
                            25,25,
                            50,50,
                            80,80,
                            100,100
                        ],
                        "vel":[0,10],
                        "cat":"PEDESTRIAN"
                    }
                },
                {
                    "obj":{
                        "id":1,
                        "type":1,
                        "twinkle":1,
                        "rect":[
                            0,0,
                            100,100
                        ],
                        "vel":[0,10],
                        "cat":"PEDESTRIAN"
                    }
                }
            ]
        }
    }
}

The following table describes the parsed parameters in iva reported by the device.

Parameter Description
id The index of each frame.
type The type of frame. Valid values:
  • 0: smart bounding box
  • 1: four corners only
twinkle The flash setting. Valid values:
  • 0: The frame is displayed, but flash is disabled.
  • 1: The frame flashes at the frequency passed in.
rect Each pair of numbers represents the coordinates of a point. All points are arranged in clockwise order. The SDK draws these points into a closed polygon. Two points are used to represent a rectangular box.

The coordinates of the rect points in the SEI protocol are typically passed from the app to the device in the format of the device control data point (DP). Coordinate anchor data follows these rules:

  • An even-numbered position represents the numerator of a percentage value on the horizontal (x) axis.

  • An odd-numbered position represents the numerator of a percentage value on the vertical (y) axis.

    The maximum value is 100, and even and odd positions appear in pairs.
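
The following Swift sketch illustrates these rules by converting normalized points (0.0 to 1.0) into the flat rect array of percentage numerators; it is a hypothetical helper, not an SDK API.

// Even positions hold x percentages, odd positions hold y percentages (0 to 100).
func makeRectArray(from points: [CGPoint]) -> [Int] {
    return points.flatMap { point in
        [Int((point.x * 100).rounded()), Int((point.y * 100).rounded())]
    }
}

// Two points can describe a rectangular box, for example the top-left and bottom-right corners.
let rect = makeRectArray(from: [CGPoint(x: 0.25, y: 0.25), CGPoint(x: 0.75, y: 0.75)])
// rect == [25, 25, 75, 75]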

Video rendering

After video decoding, the rendered images can be further processed with the following settings: stretching or scaling, horizontal or vertical mirroring, and rotation by 90, 180, or 270 degrees.

API description

- (int)setDeviceFeatures:(NSDictionary *)features;

Parameters

The parameter features is a dictionary in the following format:

{
    "DecPostProcess":{
        "video":[
            {
                "restype":"4",
                "oldres":"944*1080",
                "newres":"1920*1080"
            },
            {
                "restype":"2",
                "oldres":"944*1080",
                "newres":"1920*1080"
            }
        ],
        "mirror":0,
        "rotation":2
    }
}

Parameter Description
DecPostProcess (Required) The identifier in the fixed format.
video The array of video resolution settings. The following settings are supported:
  • restype: the type of video resolution.
  • oldres: the size of the original video reported by the device.
  • newres: the expected size of the video after stretching or scaling.
mirror The mirroring setting. Valid values:
  • 0: unchanged.
  • 1: horizontal mirroring.
  • 2: vertical mirroring.
rotation The angle of rotation. Valid values:
  • 0: no rotation.
  • 1: rotate 90 degrees.
  • 2: rotate 180 degrees.
  • 3: rotate 270 degrees.
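
For reference, the following is a minimal Swift sketch that rotates the decoded image by 180 degrees and leaves mirroring unchanged, assuming the Swift-bridged name setDeviceFeatures(_:) of the method above; the video resolution settings are omitted here.

func configureRendering() {
    let features: [String: Any] = [
        "DecPostProcess": [
            "mirror": 0,    // Unchanged.
            "rotation": 2   // Rotate 180 degrees.
        ]
    ]
    let result = self.camera.setDeviceFeatures(features)
    if result != 0 {
        // A non-zero return value is assumed to indicate failure.
    }
}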

Device capabilities

The ThingSmartCameraAbility class can be used to parse device configurations and get basic device capabilities.

Property Description
defaultDefinition Indicates the default definition of live video streaming.
videoNum Indicates the number of streams supported by the device.
  • If the value is 1, the device only supports one definition mode. Switching between definition modes is not supported. The defaultDefinition property indicates the only definition mode.
  • If the value is 2, the device supports SD and HD.
isSupportSpeaker Indicates whether the device is equipped with a speaker. If so, the device supports video talk.
isSupportPickup Indicates whether the device is equipped with a pickup. If so, the audio channel can be enabled when video streams are previewed on the app.
rowData Returns raw data of P2P configurations.

API description

Creates the device capabilities class object based on a device data model. This API method is called after a P2P connection is created. After the initial P2P connection, the raw data of P2P configurations is cached in a local sandbox.

+ (instancetype)cameraAbilityWithDeviceModel:(ThingSmartDeviceModel *)deviceModel;
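
For reference, the following is a minimal Swift sketch of reading the capabilities after the P2P connection is created. The Swift-bridged name cameraAbility(with:) is assumed here; it may be imported differently, for example as an initializer.

func checkCapabilities(of deviceModel: ThingSmartDeviceModel) {
    // Assumed Swift-bridged name of +cameraAbilityWithDeviceModel:.
    let ability = ThingSmartCameraAbility.cameraAbility(with: deviceModel)
    // A device with both a speaker and a pickup supports two-way talk.
    let supportsTwoWayTalk = ability.isSupportSpeaker && ability.isSupportPickup
    // A device with two streams supports switching between SD and HD.
    let supportsDefinitionSwitch = ability.videoNum >= 2
    print("Two-way talk: \(supportsTwoWayTalk), definition switching: \(supportsDefinitionSwitch)")
}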