Last Updated on: 2024-09-14 06:08:31
The Smart Camera SDK provides a range of audio and video capabilities in addition to live streaming and playback from the memory card.
API description
Bind with a custom video rendering view.
- (void)registerVideoRenderView:(UIView<ThingSmartVideoViewType> *)videoView;
API description
Unbind from a custom video rendering view.
- (void)uninstallVideoRenderView:(UIView<ThingSmartVideoViewType> *)videoView;
Parameter | Description |
---|---|
videoView | A custom video rendering view, inheriting directly from ThingSmartMediaVideoView . To fully customize the rendering view, you must follow the ThingSmartVideoViewType protocol. |
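A minimal binding flow might look like the following Swift sketch. CustomVideoView is a hypothetical UIView subclass conforming to ThingSmartVideoViewType, and camera is the connected ThingSmartCameraType object:

```swift
// CustomVideoView is a hypothetical view class for illustration only.
let videoView = CustomVideoView(frame: view.bounds)
view.addSubview(videoView)

// Bind the view so the SDK renders decoded video frames into it.
camera.registerVideoRenderView(videoView)

// Unbind before the view is torn down, for example in deinit.
camera.uninstallVideoRenderView(videoView)
```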
API description
Bind with a custom local video rendering view.
- (void)bindLocalVideoView:(UIView<ThingSmartVideoViewType> *)videoView;
API description
Unbind from a custom local video rendering view.
- (void)unbindLocalVideoView:(UIView<ThingSmartVideoViewType> *)videoView;
Parameter | Description |
---|---|
videoView | A custom video rendering view, inheriting directly from ThingSmartMediaVideoView . To fully customize the rendering view, you must follow the ThingSmartVideoViewType protocol. |
During live streaming or record playback, the ongoing video can be recorded on a mobile phone.
API description
Record videos and save them to the specified location.
- (void)startRecordWithFilePath:(NSString *)filePath;
Parameters
Parameter | Description |
---|---|
filePath | The file path for the recording. Videos are saved as MP4 files, so the path must end with .mp4 . |
API description
Stop recording and save the records.
- (void)stopRecord;
API description
The delegate callback to be invoked when the video recording is started.
- (void)cameraDidStartRecord:(id<ThingSmartCameraType>)camera;
API description
The delegate callback to be invoked when video recording is stopped and records are saved.
- (void)cameraDidStopRecord:(id<ThingSmartCameraType>)camera;
The call to start or stop recording might fail. In this case, the error message is returned through the delegate method - (void)camera:didOccurredErrorAtStep:specificErrorCode:extErrorCodeInfo:.
Example
Objective-C:
- (void)startRecord {
if (self.isRecording) {
return;
}
// Recording can be started only during live streaming or playback.
if (self.previewing || self.playbacking) {
[self.camera startRecordWithFilePath:filePath];
self.recording = YES;
}
}
- (void)stopRecord {
if (self.isRecording) {
[self.camera stopRecord];
self.recording = NO;
}
}
- (void)cameraDidStartRecord:(id<ThingSmartCameraType>)camera {
// Video recording has started. Update the UI.
}
- (void)cameraDidStopRecord:(id<ThingSmartCameraType>)camera {
// Video recording has stopped and the record has been saved.
}
- (void)camera:(id<ThingSmartCameraType>)camera didOccurredErrorAtStep:(ThingCameraErrorCode)errStepCode specificErrorCode:(NSInteger)errorCode extErrorCodeInfo:(id<ThingSmartCameraExtErrorCodeInfo>)extErrorCodeInfo{
// Failed to start or stop video recording.
if (errStepCode == Thing_ERROR_RECORD_FAILED) {
self.recording = NO;
}
}
Swift:
func startRecord() {
if self.isRecording {
return
}
guard self.isPreviewing || self.isPlaybacking else {
return
}
self.camera.startRecord(withFilePath: filePath)
self.isRecording = true
}
func stopRecord() {
guard self.isRecording else {
return
}
self.camera.stopRecord()
self.isRecording = false
}
func cameraDidStartRecord(_ camera: ThingSmartCameraType!) {
// Video recording has started. Update the UI.
}
func cameraDidStopRecord(_ camera: ThingSmartCameraType!) {
// Video recording has stopped and the record has been saved.
}
func camera(_ camera: ThingSmartCameraType!, didOccurredErrorAtStep errStepCode: ThingCameraErrorCode, specificErrorCode errorCode: Int, extErrorCodeInfo: ThingSmartCameraExtErrorCodeInfo!) {
// Failed to start or stop video recording.
if errStepCode == Thing_ERROR_RECORD_FAILED {
self.isRecording = false
}
}
During live streaming or record playback, users can take a screenshot from the ongoing video. Three methods are available to take screenshots. Two of them are provided by the ThingSmartCameraType object.
API description
Take a screenshot from the video and save it to the album of the mobile phone.
- (UIImage *)snapShoot;
Return value
Type | Description |
---|---|
UIImage | The UIImage object of the video screenshot. If nil is returned, the system fails to save the screenshot. |
API description
Take a screenshot from the video and save it to the specified location. This API method is not supported by P2P 1.0 devices.
- (UIImage *)snapShootSavedAtPath:(NSString *)filePath thumbnilPath:(NSString *)thumbnilPath;
Parameters
Parameter | Description |
---|---|
filePath | The path in which the screenshot is stored. |
thumbnilPath | The path to save the thumbnail. Set the value to nil if this parameter is not required. |
Return value
Type | Description |
---|---|
UIImage | The UIImage object of the video screenshot. If nil is returned, the system fails to save the screenshot. |
API description
The screenshot method of the video rendering view ThingSmartVideoType only returns the UIImage object without automatically saving the image.
- (UIImage *)screenshot;
Return value
Type | Description |
---|---|
UIImage | The UIImage object of the video screenshot. If nil is returned, the screenshot failed. |
Example
Objective-C:
- (void)snapShoot {
// Screenshots can be captured only during video playing.
if (self.previewing || self.playbacking) {
if ([self.camera snapShoot]) {
// Saves screenshots to the album of a mobile phone.
}
}
}
Swift:
func snapShoot() {
guard self.isPreviewing || self.isPlaybacking else {
return
}
if let _ = self.camera.snapShoot() {
// Saves screenshots to the album of a mobile phone.
}
}
Before saving a screenshot to the phone’s album, make sure the app has permission to access the album. If not, the app may crash.
During live streaming or record playback, users can mute or unmute the video sound. The video is muted by default.
API description
Mute or unmute the video sound.
- (void)enableMute:(BOOL)mute forPlayMode:(ThingSmartCameraPlayMode)playMode;
Parameters
Parameter | Description |
---|---|
mute | Specifies whether to mute the video. |
playMode | The current playing mode. |
API description
The delegate callback for muting or unmuting the video sound.
- (void)camera:(id<ThingSmartCameraType>)camera didReceiveMuteState:(BOOL)isMute playMode:(ThingSmartCameraPlayMode)playMode;
Parameters
Parameter | Description |
---|---|
camera | The Camera object that mutes or unmutes the video sound. |
isMute | The current mute state. |
playMode | The current playing mode. |
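A mute toggle built on these two APIs could be sketched as follows, assuming camera holds the ThingSmartCameraType object and isMuted mirrors the state reported by the delegate:

```swift
func toggleMute() {
    // Request the opposite of the current state; the outcome arrives
    // in the didReceiveMuteState delegate callback below.
    camera.enableMute(!isMuted, for: .preview)
}

func camera(_ camera: ThingSmartCameraType!, didReceiveMuteState isMute: Bool, playMode: ThingSmartCameraPlayMode) {
    // Update the local state and the UI only after the device confirms.
    isMuted = isMute
}
```

Waiting for the delegate callback before updating the UI keeps the app state consistent with the device even if the request fails.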
API description
Switch between the speaker and earpiece modes. If the return value is not 0, the switching operation fails. This API method is not supported by P2P 1.0 devices.
- (int)enableSpeaker:(BOOL)enabled;
Parameters
Parameter | Description |
---|---|
enabled | Specifies whether to use the speaker. YES: turn on the speaker. NO: use the earpiece. |
API description
Get the current sound output mode. YES indicates the speaker, and NO indicates the earpiece. This API method is not supported by P2P 1.0 devices.
- (BOOL)speakerEnabled;
When the system switches between live streaming and video playback, the Smart Camera SDK does not retain the mute state from the previous mode.
Therefore, users need to manually adjust the sound settings after switching between playing modes.
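The speaker switch can be combined with the query API in a small helper. This is a sketch assuming the Objective-C methods bridge to Swift as enableSpeaker(_:) and speakerEnabled():

```swift
func toggleSpeaker() {
    // speakerEnabled() returns true for the speaker and false for the earpiece.
    let useSpeaker = !camera.speakerEnabled()
    // A non-zero return value means the switch failed (for example, on P2P 1.0 devices).
    if camera.enableSpeaker(useSpeaker) != 0 {
        // Keep the UI unchanged and surface an error if needed.
    }
}
```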
API description
Customize the audio effect. If the return value is less than 0, the setting failed.
- (int)setAudioEffectType:(ThingCameraAudioEffectType)audioEffectType;
Parameters
Parameter | Description |
---|---|
audioEffectType | The audio effect type. |
ThingCameraAudioEffectType
Enum value | Description |
---|---|
ThingCameraAudioEffectTypeNone | The original voice. |
ThingCameraAudioEffectTypeBrother | The voice of a boy. |
ThingCameraAudioEffectTypeUncle | The voice of an uncle. |
ThingCameraAudioEffectTypeRobot | The voice of a robot. |
ThingCameraAudioEffectTypeLolita | The voice of a girl. |
Get the live video bitrate after streaming starts.
API description
Get the live video bitrate.
- (double)getVideoBitRateKBPS;
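For example, a hypothetical bitRateLabel could be refreshed once per second after streaming starts; the polling interval is an assumption, not an SDK requirement:

```swift
// Poll the live bitrate periodically and show it in the UI.
bitRateTimer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
    guard let self = self else { return }
    let kbps = self.camera.getVideoBitRateKBPS()
    self.bitRateLabel.text = String(format: "%.1f KBPS", kbps)
}
```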
After a P2P connection is created, users can initiate a talk to the visitor using the app. Before the talk, the app must be allowed to access the microphone of the mobile phone.
If the device is equipped with a speaker, it supports one-way talk. If the device is equipped with both a speaker and a pickup, it supports two-way talk. You can query device capabilities to check whether the device supports a speaker or pickup.
During live streaming, the audio from the video is the human voices and ambient sounds collected by the camera in real time. Turning on the audio channel between the app and the camera enables two-way talk.
Cameras without speakers or pickups do not support two-way talk.
The control of one-way talk is subject to your implementation. Mute the video when the one-way talk starts, and unmute it when the talk ends.
API description
Turn on the audio channel from the app to the camera.
- (void)startTalk;
API description
Turn off the audio channel from the app to the camera.
- (void)stopTalk;
API description
The delegate callback to be invoked when the audio channel from the app to the camera is turned on.
- (void)cameraDidBeginTalk:(id<ThingSmartCameraType>)camera;
API description
The delegate callback to be invoked when the audio channel from the app to the camera is turned off.
- (void)cameraDidStopTalk:(id<ThingSmartCameraType>)camera;
Example
The following example of one-way talk shows the API calls for the audio switch and live talk.
Objective-C:
- (void)startTalk {
[self.camera startTalk];
// Mutes the video sound if it is not muted yet, so the talk is not mixed with the camera audio.
if (!self.isMuted) {
[self.camera enableMute:YES forPlayMode:ThingSmartCameraPlayModePreview];
}
}
- (void)stopTalk {
[self.camera stopTalk];
}
- (void)cameraDidBeginTalk:(id<ThingSmartCameraType>)camera {
// Starts live video talk.
}
- (void)cameraDidStopTalk:(id<ThingSmartCameraType>)camera {
// Stops live video talk.
// Unmutes the video sound if it was muted for the talk.
if (self.isMuted) {
[self.camera enableMute:NO forPlayMode:ThingSmartCameraPlayModePreview];
}
}
- (void)camera:(id<ThingSmartCameraType>)camera didReceiveMuteState:(BOOL)isMute playMode:(ThingSmartCameraPlayMode)playMode {
// Receives the audio status changes and updates the UI.
self.isMuted = isMute;
}
- (void)camera:(id<ThingSmartCameraType>)camera didOccurredErrorAtStep:(ThingCameraErrorCode)errStepCode specificErrorCode:(NSInteger)errorCode {
if (errStepCode == Thing_ERROR_START_TALK_FAILED) {
// Failed to start video talk. Enable the audio channel again.
if (self.isMuted) {
[self.camera enableMute:NO forPlayMode:ThingSmartCameraPlayModePreview];
}
}
else if (errStepCode == Thing_ERROR_ENABLE_MUTE_FAILED) {
// Failed to enable the muted state.
}
}
- (void)camera:(id<ThingSmartCameraType>)camera didOccurredErrorAtStep:(ThingCameraErrorCode)errStepCode specificErrorCode:(NSInteger)errorCode extErrorCodeInfo:(id<ThingSmartCameraExtErrorCodeInfo>)extErrorCodeInfo{
if (errStepCode == Thing_ERROR_START_TALK_FAILED) {
// Failed to start video talk. Enable the audio channel again.
if (self.isMuted) {
[self.camera enableMute:NO forPlayMode:ThingSmartCameraPlayModePreview];
}
}
else if (errStepCode == Thing_ERROR_ENABLE_MUTE_FAILED) {
// Failed to enable the muted state.
}
}
// Sets the custom audio effect to a boy's voice.
- (void)startBrotherAudioEffect {
[self.camera setAudioEffectType:ThingCameraAudioEffectTypeBrother];
}
// Stops the audio effect.
- (void)stopAudioEffect {
[self.camera setAudioEffectType:ThingCameraAudioEffectTypeNone];
}
Swift:
// The live video streaming mode.
func startTalk() {
self.camera.startTalk()
// Mutes the video sound if it is not muted yet.
if !self.isMuted {
self.camera.enableMute(true, for: .preview)
}
}
func stopTalk() {
self.camera.stopTalk()
}
func cameraDidBeginTalk(_ camera: ThingSmartCameraType!) {
// Starts live video talk.
}
func cameraDidStopTalk(_ camera: ThingSmartCameraType!) {
// Stops live video talk.
if self.isMuted {
self.camera.enableMute(false, for: .preview)
}
}
func camera(_ camera: ThingSmartCameraType!, didReceiveMuteState isMute: Bool, playMode: ThingSmartCameraPlayMode) {
self.isMuted = isMute
// Receives the audio status changes and updates the UI.
}
func camera(_ camera: ThingSmartCameraType!, didOccurredErrorAtStep errStepCode: ThingCameraErrorCode, specificErrorCode errorCode: Int) {
if errStepCode == Thing_ERROR_START_TALK_FAILED {
// Failed to start the talk. Unmute the video sound again.
if self.isMuted {
self.camera.enableMute(false, for: .preview)
}
} else if errStepCode == Thing_ERROR_ENABLE_MUTE_FAILED {
// Failed to change the mute state.
}
}
func camera(_ camera: ThingSmartCameraType!, didOccurredErrorAtStep errStepCode: ThingCameraErrorCode, specificErrorCode errorCode: Int, extErrorCodeInfo: ThingSmartCameraExtErrorCodeInfo!) {
if errStepCode == Thing_ERROR_START_TALK_FAILED {
// Failed to start the talk. Unmute the video sound again.
if self.isMuted {
self.camera.enableMute(false, for: .preview)
}
} else if errStepCode == Thing_ERROR_ENABLE_MUTE_FAILED {
// Failed to change the mute state.
}
}
// Sets the custom audio effect to a boy's voice.
func startBrotherAudioEffect() {
self.camera.setAudioEffect(ThingCameraAudioEffectType.brother)
}
// Stops the audio effect.
func stopAudioEffect() {
self.camera.setAudioEffect(ThingCameraAudioEffectType.none)
}
Users can switch between definitions during live streaming. Currently, only high definition (HD) and standard definition (SD) are available, and they are supported only for live streaming. Some cameras only support one definition. Only one video definition mode is supported for storing footage on the memory card.
API description
Get the video definition.
- (void)getDefinition;
API description
Set the video definition.
- (void)setDefinition:(ThingSmartCameraDefinition)definition;
API description
The delegate callback to be invoked when the video definition is changed.
- (void)camera:(id<ThingSmartCameraType>)camera definitionChanged:(ThingSmartCameraDefinition)definition;
Enum values of ThingSmartCameraDefinition
Value | Description |
---|---|
ThingSmartCameraDefinitionProflow | Bandwidth efficient streaming |
ThingSmartCameraDefinitionStandard | Standard definition (SD) |
ThingSmartCameraDefinitionHigh | High definition (HD) |
ThingSmartCameraDefinitionSuper | Ultra HD |
ThingSmartCameraDefinitionSSuper | Super ultra HD |
ThingSmartCameraDefinitionAudioOnly | Audio-only mode |
The supported definitions vary depending on the device. Ordinary devices currently only support SD and HD.
Example
Objective-C:
- (void)changeHD {
ThingSmartCameraDefinition definition = self.HD ? ThingSmartCameraDefinitionStandard : ThingSmartCameraDefinitionHigh;
[self.camera setDefinition:definition];
}
// The delegate callback to invoke after the video resolution is changed. It is also executed when live video streaming or video playback is started.
- (void)camera:(id<ThingSmartCameraType>)camera resolutionDidChangeWidth:(NSInteger)width height:(NSInteger)height {
// Returns the current definition mode.
[self.camera getDefinition];
}
// The delegate callback to invoke when the video definition mode is changed.
- (void)camera:(id<ThingSmartCameraType>)camera definitionChanged:(ThingSmartCameraDefinition)definition {
self.HD = definition >= ThingSmartCameraDefinitionHigh;
}
- (void)camera:(id<ThingSmartCameraType>)camera didOccurredErrorAtStep:(ThingCameraErrorCode)errStepCode specificErrorCode:(NSInteger)errorCode {
if (errStepCode == Thing_ERROR_ENABLE_HD_FAILED) {
// Failed to switch between the video definition modes.
}
}
- (void)camera:(id<ThingSmartCameraType>)camera didOccurredErrorAtStep:(ThingCameraErrorCode)errStepCode specificErrorCode:(NSInteger)errorCode extErrorCodeInfo:(id<ThingSmartCameraExtErrorCodeInfo>)extErrorCodeInfo{
if (errStepCode == Thing_ERROR_ENABLE_HD_FAILED) {
// Failed to switch between the video definition modes.
}
}
Swift:
func changeHD() {
let definition = self.isHD ? ThingSmartCameraDefinition.standard : ThingSmartCameraDefinition.high
self.camera.setDefinition(definition)
}
// The delegate callback to invoke after the video resolution is changed. It is also executed when live video streaming or video playback is started.
func camera(_ camera: ThingSmartCameraType!, resolutionDidChangeWidth width: Int, height: Int) {
// Returns the current definition mode.
self.camera.getDefinition()
}
func camera(_ camera: ThingSmartCameraType!, definitionChanged definition: ThingSmartCameraDefinition) {
self.isHD = definition.rawValue >= ThingSmartCameraDefinition.high.rawValue
}
func camera(_ camera: ThingSmartCameraType!, didOccurredErrorAtStep errStepCode: ThingCameraErrorCode, specificErrorCode errorCode: Int) {
if errStepCode == Thing_ERROR_ENABLE_HD_FAILED {
// Failed to switch between the video definition modes.
}
}
func camera(_ camera: ThingSmartCameraType!, didOccurredErrorAtStep errStepCode: ThingCameraErrorCode, specificErrorCode errorCode: Int, extErrorCodeInfo: ThingSmartCameraExtErrorCodeInfo!) {
if errStepCode == Thing_ERROR_ENABLE_HD_FAILED {
// Failed to switch between the video definition modes.
}
}
The Smart Camera SDK provides the callback that returns raw stream data, including the YUV data of video frames.
API description
The delegate callback for the raw stream data.
- (void)camera:(id<ThingSmartCameraType>)camera thing_didReceiveVideoFrame:(CMSampleBufferRef)sampleBuffer frameInfo:(ThingSmartVideoFrameInfo)frameInfo;
Parameters
Parameter | Description |
---|---|
camera | The Camera object that receives video data. |
sampleBuffer | The YUV data of video frames. |
frameInfo | The information about video frames. |
ThingSmartVideoFrameInfo
struct
Field | Type | Description |
---|---|---|
nWidth | int | The width of video images. |
nHeight | int | The height of video images. |
nFrameRate | int | The frame rate of the video. |
nTimeStamp | unsigned long long | The timestamp of a video frame. |
nDuration | unsigned long long | The total duration of the video attached to an alert, in milliseconds. |
nProgress | unsigned long long | The time point of a video frame in the video attached to an alert, in milliseconds. |
You can render video images with your own method, or further process video images. For this purpose, set the autoRender property of the ThingSmartCameraType object to NO, and implement this delegate method. This way, the Smart Camera SDK does not automatically render video images.
You can cast sampleBuffer to CVPixelBufferRef. To process the video frame data asynchronously, remember to retain the pixel buffer. Otherwise, the frame data is released after the delegate method returns, which causes a dangling pointer exception during asynchronous processing.
Example
Objective-C:
- (void)camera:(id<ThingSmartCameraType>)camera thing_didReceiveVideoFrame:(CMSampleBufferRef)sampleBuffer frameInfo:(ThingSmartVideoFrameInfo)frameInfo {
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)sampleBuffer;
// Retains pixelBuffer to avoid unexpected release.
CVPixelBufferRetain(pixelBuffer);
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
// Processes and renders pixelBuffer.
// ...
// Releases pixelBuffer.
CVPixelBufferRelease(pixelBuffer);
});
}
Swift:
func camera(_ camera: ThingSmartCameraType!, thing_didReceiveVideoFrame sampleBuffer: CMSampleBuffer!, frameInfo: ThingSmartVideoFrameInfo) {
// Processes and renders video data.
}
When the camera detects an object in motion during live streaming, the intelligent video analytics (IVA) feature allows the object to be framed automatically in white on the view.
To achieve this purpose, the IVA feature must be enabled for the camera first. The device will then report the coordinates of the object along with the video frames. You can use the data point (DP) ID 198 (ipc_object_outline) to enable this feature. For more information about device control API methods, see Device Control.
After IVA is enabled for the camera, this feature must also be enabled for the Smart Camera SDK during live streaming. This allows the SDK to frame the object in white on the view based on the received coordinates of the object.
The details of IVA DPs will not be described in this topic. You can follow the respective convention on the device.
API description
Enable or disable the IVA feature. Call this method after startPreview for video previewing or setDefinition: for definition settings.
- (void)setOutLineEnable:(BOOL)enable;
Parameters
Parameter | Description |
---|---|
enable | Specifies whether to enable IVA. |
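One way to satisfy the ordering requirement is to enable outlining from the resolution callback, which fires after startPreview takes effect. The choice of callback is an assumption; any point after the stream is up works:

```swift
func camera(_ camera: ThingSmartCameraType!, resolutionDidChangeWidth width: Int, height: Int) {
    // The stream is up, so it is now safe to turn on object outlining.
    self.camera.setOutLineEnable(true)
}
```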
You can configure the properties of IVA to control its style, such as the frame color, brush width, and flash frequency.
API description
Define the properties of IVA in the specified format of JSON strings based on the Supplemental Enhancement Information (SEI) reported by the device.
- (int)setSmartRectFeatures:(NSString *)features;
Parameters
The parameter features is a JSON string in the following format:
{
"SmartRectFeature":[
{
"type":0,
"index":0,
"brushWidth":1,
"flashFps":{
"drawKeepFrames":2,
"stopKeepFrames":2
},
"rgb":0xFF0000,
"shape":0
},
{
"type":0,
"index":1,
"brushWidth":2,
"flashFps":{
"drawKeepFrames":3,
"stopKeepFrames":2
},
"rgb":0x00FF00,
"shape":1
}
]
}
Parameter | Type | Description |
---|---|---|
SmartRectFeature | Array | (Required) The identifier in the fixed format of arrays to represent multiple frame settings. |
type | Int | The type of frame. |
index | Int | The index of the frame, corresponding to each ID in od of SEI. |
shape | Int | The shape of the rectangular frame. |
rgb | Int | The color of the rectangular frame, represented by the red-green-blue (RGB) color model. Value range: 0x000000 to 0xFFFFFF . Default value: 0xFC4747 . |
brushWidth | Int | The brush stroke width of the rectangular frame. |
flashFps | String | The flash frequency of the rectangular frame, specified by drawKeepFrames and stopKeepFrames. |
Define SEI protocol
The protocol that governs communication with the device. The Smart Camera SDK parses the data over SEI and implements IVA at the positions with the specified properties.
{
"AGTX":{
"time":6885,
"chn":0,
"key":1,
"iva":{
"od":[
{
"obj":{
"id":0,
"type":1,
"twinkle":1,
"rect":[
0,0,
25,25,
50,50,
80, 80,
100,100
],
"vel":[0,10],
"cat":"PEDESTRIAN"
}
},
{
"obj":{
"id":1,
"type":1,
"twinkle":1,
"rect":[
0,0,
100,100
],
"vel":[0,10],
"cat":"PEDESTRIAN"
}
}
]
}
}
}
The following table describes the parsed parameters in iva reported by the device.
Parameter | Description |
---|---|
id | The index of each frame. |
type | The type of frame. |
twinkle | The flash setting. |
rect | Each pair of numbers represents the coordinates of a point. All points are arranged in clockwise order. The SDK draws these points into a closed polygon. Two points are used to represent a rectangular box. |
The coordinates of the rect points in the SEI protocol are typically passed from the app to the device in the format of the device control data point (DP). Coordinate anchor data follows these rules:
An even-numbered position represents the numerator of a percentage value on the horizontal (x) axis.
An odd-numbered position represents the numerator of a percentage value on the vertical (y) axis.
The maximum value is 100, and even and odd positions appear in pairs.
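The SDK performs the drawing itself, but the anchor encoding can be illustrated with a hypothetical helper that maps a rect array into view coordinates:

```swift
import UIKit

// Convert an SEI rect array of percentage anchors into points in the
// rendering view. Even positions are x percentages, odd positions are
// y percentages, each in the range 0 to 100.
func outlinePoints(fromRect rect: [Int], in viewSize: CGSize) -> [CGPoint] {
    stride(from: 0, to: rect.count - 1, by: 2).map { i in
        CGPoint(x: CGFloat(rect[i]) / 100.0 * viewSize.width,
                y: CGFloat(rect[i + 1]) / 100.0 * viewSize.height)
    }
}
```

For example, the pair 25,25 on a 400×200 view maps to the point (100, 50).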
After video decoding, rendered images can be further processed by settings. For example, the following features are supported: stretching or scaling, horizontal or vertical mirroring, and rotation by 90, 180, or 270 degrees.
API description
- (int)setDeviceFeatures:(NSDictionary *)features;
Parameters
The parameter features is a dictionary of data in the following format:
{
"DecPostProcess":{
"video":[
{
"restype":"4",
"oldres":"944*1080",
"newres":"1920*1080"
},
{
"restype":"2",
"oldres":"944*1080",
"newres":"1920*1080"
}
],
"mirror":0,
"rotation":2
}
}
Parameter | Description |
---|---|
DecPostProcess | (Required) The fixed identifier. |
video | The array of video resolution settings. |
mirror | The mirroring setting. |
rotation | The angle of rotation. |
Use the phone’s camera features to perform tasks. For example, start/stop data collection, start/stop/pause/resume sending device data, and switch between the front and rear cameras.
Ensure the app is allowed to access the mobile phone’s camera. Failure to do so may result in the app crashing.
Initialize the phone’s camera and start capturing video.
API description
Turn on the phone’s camera and start capturing video. The returned int indicates the result of the operation. The captured frames are delivered through the callback - (void)camera:(id<ThingSmartCameraType>)camera didReceiveLocalVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer localVideoInfo:(id<ThingSmartLocalVideoInfoType>)localVideoInfo;
Set videoInfo to nil. A custom value is not recommended as it may cause video capturing or playback to fail.
-(int)startLocalVideoCaptureWithVideoInfo:(nullable id<ThingSmartLocalVideoInfoType>)videoInfo;
ThingSmartLocalVideoInfoType
Parameter | Description |
---|---|
width | The width of the video. |
height | The height of the video. |
frameRate | The frame rate of the video. |
API description
Turn off the phone’s camera.
-(int)stopLocalVideoCapture;
API description
Switch between the front and rear cameras. The returned int indicates the result of the operation.
-(int)switchLocalCameraPosition;
See Video Rendering to customize the player properties.
- (void)camera:(id<ThingSmartCameraType>)camera didReceiveLocalVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer localVideoInfo:(id<ThingSmartLocalVideoInfoType>)localVideoInfo;
See Local video rendering view.
- (void)bindLocalVideoView:(UIView<ThingSmartVideoViewType> *)videoView;
- (void)unbindLocalVideoView:(UIView<ThingSmartVideoViewType> *)videoView;
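Putting the capture and binding APIs together, one possible setup flow looks like this. localVideoView is a hypothetical view conforming to ThingSmartVideoViewType, and the Swift bridging names are assumed:

```swift
func startLocalCamera() {
    // Bind the local rendering view first so the preview has somewhere to draw.
    camera.bindLocalVideoView(localVideoView)
    // Pass nil so the SDK chooses its own capture parameters.
    if camera.startLocalVideoCapture(withVideoInfo: nil) != 0 {
        // A non-zero return value means capture could not be started.
    }
}

func stopLocalCamera() {
    camera.stopLocalVideoCapture()
    camera.unbindLocalVideoView(localVideoView)
}
```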
After a P2P connection is created, start a video talk to send the local video to the device.
API description
- (int)startVideoTalk;
Callback description
- (void)cameraDidStartVideoTalk:(id<ThingSmartCameraType>)camera;
Stop a video call to disconnect the channel for transmitting local video to the device. This action does not terminate the P2P connection.
API description
-(int)stopVideoTalk;
Callback description
- (void)cameraDidStopVideoTalk:(id<ThingSmartCameraType>)camera;
Pause a video talk.
API description
- (int)pauseVideoTalk;
Callback description
- (void)cameraDidPauseVideoTalk:(id<ThingSmartCameraType>)camera;
Resume a video talk.
API description
- (int)resumeVideoTalk;
Callback description
- (void)cameraDidResumeVideoTalk:(id<ThingSmartCameraType>)camera;
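The talk APIs map naturally onto a small state machine in the caller. A sketch, assuming the P2P connection is up and local capture has started:

```swift
func beginVideoTalk() {
    // Check the returned int for the result of the request.
    if camera.startVideoTalk() != 0 {
        // The request could not be issued.
    }
}

func cameraDidStartVideoTalk(_ camera: ThingSmartCameraType!) {
    // The video channel is open; update the UI.
}

func cameraDidStopVideoTalk(_ camera: ThingSmartCameraType!) {
    // The channel is closed, but the P2P connection is still alive.
}
```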
Start recording the video and audio.
API description
Set audioInfo to nil. A custom value is not recommended as it may cause capturing or playback to fail.
-(int)startAudioRecordWithAudioInfo:(nullable id<ThingSmartLocalAudioInfoType>)audioInfo;
ThingSmartLocalAudioInfoType
Parameter | Description |
---|---|
sampleRate | The sampling rate. |
channel | The sound channel. |
Stop recording the video and audio.
API description
-(int)stopAudioRecord;
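A minimal start/stop pair, assuming the Objective-C method bridges to Swift as startAudioRecord(withAudioInfo:):

```swift
func beginAudioRecord() {
    // Pass nil so the SDK chooses the sampling rate and sound channel.
    let result = camera.startAudioRecord(withAudioInfo: nil)
    print("startAudioRecord returned \(result)")
}

func endAudioRecord() {
    camera.stopAudioRecord()
}
```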
The ThingSmartCameraAbility class can be used to parse device configurations and get basic device capabilities.
Property | Description |
---|---|
defaultDefinition | The default definition of live streaming. |
videoNum | The number of streams supported by the device. |
isSupportSpeaker | Indicates whether the device is equipped with a speaker. If so, the device supports video talk. |
isSupportPickup | Indicates whether the device is equipped with a pickup. If so, the audio channel can be enabled when video streams are previewed on the app. |
rowData | The raw data of P2P configurations. |
API description
Create the device capabilities class object based on the device data model. This API method is called after a P2P connection is created. After the initial P2P connection, the raw data of P2P configurations is cached in the local sandbox.
+ (instancetype)cameraAbilityWithDeviceModel:(ThingSmartDeviceModel *)deviceModel;
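For example, the capability object can drive the talk UI after the P2P connection is created. The Swift initializer name is assumed from the Objective-C factory method:

```swift
let ability = ThingSmartCameraAbility(deviceModel: deviceModel)
if ability.isSupportSpeaker && ability.isSupportPickup {
    // The device supports two-way talk.
} else if ability.isSupportSpeaker {
    // The device supports one-way talk only.
}
```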