Wukong AI

Last Updated on: 2025-04-28 07:01:18

Based on the TuyaOS Wukong AI Hardware Development Framework, the IPC Development Framework deeply integrates auditory and visual perception to deliver multimodal conversational capability. This capability can be deployed across multiple domains, including daily conversations, home security, elderly care, children's toys, and children's teaching aids.

Related files

tuya_ipc_ai_station.h

API description

Initialization

/**
 * @brief Initialize AI station
 *
 * @param[in] event_cb: ai event cb
 * @param[in] audio_cb: audio play cb
 *
 * @return OPRT_OK on success. Others on error, please refer to tuya_error_code.h
 */
OPERATE_RET tuya_ipc_ai_station_init(TUYA_IPC_AI_CMD_CB event_cb, TUYA_IPC_AI_AUDIO_PLAY_CB audio_cb);

Parameters

Parameter Description
event_cb The callback through which operation commands or status events are delivered for you to handle.
audio_cb The callback function for audio playback.

Start sending data

Start an AI conversation and begin sending data. After the call, audio and video data starting from about 0.5 seconds before the current time is taken from the ring buffer and sent to the cloud.

/**
 * @brief Start a conversation
 *
 * @param VOID
 *
 * @return OPRT_OK on success. Others on error, please refer to tuya_error_code.h
 */
OPERATE_RET tuya_ipc_ai_station_start_act();

Stop sending data

Stop sending data for the AI conversation. After the call, the device stops sending data to the cloud and starts receiving the data returned by the cloud. The cloud data is delivered to you through the TUYA_IPC_AI_CMD_CB event_cb and TUYA_IPC_AI_AUDIO_PLAY_CB audio_cb callbacks.

/**
 * @brief Stop a conversation
 *
 * @param VOID
 *
 * @return OPRT_OK on success. Others on error, please refer to tuya_error_code.h
 */
OPERATE_RET tuya_ipc_ai_station_stop_act();

Integration description

Single conversation mode

After one start and stop cycle, the system waits for all the cloud data to be returned before initiating the next conversation.

Continuous conversation mode

In this mode, an on-device audio detection algorithm analyzes the real-time audio input to determine whether someone is speaking. You must handle scenarios where an AI conversation is interrupted, and pay special attention to the accuracy of the on-device audio detection algorithm.