For more information, see Panel MiniApp > Set up environment.
Product name: Robot vacuum
A product defines the DPs of the associated panel and device. Before you develop a panel, you must create a laser robot vacuum product, define the required DPs, and then implement these DPs on the panel.
Register and log in to the Tuya Developer Platform and create a product.
🎉 After the above steps are completed, a robot vacuum product named Robot vacuum is created.
Register and log in to the Smart MiniApp Developer Platform. For more information, see Create panel miniapp.
Open Tuya MiniApp IDE and create a panel miniapp project based on the Sweep Robot Template. For more information, see Initialize project.
A panel miniapp template has already been initialized through the previous steps. The following section shows the project directories.
├── src
│ ├── api
│ │ └── ossApi.ts // APIs related to OSS map download
│ ├── components
│ │ ├── DecisionBar // Confirmation box component
│ │ ├── EmptyMap // Empty map component
│ │ ├── HistoryMapView // History map component
│ │ ├── HomeTopBar // Top bar component on the homepage
│ │ ├── IpcRecordTimer // IPC recording timer component
│ │ ├── IpcRecordTip // IPC recording prompt component
│ │ ├── Loading // Map loading component
│ │ ├── MapView // Live map component
│ │ ├── RoomNamePopLayout // Room naming dialog component
│ │ ├── RoomPreferencePopLayout // Room cleaning preference dialog component
│ │ ├── Selector // Selector component
│ │ ├── TopBar // General top bar component
│ ├── constant
│ │ ├── dpCodes.ts // dpCode constant
│ │ ├── index.ts // Stores all constant configurations
│ ├── devices // Device model
│ ├── hooks // Hooks
│ ├── i18n // Multilingual settings
│ ├── iconfont // IconFont file
│ ├── pages
│ │ ├── addTiming // The page to add a timer
│ │ ├── cleanRecordDetail // Cleaning record details page
│ │ ├── cleanRecords // Cleaning record list page
│ │ ├── doNotDisturb // DND page
│ │ ├── home // Homepage
│ │ ├── ipc // Video surveillance page
│ │ ├── manual // Manual control page
│ │ ├── mapEdit // Map editing page
│ │ ├── multiMap // Multi-map management page
│ │ ├── roomEdit // Room editing page
│ │ ├── setting // Setting page
│ │ ├── timing // Timer list page
│ │ ├── voicePack // Voice package page
│ ├── redux // redux
│ ├── res // Resources, such as pictures and SVG
│ ├── styles // Global style
│ ├── utils
│ │ ├── openApi // Map operation methods
│ │ ├── index.ts // Common utilities and methods
│ │ ├── ipc.ts // Utilities and methods related to IPC
│ │ ├── robotStatus.ts // Method for determining the status of the robot vacuum
│ ├── app.config.ts
│ ├── app.less
│ ├── app.tsx
│ ├── composeLayout.tsx // Handle and listen for the adding, unbinding, and DP changes of sub-devices
│ ├── global.config.ts
│ ├── mixins.less // Less mixins
│ ├── routes.config.ts // Configure routing
│ ├── variables.less // Less variables
├── typings // Define global types
├── webview // HTML file required by the map WebView
The robot vacuum template is modularized: the underlying implementation is separated from the service calls, so you can focus on UI processing without worrying about the rest of the process logic. The robot vacuum panel currently relies on the following packages:
- @ray-js/robot-map-component: Directly called by the service layer. Provides full-screen and dynamic map components, and exposes common map operation methods.
- @ray-js/robot-data-stream: Directly called by the service layer. Encapsulates the P2P transmission between the panel and the device, so you can ignore the complex P2P communication process and focus on business logic.
- @ray-js/robot-protocol: Directly called by the service layer. Provides a standard capability for complete protocol parsing, encapsulating the parsing and encoding of the relatively complex raw-type data points (DPs) in the protocol.
- @ray-js/webview-invoke: Underlying dependency. Enables the miniapp to communicate with the underlying SDK. You generally do not need to modify it.
- @ray-js/robot-middleware: Underlying dependency. Provides intermediate processing between the logic layer and the WebView.
- @ray-js/hybrid-robot-map: Underlying dependency. The basic SDK that provides the underlying rendering capabilities.
For general robot vacuum requirements, you can focus on application business logic and UI display without worrying about the implementation inside these dependency packages. Package upgrades are backward compatible, and each package can be upgraded separately in the project.
Displaying the live map is a core feature of a robot vacuum application. So, how do you render the first map on the homepage?
@ray-js/robot-map-component provides two types of map components. The service layer template encapsulates the live map component, and you can choose the full-screen or dynamic component to suit your scenario.
The homepage usually shows a live map. The full-screen component is recommended, and it can be imported as follows:
import MapView from "@/components/MapView";
// Add your custom logic here
return (
<MapView
isFullScreen
onMapId={onMapId}
onClickSplitArea={onClickSplitArea}
onDecodeMapData={onDecodeMapData}
onClickRoomProperties={onClickRoomProperties}
onClickMaterial={onClickMaterial}
onClickRoom={onClickRoom}
style={{
height: "75vh",
}}
/>
);
At this point, you might wonder: how are the map and route data injected into this component?
The template encapsulates the @ray-js/robot-data-stream tool library, which has built-in processes such as P2P initialization, connection establishment, data stream download, and destruction. You only need to call the useP2PDataStream hook to get the map and route data transmitted from the robot vacuum in real time, provided that your robot vacuum supports P2P data transmission.
import { useP2PDataStream } from "@ray-js/robot-data-stream";
import { useMapData, usePathData } from "@/hooks";
// useMapData is a hook that handles and injects map data into <MapView />
const { onMapData } = useMapData();
// usePathData is a hook that handles and injects route data into <MapView />
const { onPathData } = usePathData();
useP2PDataStream(getDevInfo().devId, onMapData, onPathData);
If all goes well, you should now see the map on your phone. However, it is inconvenient to scan a QR code with your phone every time you debug during development. So, is there a way to display a live map in the IDE as well?
The IDE doesn't support P2P connections, but you can do this using the Robot Vacuum Debugger plugin. For detailed usage, refer to the documentation.
Cleaning is the most basic feature of a robot vacuum. The template has four built-in cleaning modes: entire house, selected area, selected spot, and selected zone.
Among them, the selected area, spot, and zone modes involve changes in the map status.
/**
* Modify map status
* @param status The map status
*/
const setMapStatusChange = (status: number) => {
const { mapId } = store.getState().mapState;
const edit = status !== ENativeMapStatusEnum.normal; // When switching to selected area cleaning, freeze the map and prevent map updates
if (status === ENativeMapStatusEnum.mapClick) {
freezeMapUpdate(mapId, true);
}
// Restore the map when switching back
if (status === ENativeMapStatusEnum.normal) {
freezeMapUpdate(mapId, false);
}
setLaserMapStateAndEdit(mapId, { state: status, edit: edit || false });
};
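The status-change logic above can be boiled down to a pure function, which makes the freeze/edit rules easy to unit test. This is an illustrative sketch: the enum values and the `MapStatusEffect` shape are assumptions, not the template's actual types.

```typescript
// Hypothetical numeric values mirroring ENativeMapStatusEnum in the template;
// the real enum values may differ.
enum MapStatus {
  normal = 0,
  mapClick = 1,
}

interface MapStatusEffect {
  edit: boolean;          // whether the map enters edit mode
  freeze: boolean | null; // true = freeze updates, false = unfreeze, null = unchanged
}

// Pure version of the status-change logic: any non-normal status enables
// editing; entering mapClick freezes live map updates, and returning to
// normal unfreezes them.
function resolveMapStatusEffect(status: MapStatus): MapStatusEffect {
  const edit = status !== MapStatus.normal;
  let freeze: boolean | null = null;
  if (status === MapStatus.mapClick) freeze = true;
  if (status === MapStatus.normal) freeze = false;
  return { edit, freeze };
}
```

Keeping this rule pure means `setMapStatusChange` only needs to apply the returned effect to the map SDK.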
/**
* Switch between cleaning modes
* @param modeValue
* @param mapStatus
*/
const handleSwitchMode = (modeValue: string, mapStatus: number) => {
const { mapId } = store.getState().mapState;
setMapStatus(mapStatus);
// Whether to switch to selected area mode
if (mapStatus === ENativeMapStatusEnum.mapClick) {
setLaserMapSplitType(mapId, EMapSplitStateEnum.click);
}
if (mapStatus === ENativeMapStatusEnum.normal) {
setLaserMapSplitType(mapId, EMapSplitStateEnum.normal);
}
// The "Pin N Go" feature. Instantly generate a moveable zone without clicking on the map.
if (mapStatus === 1) {
addPosPoints();
}
};
In the selected spot and zone modes, you can draw graphics on the map. The template encapsulates several hooks for drawing graphics on the map:
- usePoseClean (selected spot cleaning)
- useZoneClean (selected zone cleaning)
- useForbiddenNoGo (no-go area)
- useForbiddenNoMop (no-mop area)
- useVirtualWall (virtual wall)
You can use usePoseClean to add a movable point for spot cleaning on the map.
import { usePoseClean } from "@/hooks";
const { drawPoseCleanArea } = usePoseClean();
/**
* Add a movable point for spot cleaning
*/
const addPosPoints = async () => {
const { mapId } = store.getState().mapState;
drawPoseCleanArea(mapId);
};
After setting the cleaning mode, you need to send a command to start cleaning. In addition to necessary boolean DPs like switch_go, pay particular attention to raw-type DPs like command_trans, which inform the device of the graphics information for selected area, spot, and zone cleaning.
To learn how to construct byte-type command data, see the Tuya Laser Robot Vacuum Protocol document. The template encapsulates the @ray-js/robot-protocol library to construct and parse command data, covering over 90% of commonly used commands.
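To build intuition for what the protocol library does under the hood, here is a hypothetical raw-DP frame encoder. The `[command, length, payload, checksum]` layout and the low-byte checksum are common framing conventions assumed for illustration only; they are not taken from the Tuya protocol spec, which @ray-js/robot-protocol implements for you.

```typescript
function toHexByte(n: number): string {
  return n.toString(16).padStart(2, "0");
}

// Assemble [command, length, ...payload, checksum] as a hex string, where the
// checksum is the low byte of the sum of all preceding bytes. This framing is
// an assumption for illustration, not the real Tuya frame format.
function encodeFrame(command: number, payload: number[]): string {
  const bytes = [command, payload.length, ...payload];
  const checksum = bytes.reduce((sum, b) => (sum + b) & 0xff, 0);
  return [...bytes, checksum].map(toHexByte).join("");
}
```

In practice you never hand-build such frames; you call the library's `encode*`/`decode*` helpers as shown below.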
If you want to send a command for selected area cleaning and your product uses the 0x14(0x15) Room Clean protocol, you can use encodeRoomClean0x14 to construct the command data:
import { encodeRoomClean0x14 } from "@ray-js/robot-protocol";
import { useActions } from "@ray-js/panel-sdk";
const actions = useActions();
const roomCleanFunc = () => {
const { version, selectRoomData } = store.getState().mapState;
const data = encodeRoomClean0x14({
cleanTimes: 1,
// selectRoomData contains the room information thrown by MapView's onClickSplitArea after you select a room
roomHexIds: selectRoomData,
mapVersion: version,
});
actions[commandTransCode].set(data);
};
Similarly, if you find the robot vacuum in the selected area cleaning mode when opening the panel, you need to parse the selected area cleaning command reported by the device to know which rooms are being cleaned. You can use requestRoomClean0x15 and decodeRoomClean0x15 together.
// If you open the panel and find the robot vacuum in the selected area cleaning mode, you need to send a query command requesting the device to report specific command data.
actions[commandTransCode].set(
requestRoomClean0x15({ version: PROTOCOL_VERSION })
);
// When receiving data reported by the device, parse the selected area cleaning command
const roomClean = decodeRoomClean0x15({
// command contains the reported DP value from the device
command,
mapVersion,
});
if (roomClean) {
const { roomHexIds } = roomClean;
// After selectRoomData is updated, the rooms currently being cleaned in the selected area mode will be highlighted on the map
dispatch(updateMapData({ selectRoomData: roomHexIds }));
}
The multi-map management page shows all historical maps stored in the device. Although they share the same data protocol and rendering method, historical maps and live maps have completely different data sources: live map data comes from P2P transmission, while historical map data comes from cloud file downloads. To get multi-map data, see Multi-map APIs.
The template encapsulates multiMapsSlice in Redux for querying multi-map data; you can refer to the relevant code. The template also encapsulates the HistoryMapView component specifically for displaying historical maps.
import HistoryMapView from "@/components/HistoryMapView";
return (
<HistoryMapView
isFullScreen={false}
// bucket and file data comes from the getMultipleMapFiles API request
history={{
bucket,
file,
}}
/>
);
For using and deleting maps, use encodeUseMap0x2e and encodeDeleteMap0x2c provided by @ray-js/robot-protocol.
import { encodeDeleteMap0x2c, encodeUseMap0x2e } from "@ray-js/robot-protocol";
const actions = useActions();
const handleDelete = () => {
actions[commandTransCode].set(encodeDeleteMap0x2c({ id }));
};
const handleUseMap = () => {
actions[commandTransCode].set(
encodeUseMap0x2e({
mapId,
url: file,
})
);
};
The way to import maps on the map editing page is similar to that on the homepage, but some props have changed.
const uiInterFace = useMemo(() => {
return { isFoldable: true, isShowPileRing: true };
}, []);
<MapView
isFullScreen
// Temporary data for room settings
preCustomConfig={previewCustom}
// Forced room tag folding and showing charger warning ring (warning not to set restricted areas and virtual walls too close)
uiInterFace={uiInterFace}
onMapId={onMapId}
onLaserMapPoints={onLaserMapPoints}
onClickSplitArea={onClickSplitArea}
onMapLoadEnd={onMapLoadEnd}
// Do not display the route
pathVisible={false}
// No area selected
selectRoomData={[]}
/>;
Restricted areas are divided into no-go and no-mop zones. You can use the useForbiddenNoGo and useForbiddenNoMop hooks to create the corresponding areas.
To save and send the created restricted areas, use the encodeVirtualArea0x38 method to assemble the restricted area information into a DP command and send it.
import { useForbiddenNoGo, useForbiddenNoMop } from "@/hooks";
import { getMapPointsInfo } from "@/utils/openApi";
import { encodeVirtualArea0x38 } from "@ray-js/robot-protocol";
// Create a no-go area
const { drawOneForbiddenNoGo } = useForbiddenNoGo();
// Create a no-mop area
const { drawOneForbiddenNoMop } = useForbiddenNoMop();
// Save and send the restricted area
const handleSave = async () => {
const { origin } = store.getState().mapState;
const { data } = await getMapPointsInfo(mapId.current);
const command = encodeVirtualArea0x38({
version: PROTOCOL_VERSION,
protocolVersion: 1,
virtualAreas: data.map((item) => {
return {
points: item.points,
mode: item.extend.forbidType === "sweep" ? 1 : 2,
name: item.content.text,
};
}),
origin,
});
actions[commandTransCode].set(command);
};
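The `data.map` transform in `handleSave` above can be pulled out into a pure mapping function, which is easier to test than the full save flow. The interface shapes below are inferred from the snippet and may not match the template's exact types.

```typescript
interface MapAreaPoint { x: number; y: number; }

interface MapArea {
  points: MapAreaPoint[];
  extend: { forbidType: "sweep" | "mop" };
  content: { text: string };
}

interface VirtualArea {
  points: MapAreaPoint[];
  mode: 1 | 2; // 1 = no-go, 2 = no-mop, matching the snippet above
  name: string;
}

// Pure version of the mapping in handleSave: a "sweep" restricted area is a
// no-go zone (mode 1), anything else is a no-mop zone (mode 2).
function toVirtualAreas(data: MapArea[]): VirtualArea[] {
  return data.map((item) => ({
    points: item.points,
    mode: item.extend.forbidType === "sweep" ? 1 : 2,
    name: item.content.text,
  }));
}
```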
You can implement the virtual wall functionality similarly to restricted areas, using useCreateVirtualWall to create a virtual wall.
To save and send the created virtual wall, use the encodeVirtualWall0x12 method to assemble the virtual wall information into a DP command and send it.
import { useCreateVirtualWall } from "@/hooks";
import { getMapPointsInfo } from "@/utils/openApi";
import { encodeVirtualWall0x12 } from "@ray-js/robot-protocol";
// Create a virtual wall
const { drawOneVirtualWall } = useCreateVirtualWall();
// Save and send the virtual wall
const handleSave = async () => {
const { origin } = store.getState().mapState;
const { data } = await getMapPointsInfo(mapId.current);
const command = encodeVirtualWall0x12({
version: PROTOCOL_VERSION,
origin,
walls: data.map((item) => item.points),
});
actions[commandTransCode].set(command);
};
The floor material is associated with the room information. To set the floor material, you need to set the preCustomConfig prop of MapView.
When you tap a room, a pop-up window appears for you to select the floor material. After you select the material, the state is saved in the temporary previewCustom.
After you save and confirm the room material, use the encodeSetRoomFloorMaterial0x52 method to convert the temporary floor material information into DP commands.
import { encodeSetRoomFloorMaterial0x52 } from "@ray-js/robot-protocol";
const [showFloorMaterialPopup, setShowFloorMaterialPopup] = useState(false);
const [previewCustom, setPreviewCustom] = useState<{
[key: string]: { roomId: number; floorMaterial: number };
}>({});
// Set floor material for a specific room
const handleFloorMaterialConfirm = (hexId: string) => {
const room = {
roomId: roomIdState.roomId,
floorMaterial: parseInt(hexId, 16),
};
const curRoom = {
[roomIdState.roomIdHex]: {
...room,
},
};
setPreviewCustom({ ...previewCustom, ...curRoom });
setShowFloorMaterialPopup(false);
};
// Save and send all the floor material information
const handleSave = () => {
const onConfirm = () => {
const rooms = Object.keys(previewCustom).map((roomIdHex: string) => {
const room = previewCustom[roomIdHex];
return {
roomId: room.roomId,
material: room.floorMaterial,
};
});
const command = encodeSetRoomFloorMaterial0x52({
version: PROTOCOL_VERSION,
rooms,
});
actions[commandTransCode].set(command);
};
};
return (
<View>
<MapView
isFullScreen
// Temporary data for room settings
preCustomConfig={previewCustom}
uiInterFace={uiInterFace}
onMapId={onMapId}
onLaserMapPoints={onLaserMapPoints}
onClickSplitArea={onClickSplitArea}
onMapLoadEnd={onMapLoadEnd}
pathVisible={false}
selectRoomData={[]}
/>
{/*
Pop-up window for selecting floor material
*/}
<FloorMaterialPopLayout
show={showFloorMaterialPopup}
onConfirm={handleFloorMaterialConfirm}
/>
</View>
);
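The confirm handler above is mostly a state merge, which can be sketched as a pure function. The `PreviewCustom` shape mirrors the `useState` type in the snippet; the function name is illustrative.

```typescript
interface RoomMaterial { roomId: number; floorMaterial: number; }
type PreviewCustom = Record<string, RoomMaterial>;

// Pure version of handleFloorMaterialConfirm: parse the selected material's
// hex id and merge it into the temporary previewCustom state, keyed by the
// room's hex id. Existing entries for other rooms are preserved.
function confirmFloorMaterial(
  previewCustom: PreviewCustom,
  roomId: number,
  roomIdHex: string,
  materialHexId: string
): PreviewCustom {
  return {
    ...previewCustom,
    [roomIdHex]: { roomId, floorMaterial: parseInt(materialHexId, 16) },
  };
}
```

Because the merge is immutable, calling `setPreviewCustom` with its result triggers a re-render of `MapView` with the updated `preCustomConfig`.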
The way to import maps on the room editing page is similar to that on the homepage, but some props have changed.
<MapView
isFullScreen
// Temporary data for room settings
preCustomConfig={previewCustom}
onMapId={onMapId}
onClickSplitArea={onClickSplitArea}
onSplitLine={onSplitLine}
onMapLoadEnd={onMapLoadEnd}
// Do not display the route
pathVisible={false}
// No area selected
selectRoomData={[]}
// Do not display the selected points, zones, restricted areas, and virtual wall information on the map
areaInfoList={[]}
/>
You can click Merge to merge adjoining rooms.
import { setMapStatusMerge } from "@/utils/openApi/mapStatus";
import { changeAllMapAreaColor } from "@/utils/openApi";
/**
* Enter the room merging state
*/
const handleMergeStatus = async () => {
// Set the map to the room merging state
setMapStatusMerge(mapId.current);
// Set all room colors to unselected state
changeAllMapAreaColor(mapId.current, true);
};
After selecting the two rooms to be merged, you can use encodePartitionMerge0x1e to convert the room information into a command and send it.
import { getLaserMapMergeInfo } from "@/utils/openApi";
import { encodePartitionMerge0x1e } from "@ray-js/robot-protocol";
// Send room merging commands
const handleSave = async () => {
const { version } = store.getState().mapState;
const res = await getLaserMapMergeInfo(mapId.current);
const { type, data } = res;
const roomIds = data.map((room) => parseRoomId(room.pixel, version));
const command = encodePartitionMerge0x1e({
roomIds,
version: PROTOCOL_VERSION,
});
actions[commandTransCode].set(command);
};
You can click Split to split a room.
import { setMapStatusSplit } from "@/utils/openApi/mapStatus";
/**
* Enter the room splitting state
*/
const handleSplitStatus = async () => {
// Set the map to the room splitting state
setMapStatusSplit(mapId.current);
};
After selecting a room and setting the required divider, you can use encodePartitionDivision0x1c to convert the room splitting information into a command and send it.
import { getLaserMapSplitPoint } from "@/utils/openApi";
import { encodePartitionDivision0x1c } from "@ray-js/robot-protocol";
// Send room splitting commands
const handleSave = async () => {
const { version, origin } = store.getState().mapState;
const {
type,
data: [{ points, pixel }],
} = await getLaserMapSplitPoint(mapId.current);
const roomId = parseRoomId(pixel, version);
const command = encodePartitionDivision0x1c({
roomId,
points,
origin,
version: PROTOCOL_VERSION,
});
actions[commandTransCode].set(command);
};
You can click Name to name a room.
import { setMapStatusRename } from "@/utils/openApi/mapStatus";
/**
* Enter the room naming state
*/
const handleRenameStatus = async () => {
// Set the map to the room naming state
setMapStatusRename(mapId.current);
};
After you select a room and enter a name in the pop-up window, the temporary room naming information is stored in the previewCustom state. You can use encodeSetRoomName0x24 to convert the room naming information into DP commands and send them.
import { encodeSetRoomName0x24 } from "@ray-js/robot-protocol";
const [showRenameModal, setShowRenameModal] = useState(false);
const [previewCustom, setPreviewCustom] = useState({});
// Pop-up window to confirm the room name
const handleRenameConfirm = (name: string) => {
const room = previewCustom[roomHexId] || {};
const curRoom = {
[roomHexId]: {
...room,
name,
},
};
const newPreviewCustom = { ...previewCustom, ...curRoom };
setShowRenameModal(false);
setPreviewCustom(newPreviewCustom);
};
// Send room naming commands
const handleSave = () => {
const { version } = store.getState().mapState;
const keys = Object.keys(previewCustom);
const command = encodeSetRoomName0x24({
mapVersion: version,
version: PROTOCOL_VERSION,
rooms: keys.map((key) => {
return {
roomHexId: key,
name: previewCustom[key].name,
};
}),
});
actions[commandTransCode].set(command);
};
return (
<View>
<MapView
isFullScreen
// Temporary data for room settings
preCustomConfig={previewCustom}
onMapId={onMapId}
onClickSplitArea={onClickSplitArea}
onSplitLine={onSplitLine}
onMapLoadEnd={onMapLoadEnd}
selectRoomData={[]}
areaInfoList={[]}
pathVisible={false}
/>
<RoomNamePopLayout
show={showRenameModal}
onConfirm={handleRenameConfirm}
defaultValue=""
/>
</View>
);
You can click Order Room to sort a list of rooms.
import { setMapStatusOrder } from "@/utils/openApi/mapStatus";
/**
* Enter the room sorting state
*/
const handleOrderStatus = async () => {
// Set the map to the room sorting state
setMapStatusOrder(mapId.current);
};
After sorting all the rooms, you can use encodeRoomOrder0x26 to convert the room order information into a command and send it.
import { getMapPointsInfo } from "@/utils/openApi";
import { encodeRoomOrder0x26 } from "@ray-js/robot-protocol";
// Send room sorting commands
const handleSave = async () => {
const { version } = store.getState().mapState;
const { data } = await getMapPointsInfo(mapId.current);
const roomIdHexs = data
.sort((a: { order: number }, b: { order: number }) => a.order - b.order)
.map((item) => item.pixel);
const command = encodeRoomOrder0x26({
version: PROTOCOL_VERSION,
roomIdHexs,
mapVersion: version,
});
actions[commandTransCode].set(command);
};
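The ordering transform above can be isolated into a small pure helper. Note the copy before sorting: `Array.prototype.sort` mutates in place, so copying avoids reordering the Redux-held array. The `OrderedRoom` shape is inferred from the snippet.

```typescript
interface OrderedRoom { order: number; pixel: string; }

// Pure version of the ordering logic in handleSave: sort rooms by their
// user-assigned order and keep only the hex room ids that
// encodeRoomOrder0x26 expects.
function toOrderedRoomIds(data: OrderedRoom[]): string[] {
  return [...data]
    .sort((a, b) => a.order - b.order)
    .map((item) => item.pixel);
}
```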
Use the device_timer DP to implement the timer functionality.
Use decodeDeviceTimer0x31 to parse the timer DP into the timer list data.
import { decodeDeviceTimer0x31 } from "@ray-js/robot-protocol";
type TimerData = {
effectiveness: number;
week: number[];
time: {
hour: number;
minute: number;
};
roomIds: number[];
cleanMode: number;
fanLevel: number;
waterLevel: number;
sweepCount: number;
roomNum: number;
};
const [timerList, setTimerList] = useState<TimerData[]>([]);
const dpDeviceTimer = useProps((props) => props[deviceTimerCode]);
useEffect(() => {
if (dpDeviceTimer) {
const { list } = decodeDeviceTimer0x31({
command: dpDeviceTimer,
version: PROTOCOL_VERSION,
}) ?? { list: [] };
setTimerList(list);
}
}, [dpDeviceTimer]);
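For display, the decoded `week` field (an array of 0/1 flags) usually needs formatting. The helper below is a sketch that assumes a 7-element array starting on Sunday; verify the day ordering against your product's protocol before relying on it.

```typescript
// Assumed ordering: index 0 = Sunday ... index 6 = Saturday.
const WEEK_LABELS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"];

// Turn the decoded 0/1 week flags into a human-readable repeat label.
function formatWeek(week: number[]): string {
  const days = WEEK_LABELS.filter((_, i) => week[i] === 1);
  // All seven flags set means the timer repeats every day
  return days.length === 7 ? "Every day" : days.join(", ");
}
```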
You can delete, enable, or disable timers, and use encodeDeviceTimer0x30 to convert the new timer list into a command and send it.
import { encodeDeviceTimer0x30 } from "@ray-js/robot-protocol";
import produce from "immer";
type TimerData = {
effectiveness: number;
week: number[];
time: {
hour: number;
minute: number;
};
roomIds: number[];
cleanMode: number;
fanLevel: number;
waterLevel: number;
sweepCount: number;
roomNum: number;
};
const [timerList, setTimerList] = useState<TimerData[]>([]);
// Delete a timer
const deleteTimer = (index: number) => {
const newList = [...timerList];
newList.splice(index, 1);
const command = encodeDeviceTimer0x30({
list: newList,
version: PROTOCOL_VERSION,
number: newList.length,
});
actions[deviceTimerCode].set(command);
};
// Enable or disable a timer
const toggleTimer = (index: number, enable: boolean) => {
const newList = produce(timerList, (draft) => {
// effectiveness is a numeric field: 1 = enabled, 0 = disabled
draft[index].effectiveness = enable ? 1 : 0;
});
const command = encodeDeviceTimer0x30({
list: newList,
version: PROTOCOL_VERSION,
number: newList.length,
});
actions[deviceTimerCode].set(command);
};
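If you prefer not to depend on immer for such a small update, the toggle can be written as a plain immutable map. This is a sketch equivalent to `toggleTimer` above, assuming the template's numeric `effectiveness` convention (1 = enabled, 0 = disabled).

```typescript
interface TimerItem { effectiveness: number; }

// Immer-free equivalent of toggleTimer: return a new list with the chosen
// entry's effectiveness flag updated, leaving the original list untouched.
function setTimerEnabled<T extends TimerItem>(
  list: T[],
  index: number,
  enable: boolean
): T[] {
  return list.map((item, i) =>
    i === index ? { ...item, effectiveness: enable ? 1 : 0 } : item
  );
}
```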
You can add a timer and use encodeDeviceTimer0x30 to assemble the command.
// Add a timer
const addTimer = (newTimer: TimerData) => {
const newList = [newTimer, ...timerList];
const command = encodeDeviceTimer0x30({
list: newList,
version: PROTOCOL_VERSION,
number: newList.length,
});
actions[deviceTimerCode].set(command);
};
Use the disturb_time_set DP to set up the do-not-disturb (DND) mode.
After setting the on/off state, start time, and end time, click Save to send the DND settings. You can use encodeDoNotDisturb0x40 to assemble the relevant information into a DP command.
import { encodeDoNotDisturb0x40 } from "@ray-js/robot-protocol";
// Add your custom logic here
// Save and send the DND mode information
const handleSave = () => {
const command = encodeDoNotDisturb0x40({
// Enable or disable the timer
enable,
// The start time (hour)
startHour,
// The start time (minute)
startMinute,
// The end time (hour)
endHour,
// The end time (minute)
endMinute,
});
actions[commandTransCode].set(command);
};
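A DND window often spans midnight (for example 22:00 to 08:00). The helper below is a hypothetical addition, not part of the template, showing one way to check whether a wall-clock time falls inside such a window before rendering DND-related UI state.

```typescript
// Check whether hour:minute falls inside the DND window. A window whose end
// is at or before its start is treated as overnight (e.g. 22:00-08:00).
function isInDndWindow(
  hour: number,
  minute: number,
  startHour: number,
  startMinute: number,
  endHour: number,
  endMinute: number
): boolean {
  const now = hour * 60 + minute;
  const start = startHour * 60 + startMinute;
  const end = endHour * 60 + endMinute;
  if (start === end) return false; // zero-length window: never active
  if (start < end) return now >= start && now < end; // same-day window
  return now >= start || now < end; // overnight window
}
```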
Similarly, you can use decodeDoNotDisturb0x41 to parse the DND mode DP reported by the device and present it on the page.
import { decodeDoNotDisturb0x41 } from "@ray-js/robot-protocol";
const dpDisturbTimeSet = useProps((props) => props[disturbTimeSetCode]);
// Parse the DND DP into structured data
const { enable, startHour, startMinute, endHour, endMinute } =
decodeDoNotDisturb0x41(dpDisturbTimeSet) ?? DEFAULT_VALUE;
// Add your custom logic here
To get cleaning records data, see Cleaning Records APIs.
The template encapsulates cleanRecordsSlice in Redux to delete, modify, and query cleaning records; you can refer to the relevant code.
import {
deleteCleanRecord,
fetchCleanRecords,
selectCleanRecords,
} from "@/redux/modules/cleanRecordsSlice";
const records = useSelector(selectCleanRecords);
const handleDelete = (id: number) => {
dispatch(deleteCleanRecord(id));
};
useEffect(() => {
(dispatch as AppDispatch)(fetchCleanRecords());
}, []);
return (
<View className={styles.container}>
{records.map((record) => (
<Item key={record.id} data={record} onDeleted={handleDelete} />
))}
</View>
);
The details page needs to show the actual cleaning map and route. Like multi-map management, cleaning records use historical maps, so the HistoryMapView component is used here as well.
For more information about map import, refer to the following code snippet:
import HistoryMapView from "@/components/HistoryMapView";
return (
<HistoryMapView
// Use the full-screen map component
isFullScreen={true}
// bucket and file data comes from the getMultipleMapFiles API request
history={{
bucket,
file,
}}
pathVisible
/>
);
To get the voice package data, see Robot Voice APIs.
import { getVoiceList } from "@ray-js/ray";
type Voice = {
auditionUrl: string;
desc?: string;
extendData: {
extendId: number;
version: string;
};
id: number;
imgUrl: string;
name: string;
officialUrl: string;
productId: string;
region: string[];
};
const [voices, setVoices] = useState<Voice[]>([]);
useEffect(() => {
const fetchVoices = async () => {
const res = await getVoiceList({
devId: getDevInfo().devId,
offset: 0,
limit: 100,
});
setVoices(res.datas);
};
fetchVoices();
}, []);
return (
<View className={styles.container}>
{voices.map((voice) => (
<Item key={voice.id} data={voice} deviceVoice={deviceVoice} />
))}
</View>
);
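The example above fetches a single page with `offset: 0, limit: 100`. If a product offers more voices than one page holds, you need a pagination loop. The sketch below uses an injected fetcher so the loop can be tested without the real getVoiceList API; the stop condition (a short page means no more data) is an assumption, so verify it against the actual Robot Voice APIs response shape.

```typescript
interface VoicePage<T> { datas: T[]; }

// Fetch all pages by advancing offset until a page comes back shorter than
// the requested limit. fetchPage is injected (e.g. a wrapper around
// getVoiceList) so the loop itself stays SDK-free and testable.
async function fetchAllVoices<T>(
  fetchPage: (offset: number, limit: number) => Promise<VoicePage<T>>,
  limit = 100
): Promise<T[]> {
  const all: T[] = [];
  let offset = 0;
  for (;;) {
    const page = await fetchPage(offset, limit);
    all.push(...page.datas);
    if (page.datas.length < limit) break; // short page => no more data
    offset += limit;
  }
  return all;
}
```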
The voice_data DP is used for sending and reporting voice packages. You can use encodeVoice0x34 and decodeVoice0x35 provided by @ray-js/robot-protocol to assemble and parse the DP data.
Send a command to use a voice package.
import { useActions } from "@ray-js/panel-sdk";
const actions = useActions();
const handleUse = () => {
actions[voiceDataCode].set(
encodeVoice0x34({
// The id, url, and md5 data all come from the Robot Voice APIs
id: extendData.extendId,
url: officialUrl,
md5: desc,
})
);
};
Parse the voice package data reported by the device to get the voice package information, download progress, and usage status.
import { useProps } from "@ray-js/panel-sdk";
const dpVoiceData = useProps((props) => props[voiceDataCode]);
const { languageId, status, progress } = decodeVoice0x35({
command: dpVoiceData,
});
To try out the voice package, see methods in Audio Capabilities.
Manual control is a general DP-sending feature implemented with the direction_control DP.
The template encapsulates simple manual control components and pages. For more information, see the src/pages/manual page.
import React, { FC, useEffect } from "react";
import {
View,
navigateBack,
onNavigationBarBack,
setNavigationBarBack,
} from "@ray-js/ray";
import Strings from "@/i18n";
import { Dialog, DialogInstance } from "@ray-js/smart-ui";
import { useActions } from "@ray-js/panel-sdk";
import { directionControlCode, modeCode } from "@/constant/dpCodes";
import ManualPanel from "@/components/ManualPanel";
import styles from "./index.module.less";
const Manual: FC = () => {
const actions = useActions();
useEffect(() => {
ty.setNavigationBarTitle({
title: Strings.getLang("dsc_manual"),
});
// To enter the remote control, you need to send the manual mode
actions[modeCode].set("manual");
setNavigationBarBack({ type: "custom" });
onNavigationBarBack(async () => {
try {
await DialogInstance.confirm({
context: this,
title: Strings.getLang("dsc_tips"),
icon: true,
message: Strings.getLang("dsc_exit_manual_tips"),
confirmButtonText: Strings.getLang("dsc_confirm"),
cancelButtonText: Strings.getLang("dsc_cancel"),
});
actions[directionControlCode].set("exit");
setNavigationBarBack({ type: "system" });
setTimeout(() => {
navigateBack();
}, 0);
} catch (err) {
// do nothing
}
});
return () => {
setNavigationBarBack({ type: "system" });
};
}, []);
return (
<View className={styles.container}>
<ManualPanel />
<Dialog id="smart-dialog" />
</View>
);
};
export default Manual;
The template has a built-in Video Surveillance page.
For more information, see the IPC Generic Template tutorial.