Capability Integration¶
This chapter introduces how to integrate the FIRERAP Remote Desktop and its related capabilities into your own frontend pages, for purposes such as operating the device and displaying its screen. We will not cover every interface in detail here; refer to our Apifox documentation for the full reference and for testing. First, though, we need to introduce some prerequisites and other basics.
Prerequisites¶
To make interface testing easier, first ensure the device is connected to the current computer via USB and that login certificate verification (HTTPS) is disabled on the device. After completing these two steps, you also need to set up the Apifox platform. Because WebSocket interfaces are involved, you must install the Apifox desktop client (not the web version) and import the relevant project into it. Installing the client and importing a project are outside the scope of this chapter.
Real-time Video¶
Real-time video is transmitted over WebSocket and supports two formats: MJPEG (Motion JPEG) and H.264 NALU. MJPEG is the simplest to use: each message is a JPEG screenshot of the current device screen, and when frames arrive quickly enough the result is a live, moving picture of the device. The only processing you need to do is draw each frame message received from the WebSocket onto a canvas as a JPEG image. H.264 demands more background knowledge, because you must perform a decoding step before rendering to the canvas; you can search for or integrate an existing H.264 decoder for decoding and drawing.
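As a concrete starting point, here is a minimal MJPEG rendering sketch in TypeScript. The WebSocket URL, and the assumption that each binary message carries exactly one complete JPEG frame, are illustrative; take the real endpoint and message framing from the Apifox documentation.

```typescript
// Minimal MJPEG renderer sketch. Assumes each binary WebSocket message is
// one complete JPEG frame; the URL below is a placeholder, not the real API.
const canvas = document.querySelector<HTMLCanvasElement>("#screen")!;
const ctx = canvas.getContext("2d")!;

const ws = new WebSocket("ws://127.0.0.1:8080/video/mjpeg"); // hypothetical URL
ws.binaryType = "blob";

ws.onmessage = async (ev: MessageEvent<Blob>) => {
  // Decode the JPEG (off the main thread where supported), then draw it.
  // Production code should queue decodes to guarantee frame order.
  const bitmap = await createImageBitmap(ev.data);
  canvas.width = bitmap.width;
  canvas.height = bitmap.height;
  ctx.drawImage(bitmap, 0, 0);
  bitmap.close();
};
```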
Compared with MJPEG, H.264 reduces traffic by at least half and is faster. It is not suitable for every device, however: some devices have weak H.264 encoding performance, and for those you should use MJPEG instead. MJPEG has its own drawback: because each frame is a complete image, its bandwidth requirements are high.
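If you would rather not integrate a software decoder, one option (a general browser capability, not part of the FIRERAP API) is the WebCodecs API where the browser supports it. The sketch below assumes each WebSocket message is an Annex B NALU stream; the URL, the codec string, and the crude keyframe heuristic are all assumptions for illustration.

```typescript
// H.264 decoding sketch via WebCodecs. Assumes Annex B framing per message;
// the endpoint URL and codec profile are placeholders.
const canvas = document.querySelector<HTMLCanvasElement>("#screen")!;
const ctx = canvas.getContext("2d")!;

const decoder = new VideoDecoder({
  output: (frame) => {
    canvas.width = frame.displayWidth;
    canvas.height = frame.displayHeight;
    ctx.drawImage(frame, 0, 0);
    frame.close(); // release the frame's memory promptly
  },
  error: (e) => console.error("decode error:", e),
});
// With no `description` in the config, WebCodecs expects Annex B H.264.
decoder.configure({ codec: "avc1.42E01E" }); // Baseline profile; adjust to the stream

// Crude keyframe detection: an IDR NALU (type 5) marks a key chunk.
function isKeyFrame(buf: Uint8Array): boolean {
  for (let i = 0; i + 4 < buf.length; i++) {
    if (buf[i] === 0 && buf[i + 1] === 0 &&
        (buf[i + 2] === 1 || (buf[i + 2] === 0 && buf[i + 3] === 1))) {
      const nalType = buf[buf[i + 2] === 1 ? i + 3 : i + 4] & 0x1f;
      if (nalType === 5) return true;
    }
  }
  return false;
}

const ws = new WebSocket("ws://127.0.0.1:8080/video/h264"); // hypothetical URL
ws.binaryType = "arraybuffer";
let timestamp = 0;
let sawKey = false;
ws.onmessage = (ev: MessageEvent<ArrayBuffer>) => {
  const data = new Uint8Array(ev.data);
  const key = isKeyFrame(data);
  if (key) sawKey = true;
  if (!sawKey) return; // cannot decode deltas before the first keyframe
  decoder.decode(new EncodedVideoChunk({
    type: key ? "key" : "delta",
    timestamp: timestamp += 33_333, // ~30 fps in microseconds; placeholder
    data,
  }));
};
```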
Real-time Touch Control¶
Real-time touch control is also transmitted over WebSocket. There is nothing particularly special about it: you send three types of operations (press, move, and release) in a specific format, driven by web events such as mousedown, mousemove, and mouseup. The main data transmitted is the event type and the coordinates. The one thing to note is coordinate conversion: you must scale the user's position on the canvas by the ratio between the canvas size and the actual screen size, so that an action on the canvas maps to the corresponding coordinates on the actual screen.
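The following sketch shows the coordinate conversion and event wiring. The WebSocket URL, the JSON message shape ({ action, x, y }), and the hard-coded device resolution are assumptions for illustration; use the exact format from the Apifox documentation, and in practice query the real screen size from the device.

```typescript
// Touch forwarding sketch with canvas-to-device coordinate conversion.
const canvas = document.querySelector<HTMLCanvasElement>("#screen")!;
const ws = new WebSocket("ws://127.0.0.1:8080/touch"); // hypothetical URL

// Actual device screen size; in practice, obtain this from the device API.
const deviceWidth = 1080;
const deviceHeight = 2340;

function send(action: "down" | "move" | "up", ev: MouseEvent): void {
  const rect = canvas.getBoundingClientRect();
  // Scale the canvas-relative position up to device coordinates.
  const x = Math.round(((ev.clientX - rect.left) / rect.width) * deviceWidth);
  const y = Math.round(((ev.clientY - rect.top) / rect.height) * deviceHeight);
  ws.send(JSON.stringify({ action, x, y })); // hypothetical message shape
}

let pressed = false;
canvas.addEventListener("mousedown", (ev) => { pressed = true; send("down", ev); });
canvas.addEventListener("mousemove", (ev) => { if (pressed) send("move", ev); });
canvas.addEventListener("mouseup", (ev) => { pressed = false; send("up", ev); });
```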
Key Operations¶
Key operations are a relatively simple part: you just send POST requests to the relevant interfaces in a specific format. Key operations let you control the device's navigation keys and perform standard English input.
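A minimal sketch of such a request follows. The endpoint path, the request body shape, and the key names are placeholders; the real interface and payload format are documented in Apifox.

```typescript
// Key operation sketch via HTTP POST; URL and body are hypothetical.
async function pressKey(key: string): Promise<void> {
  const res = await fetch("http://127.0.0.1:8080/key", { // hypothetical URL
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ key }), // e.g. a navigation key or a character
  });
  if (!res.ok) throw new Error(`key press failed: ${res.status}`);
}

// Usage, e.g. from an ES module or async handler:
await pressKey("HOME"); // hypothetical key name
await pressKey("a");
```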
Command Terminal¶
The command terminal also uses WebSocket. You will need to integrate it using a technology such as xterm.js: format the user's input according to the API documentation and send it over the WebSocket, and write the output received from the WebSocket back into the xterm terminal.
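Here is a minimal xterm.js wiring sketch. The WebSocket URL and the assumption that both directions carry raw text are illustrative; apply whatever message format the API documentation actually specifies.

```typescript
// Terminal wiring sketch: xterm.js input -> WebSocket, output -> terminal.
import { Terminal } from "@xterm/xterm";

const term = new Terminal();
term.open(document.querySelector<HTMLElement>("#terminal")!);

const ws = new WebSocket("ws://127.0.0.1:8080/terminal"); // hypothetical URL

// User keystrokes from xterm.js go to the device...
term.onData((input: string) => ws.send(input));
// ...and command output from the device is written back into the terminal.
ws.onmessage = (ev: MessageEvent<string>) => term.write(ev.data);
```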