The English version of this document was translated with the assistance of ChatGPT.
This document will guide you step by step to understand what Sylvia-IoT Internet of Things platform is and then provide instructions on how to install and use it.
After getting a preliminary understanding of Sylvia-IoT and its usage, we will delve into its internal architecture to give you insights into the modules, operation principles, and design philosophy behind its performance optimization.
The development guide will explain how third-party developers or providers can develop their applications or network services. For those interested in core development with Sylvia-IoT, the guide also provides code structure and style.
Let's get started!
What is Sylvia-IoT?
Sylvia-IoT is an IoT (Internet of Things) platform primarily designed to forward device messages to applications or enable applications to send commands to devices.
The diagram above provides a simple explanation. Devices (such as sensors) are bound to specific communication modules and transmit data through network gateways or servers. Sylvia-IoT acts as a message broker, allowing each application to subscribe to devices they are interested in for data analysis or data transmission to the devices.
Features
Using the Sylvia-IoT platform provides several benefits to different providers:
- Device Providers:
- More easily swap modules from different network providers without altering applications.
- Network Providers:
- Focus on developing network communication protocols for device usage.
- Develop adapters to connect with the Sylvia-IoT platform.
- Application Providers:
- Specify any number of applications to receive data from the same device.
- Through Sylvia-IoT's communication protocol isolation, devices' network providers can be changed without rewriting code.
Concept
Sylvia-IoT provides HTTP APIs to manage the following entities:
- User Account:
- Access to Sylvia-IoT's management interface is possible through user accounts.
- Clients can obtain access tokens to access HTTP APIs.
- Client:
- Represents entities that access HTTP APIs.
- Third parties can develop management features for Sylvia-IoT through HTTP APIs.
- Users authorize clients to access resources using OAuth2.
- Unit:
- Each unit can have an owner and multiple members.
- Units can manage their own devices, networks, and applications.
- Device:
- Represents IoT terminal devices, such as sensors, trackers, and more.
- Application:
- Analyzes device data and presents it based on requirements, such as a smart home control center.
- Network:
- Connects different network servers to receive and send device data based on communication requirements.
- Common communication protocols include LoRa, WiFi, and TCP/IP.
- Network adapters can be developed to integrate existing network servers (e.g., TTN, ChirpStack) with Sylvia-IoT.
- Routing Rules:
- Associate devices with applications.
- Individual devices can be bound using network addresses or entire networks can be bound to specific applications.
- Supports many-to-many relationships, allowing multiple devices to be bound to one application or vice versa.
Communication Protocols
Currently, Sylvia-IoT supports the following protocols for message transmission between applications and networks:
- AMQP 0-9-1
- MQTT 3.1.1
Any message queuing model with explicit names (excluding wildcards) can be supported, such as AMQP 1.0, Apache Kafka, NATS, etc. However, topic publish/subscribe, broadcast, and multicast modes are currently not supported.
User Guide
Chapter Contents:
- Take you through a quick start to set up a functional Sylvia-IoT instance and simulate receiving device data.
- Provide complete configuration file content and overview.
Quick Start
This chapter describes the quick installation steps in the Ubuntu 22.04 environment.
The current executable is compiled using GLIBC 2.31 and can be executed on Ubuntu 22.04 or later OS versions. Older OS versions can use the Docker image. Related configurations and environment variables will be explained in the Configuration section.
Install Tools
sudo apt -y install curl jq
Install Docker
Refer to the installation steps on the Docker official website.
sudo apt -y install apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo usermod -aG docker $USER
Remember to restart the shell to apply user permissions.
Install MongoDB, RabbitMQ, EMQX
Start the services (versions and data storage folders can be adjusted as needed):
export MONGODB_VER=7.0.9
export RABBITMQ_VER=3.13.2
export EMQX_VER=5.6.1
export MONGODB_DIR=$HOME/db/mongodb
export RABBITMQ_DIR=$HOME/db/rabbitmq
export EMQX_DIR=$HOME/db/emqx
mkdir -p $MONGODB_DIR
docker run --rm --name mongodb -d \
-p 27017:27017 \
-v $MONGODB_DIR:/data/db \
mongo:$MONGODB_VER
mkdir -p $RABBITMQ_DIR
docker run --rm --name rabbitmq -d \
-e RABBITMQ_NODENAME="rabbit@localhost" \
-p 5671:5671 -p 5672:5672 -p 15672:15672 \
-v $RABBITMQ_DIR:/var/lib/rabbitmq \
rabbitmq:$RABBITMQ_VER-management-alpine
mkdir -p $EMQX_DIR
docker run --rm --name emqx -d \
-e EMQX_LOADED_PLUGINS="emqx_dashboard|emqx_management|emqx_auth_mnesia" \
-e EMQX_LOADED_MODULES="emqx_mod_acl_internal,emqx_mod_presence,emqx_mod_topic_metrics" \
-p 1883:1883 -p 8883:8883 -p 18083:18083 \
-v $EMQX_DIR:/opt/emqx/data \
emqx/emqx:$EMQX_VER
The EMQX command above only enables the plugins required by Sylvia-IoT. EMQX will not be used in the following demonstrations, so you can also choose not to start it at this stage.
Download Sylvia-IoT
curl -LO https://github.com/woofdogtw/sylvia-iot-core/releases/latest/download/sylvia-iot-core.tar.xz
curl -LO https://github.com/woofdogtw/sylvia-iot-core/releases/latest/download/sylvia-iot-coremgr-cli.tar.xz
curl -L -o config.json5 https://github.com/woofdogtw/sylvia-iot-core/raw/main/files/config.json5.example
tar xf sylvia-iot-core.tar.xz
tar xf sylvia-iot-coremgr-cli.tar.xz
Modify config.json5
For demonstration purposes, we make some modifications to the example config.json5:
- Since we are showcasing MongoDB here, we change all "engine": "sqlite" to "engine": "mongodb":

      "db": { "engine": "mongodb", ... },

- We don't enable HTTPS for now, so the certificate file settings are commented out:

      //"cacertFile": "/etc/ssl/certs/ca-certificates.crt",
      //"certFile": "/home/user/rust/conf/certs/sylvia-iot.crt",
      //"keyFile": "/home/user/rust/conf/certs/sylvia-iot.key",

- We create a folder to store static files; in this example, it's /home/user/static:

      "staticPath": "/home/user/static",

- We use the default login page template and comment out the example templates:

      "templates": { // Jinja2 template paths.
          //"login": "/home/user/rust/static/login.j2",
          //"grant": "/home/user/rust/static/grant.j2",
      },

- We use rumqttd instead of EMQX:

      "coremgr": {
          ...
          "mq": {
              "engine": {
                  "amqp": "rabbitmq",
                  "mqtt": "rumqttd",
              },
              ...
          },
          ...
      },
Set Up Initial Data
First, let's enter the MongoDB shell:
docker exec -it mongodb mongosh
In the MongoDB shell interface, we create the basic data:
use test1
db.user.insertOne({
userId: 'admin',
account: 'admin',
createdAt: new Date(),
modifiedAt: new Date(),
verifiedAt: new Date(),
expiredAt: null,
disabledAt: null,
roles: {"admin":true,"dev":false},
password: '27258772d876ffcef7ca2c75d6f4e6bcd81c203bd3e93c0791c736e5a2df4afa',
salt: 'YsBsou2O',
name: 'Admin',
info: {}
})
db.client.insertOne({
clientId: 'public',
createdAt: new Date(),
modifiedAt: new Date(),
clientSecret: null,
redirectUris: ['http://localhost:1080/auth/oauth2/redirect'],
scopes: [],
userId: 'dev',
name: 'Public',
imageUrl: null
})
Then, press Ctrl+C twice to exit.
Getting Started
Start Sylvia-IoT core:
./sylvia-iot-core -f config.json5
If the program doesn't terminate, it means the startup was successful.
Open another command-line window and log in using the CLI:
./sylvia-iot-coremgr-cli -f config.json5 login -a admin -p admin
You will see the following screen (the content you see may be slightly different):
$ ./sylvia-iot-coremgr-cli -f config.json5 login -a admin -p admin
{
"access_token": "ef9cf7cfc645f9092b9af62666d903c5a8e4579ff6941b479c1d9c9b63b0b634",
"refresh_token": "265983a08af706fbe2912ff2edb1750311d1b689e4dab3a83c4b494c4cf2d033",
"token_type": "bearer",
"expires_in": 3599
}
OK (146 ms)
The access token is automatically saved in the file $HOME/.sylvia-iot-coremgr-cli.json. The CLI uses the content of this file to access the APIs.
You can use ./sylvia-iot-coremgr-cli help to inquire about the usage of commands.
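The token file is plain JSON, so standard tools work on it. As a sketch, the saved access token can be extracted with jq (which the quick start installed); the file path and token value below are stand-ins so as not to touch your real CLI file:

```shell
# Write a stand-in token file with the same shape the CLI saves,
# then extract the access token with jq.
cat > /tmp/coremgr-cli-token.json << 'EOF'
{"access_token":"0123456789abcdef","refresh_token":"fedcba9876543210","token_type":"bearer"}
EOF
jq -r .access_token /tmp/coremgr-cli-token.json
```

This can be handy for passing the token to curl when experimenting with the HTTP APIs directly.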
Create Resources
For the convenience of using mosquitto CLI, we create the following entities:
- A unit with the code demo
- An MQTT application with the code test-app-mqtt
- An MQTT network with the code test-net-mqtt
- A device with the network address 01000461
- A route to bind the device to the application
During this process, you will need to change the connection password to password (the content you see may be slightly different):
UNIT_ID=$(./sylvia-iot-coremgr-cli -f config.json5 unit add -c demo -o admin -n 'Demo' | jq -r .unitId)
APP_ID=$(./sylvia-iot-coremgr-cli -f config.json5 application add -c test-app-mqtt -u $UNIT_ID --host 'mqtt://localhost' -n 'TestApp-MQTT' | jq -r .applicationId)
NET_ID=$(./sylvia-iot-coremgr-cli -f config.json5 network add -c test-net-mqtt -u $UNIT_ID --host 'mqtt://localhost' -n 'TestNet-MQTT' | jq -r .networkId)
./sylvia-iot-coremgr-cli -f config.json5 application update -i $APP_ID -p password
./sylvia-iot-coremgr-cli -f config.json5 network update -i $NET_ID -p password
DEV_ID=$(./sylvia-iot-coremgr-cli -f config.json5 device add -u $UNIT_ID --netid $NET_ID -a 01000461 -n 01000461 | jq -r .deviceId)
./sylvia-iot-coremgr-cli -f config.json5 device-route add -d $DEV_ID -a $APP_ID
Upload Device Data
You can install mosquitto CLI with the following command:
sudo apt -y install mosquitto-clients
Open a shell to subscribe to the application topic (format: broker.application.[unit-code].[app-code].uldata):
mosquitto_sub -u test-app-mqtt -P password -t broker.application.demo.test-app-mqtt.uldata
Open another shell to simulate the network system sending device data (topic format: broker.network.[unit-code].[net-code].uldata):
mosquitto_pub -u test-net-mqtt -P password -t broker.network.demo.test-net-mqtt.uldata -m '{"time":"2023-07-08T06:55:02.000Z","networkAddr":"01000461","data":"74657374"}'
At this point, you should see the following screen in the subscribed shell (the content may be slightly different):
$ mosquitto_sub -u test-app-mqtt -P password -t broker.application.demo.test-app-mqtt.uldata
{"dataId":"1688799672075-iJ4YQeQ5Lyv4","time":"2023-07-08T06:55:02.000Z","pub":"2023-07-08T07:01:12.075Z","deviceId":"1688798563252-aWcZVRML","networkId":"1688798370824-RwAbBDFh","networkCode":"test-net-mqtt","networkAddr":"01000461","isPublic":true,"profile":"","data":"74657374"}
If you see the data, congratulations! You have completed the basic use of Sylvia-IoT! (Congratulations! Achievement unlocked!)
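As an aside, the data field in these messages is the raw device payload encoded as a hex string. Assuming a bash shell, the example payload can be decoded back to text like this:

```shell
# Decode the hex-encoded payload from the example above.
# 0x74 0x65 0x73 0x74 is the ASCII string "test".
hex=74657374
printf "$(echo "$hex" | sed 's/../\\x&/g')\n"
```

Networks are expected to hex-encode payloads the same way when publishing uplink data.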
Configuration
This chapter describes the configuration format and usage of Sylvia-IoT.
Sylvia-IoT supports four sources of configuration, prioritized as follows (from highest to lowest):
- JSON5 configuration file
- Command-line parameters
- Environment variables
- Internal default values (may not exist; if required but not provided, an error message will be displayed)
You can refer to the sample JSON5 file for a complete list of configuration options. This chapter will provide corresponding explanations. The following conventions apply to the configuration:
- The nested structure in JSON5 is represented using . (dot).
- Command-line parameters also use the dot notation for nested JSON5 properties.
- For command-line parameters corresponding to camelCase JSON5 properties, the name is written in all lowercase, or with a - in front of the lowercased word. For example:
    - The JSON5 property server.httpPort corresponds to --server.httpport.
    - The JSON5 property broker.mqChannels corresponds to --broker.mq-channels.
- Environment variables are written in all uppercase.
- Environment variables use _ (underscore) for nested JSON5 properties.
- For environment variables corresponding to camelCase JSON5 properties, the name is written in all uppercase, or with _ (underscore) separating the words. For example:
    - The JSON5 property server.httpPort corresponds to SERVER_HTTP_PORT.
    - The JSON5 property broker.mqChannels corresponds to BROKER_MQCHANNELS.
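To illustrate the rules above (this helper is just a sketch, not part of Sylvia-IoT), the unseparated environment-variable form can be derived mechanically: replace dots with underscores, then uppercase. Word-separated forms such as SERVER_HTTP_PORT additionally split the camelCase words by hand.

```shell
# Derive the "all uppercase" environment-variable name from a JSON5 path.
to_env() {
    echo "$1" | tr '.' '_' | tr '[:lower:]' '[:upper:]'
}
to_env "broker.mqChannels"   # BROKER_MQCHANNELS
to_env "log.level"           # LOG_LEVEL
```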
Here are the complete configuration tables.
If an entry is marked Refer to example, the sample JSON5 file provides an example, or you can use the CLI help command to view the supported options.
Common Settings
JSON5 | CLI Parameters | Environment Variables | Default | Description |
---|---|---|---|---|
log.level | log.level | LOG_LEVEL | info | Log level. Refer to example |
log.style | log.style | LOG_STYLE | json | Log style. Refer to example |
server.httpPort | server.httpport | SERVER_HTTP_PORT | 1080 | HTTP listening port |
server.httpsPort | server.httpsport | SERVER_HTTPS_PORT | 1443 | HTTPS listening port |
server.cacertFile | server.cacertfile | SERVER_CACERT_FILE | | HTTPS root certificate file location |
server.certFile | server.certfile | SERVER_CERT_FILE | | HTTPS certificate file location |
server.keyFile | server.keyfile | SERVER_KEY_FILE | | HTTPS private key file location |
server.staticPath | server.static | SERVER_STATIC_PATH | | Static files directory location |
Detailed Explanation
- Root certificate is not currently used.
- Both certificate and private key must be used simultaneously to enable HTTPS service.
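For instance, a minimal server section with HTTPS enabled might look like the following sketch (the certificate paths are placeholders, not defaults):

```json5
"server": {
    "httpPort": 1080,
    "httpsPort": 1443,
    // Both of the following must be set for the HTTPS service to start:
    "certFile": "/home/user/certs/sylvia-iot.crt",
    "keyFile": "/home/user/certs/sylvia-iot.key",
},
```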
API Scopes
All APIs require access through registered clients and access tokens. Each token is associated with a specific client, and only authorized clients can access the APIs.
When a particular API is configured with apiScopes settings, the access token must include the relevant scopes, enabled by the client during registration and authorized by the user, in order to access that API.
Both command-line parameters and environment variables should be provided as JSON strings. For example:
--auth.api-scopes='{"auth.tokeninfo.get":[]}'
You can define custom scope names and apply them to various API scopes. You can refer to the example provided in the Authentication Service section for more details.
Authentication Service (auth)
JSON5 | CLI Parameters | Environment Variables | Default | Description |
---|---|---|---|---|
auth.db.engine | auth.db.engine | AUTH_DB_ENGINE | sqlite | Database type |
auth.db.mongodb.url | auth.db.mongodb.url | AUTH_DB_MONGODB_URL | mongodb://localhost:27017 | MongoDB connection URL |
auth.db.mongodb.database | auth.db.mongodb.database | AUTH_DB_MONGODB_DATABASE | auth | MongoDB database name |
auth.db.mongodb.poolSize | auth.db.mongodb.poolsize | AUTH_DB_MONGODB_POOLSIZE | | Maximum number of MongoDB connections |
auth.db.sqlite.path | auth.db.sqlite.path | AUTH_DB_SQLITE_PATH | auth.db | SQLite file location |
auth.templates.login | auth.templates | AUTH_TEMPLATES | | Login page template file location |
auth.templates.grant | auth.templates | AUTH_TEMPLATES | | Authorization page template file location |
auth.apiScopes | auth.api-scopes | AUTH_API_SCOPES | | API scope settings |
Detailed Explanation
- Templates:
- These are the web pages required for the OAuth2 authorization code grant flow. sylvia-iot-auth provides default pages, but Sylvia-IoT allows you to customize web pages to match your own style.
- The templates use the Jinja2 format (dependent on the tera package).
- Both command-line parameters and environment variables should use JSON strings. For example:
--auth.templates='{"login":"xxx"}'
- For more details, please refer to OAuth2 Authentication.
- API scopes
    - The auth module provides the following scopes that can be configured for the corresponding APIs to limit the access scope of clients:
        - auth.tokeninfo.get: Authorize clients to read token data.
            - GET /api/v1/auth/tokeninfo
        - auth.logout.post: Authorize clients to log out tokens.
            - POST /api/v1/auth/logout
        - user.get: Authorize clients to access the current user's profile data.
            - GET /api/v1/user
        - user.patch: Authorize clients to modify the current user's profile data.
            - PATCH /api/v1/user
        - user.get.admin: Authorize clients to access the data of all system users.
            - GET /api/v1/user/count
            - GET /api/v1/user/list
            - GET /api/v1/user/{userId}
        - user.post.admin: Authorize clients to create new users in the system.
            - POST /api/v1/user
        - user.patch.admin: Authorize clients to modify the data of any system user.
            - PATCH /api/v1/user/{userId}
        - user.delete.admin: Authorize clients to delete the data of any system user.
            - DELETE /api/v1/user/{userId}
        - client.get: Authorize clients to access the data of all system clients.
            - GET /api/v1/client/count
            - GET /api/v1/client/list
            - GET /api/v1/client/{clientId}
        - client.post: Authorize clients to create new clients in the system.
            - POST /api/v1/client
        - client.patch: Authorize clients to modify the data of any system client.
            - PATCH /api/v1/client/{clientId}
        - client.delete: Authorize clients to delete the data of any system client.
            - DELETE /api/v1/client/{clientId}
        - client.delete.user: Authorize clients to delete all clients of any system user.
            - DELETE /api/v1/client/user/{userId}
    - For example, suppose you define the following scopes for your service:
        - api.admin: Only authorizes removing all clients of a user.
        - api.rw: Allows read and write access to all APIs except DELETE /api/v1/client/user/{userId}.
        - api.readonly: Only allows access to GET APIs.
        - Token data access and logout are allowed for all clients.

      The corresponding configuration is:

        "auth": {
            ...
            "apiScopes": {
                "auth.tokeninfo.get": [],
                "auth.logout.post": [],
                "user.get": ["api.rw", "api.readonly"],
                "user.patch": ["api.rw"],
                "user.post.admin": ["api.rw"],
                "user.get.admin": ["api.rw", "api.readonly"],
                "user.patch.admin": ["api.rw"],
                "user.delete.admin": ["api.rw"],
                "client.post": ["api.rw"],
                "client.get": ["api.rw", "api.readonly"],
                "client.patch": ["api.rw"],
                "client.delete": ["api.rw"],
                "client.delete.user": ["api.admin"],
            },
            ...
        }
    - In this example, registered clients can freely select these three scopes. Users are then shown this information on the authorization page and decide whether to grant the client authorization.
Message Broker Service (broker)
JSON5 | CLI Parameters | Environment Variables | Default | Description |
---|---|---|---|---|
broker.auth | broker.auth | BROKER_AUTH | http://localhost:1080/auth | Authentication service URL |
broker.db.engine | broker.db.engine | BROKER_DB_ENGINE | sqlite | Database type |
broker.db.mongodb.url | broker.db.mongodb.url | BROKER_DB_MONGODB_URL | mongodb://localhost:27017 | MongoDB connection URL |
broker.db.mongodb.database | broker.db.mongodb.database | BROKER_DB_MONGODB_DATABASE | broker | MongoDB database name |
broker.db.mongodb.poolSize | broker.db.mongodb.poolsize | BROKER_DB_MONGODB_POOLSIZE | | Maximum number of MongoDB connections |
broker.db.sqlite.path | broker.db.sqlite.path | BROKER_DB_SQLITE_PATH | broker.db | SQLite file location |
broker.cache.engine | broker.cache.engine | BROKER_CACHE_ENGINE | none | Cache type |
broker.cache.memory.device | broker.cache.memory.device | BROKER_CACHE_MEMORY_DEVICE | 1,000,000 | Memory cache size for devices |
broker.cache.memory.deviceRoute | broker.cache.memory.device-route | BROKER_CACHE_MEMORY_DEVICE_ROUTE | 1,000,000 | Memory cache size for device routes |
broker.cache.memory.networkRoute | broker.cache.memory.network-route | BROKER_CACHE_MEMORY_NETWORK_ROUTE | 1,000,000 | Memory cache size for network routes |
broker.mq.prefetch | broker.mq.prefetch | BROKER_MQ_PREFETCH | 100 | Maximum number of AMQP consumers |
broker.mq.persistent | broker.mq.persistent | BROKER_MQ_PERSISTENT | false | Persistent message delivery for AMQP producers |
broker.mq.sharedPrefix | broker.mq.sharedprefix | BROKER_MQ_SHAREDPREFIX | $share/sylvia-iot-broker/ | MQTT shared subscription prefix |
broker.mqChannels.unit.url | broker.mq-channels.unit.url | BROKER_MQCHANNELS_UNIT_URL | amqp://localhost | Unit control message host |
broker.mqChannels.unit.prefetch | broker.mq-channels.unit.prefetch | BROKER_MQCHANNELS_UNIT_PREFETCH | 100 | Maximum number of AMQP consumers for unit control messages |
broker.mqChannels.application.url | broker.mq-channels.application.url | BROKER_MQCHANNELS_APPLICATION_URL | amqp://localhost | Application control message host |
broker.mqChannels.application.prefetch | broker.mq-channels.application.prefetch | BROKER_MQCHANNELS_APPLICATION_PREFETCH | 100 | Maximum number of AMQP consumers for application control messages |
broker.mqChannels.network.url | broker.mq-channels.network.url | BROKER_MQCHANNELS_NETWORK_URL | amqp://localhost | Network control message host |
broker.mqChannels.network.prefetch | broker.mq-channels.network.prefetch | BROKER_MQCHANNELS_NETWORK_PREFETCH | 100 | Maximum number of AMQP consumers for network control messages |
broker.mqChannels.device.url | broker.mq-channels.device.url | BROKER_MQCHANNELS_DEVICE_URL | amqp://localhost | Device control message host |
broker.mqChannels.device.prefetch | broker.mq-channels.device.prefetch | BROKER_MQCHANNELS_DEVICE_PREFETCH | 100 | Maximum number of AMQP consumers for device control messages |
broker.mqChannels.deviceRoute.url | broker.mq-channels.device-route.url | BROKER_MQCHANNELS_DEVICE_ROUTE_URL | amqp://localhost | Device route control message host |
broker.mqChannels.deviceRoute.prefetch | broker.mq-channels.device-route.prefetch | BROKER_MQCHANNELS_DEVICE_ROUTE_PREFETCH | 100 | Maximum number of AMQP consumers for device route control messages |
broker.mqChannels.networkRoute.url | broker.mq-channels.network-route.url | BROKER_MQCHANNELS_NETWORK_ROUTE_URL | amqp://localhost | Network route control message host |
broker.mqChannels.networkRoute.prefetch | broker.mq-channels.network-route.prefetch | BROKER_MQCHANNELS_NETWORK_ROUTE_PREFETCH | 100 | Maximum number of AMQP consumers for network route control messages |
broker.mqChannels.data.url | broker.mq-channels.data.url | BROKER_MQCHANNELS_DATA_URL | | Data message host |
broker.mqChannels.data.persistent | broker.mq-channels.data.persistent | BROKER_MQCHANNELS_DATA_PERSISTENT | false | Persistent delivery for data messages |
broker.apiScopes | broker.api-scopes | BROKER_API_SCOPES | | API scope settings |
Detailed Explanation
- The purpose of specifying the Authentication Service URL (broker.auth) is to verify the legitimacy of API calls, including user accounts and clients.
- MQ channels:
    - As the Sylvia-IoT Message Broker Service is a critical module that determines performance, many configurations are stored in memory. When APIs change these configurations, the changes need to be propagated to the various instances of the cluster through Control Channel Messages via message queues.
        - For relevant details, please refer to the Cache chapter.
    - data represents the Data Channel Message, which records all data into the sylvia-iot-data module.
        - If no parameters are specified (or JSON5 is set to null), no data will be stored.
        - For relevant details, please refer to the Data Flow chapter.
- API scopes: Please refer to the explanation in the Authentication Service section.
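As an example of the data channel settings described above, the following sketch (property names taken from the table in this section) sends broker data messages to a local AMQP host; setting url to null disables storage entirely:

```json5
"broker": {
    // ...
    "mqChannels": {
        // ...
        "data": {
            "url": "amqp://localhost",   // null: do not store any data
            "persistent": false,
        },
    },
},
```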
Core Manager Service (coremgr)
JSON5 | CLI Parameters | Environment Variables | Default | Description |
---|---|---|---|---|
coremgr.auth | coremgr.auth | COREMGR_AUTH | http://localhost:1080/auth | Authentication service URL |
coremgr.broker | coremgr.broker | COREMGR_BROKER | http://localhost:2080/broker | Message broker service URL |
coremgr.mq.engine.amqp | coremgr.mq.engine.amqp | COREMGR_MQ_ENGINE_AMQP | rabbitmq | AMQP type |
coremgr.mq.engine.mqtt | coremgr.mq.engine.mqtt | COREMGR_MQ_ENGINE_MQTT | emqx | MQTT type |
coremgr.mq.rabbitmq.username | coremgr.mq.rabbitmq.username | COREMGR_MQ_RABBITMQ_USERNAME | guest | RabbitMQ administrator account |
coremgr.mq.rabbitmq.password | coremgr.mq.rabbitmq.password | COREMGR_MQ_RABBITMQ_PASSWORD | guest | RabbitMQ administrator password |
coremgr.mq.rabbitmq.ttl | coremgr.mq.rabbitmq.ttl | COREMGR_MQ_RABBITMQ_TTL | | RabbitMQ default message TTL (seconds) |
coremgr.mq.rabbitmq.length | coremgr.mq.rabbitmq.length | COREMGR_MQ_RABBITMQ_LENGTH | | RabbitMQ default maximum number of messages in queues |
coremgr.mq.rabbitmq.hosts | coremgr.mq.rabbitmq.hosts | COREMGR_MQ_RABBITMQ_HOSTS | | (Reserved) |
coremgr.mq.emqx.apiKey | coremgr.mq.emqx.apikey | COREMGR_MQ_EMQX_APIKEY | | EMQX management API key |
coremgr.mq.emqx.apiSecret | coremgr.mq.emqx.apisecret | COREMGR_MQ_EMQX_APISECRET | | EMQX management API secret |
coremgr.mq.emqx.hosts | coremgr.mq.emqx.hosts | COREMGR_MQ_EMQX_HOSTS | | (Reserved) |
coremgr.mq.rumqttd.mqttPort | coremgr.mq.rumqttd.mqtt-port | COREMGR_MQ_RUMQTTD_MQTT_PORT | 1883 | rumqttd MQTT port |
coremgr.mq.rumqttd.mqttsPort | coremgr.mq.rumqttd.mqtts-port | COREMGR_MQ_RUMQTTD_MQTTS_PORT | 8883 | rumqttd MQTTS port |
coremgr.mq.rumqttd.consolePort | coremgr.mq.rumqttd.console-port | COREMGR_MQ_RUMQTTD_CONSOLE_PORT | 18083 | rumqttd management API port |
coremgr.mqChannels.data.url | coremgr.mq-channels.data.url | COREMGR_MQCHANNELS_DATA_URL | | Data message host |
coremgr.mqChannels.data.persistent | coremgr.mq-channels.data.persistent | COREMGR_MQCHANNELS_DATA_PERSISTENT | false | Persistent delivery for data messages |
Detailed Explanation
- MQ channels:
    - data represents the Data Channel Message.
        - Currently, coremgr supports recording the HTTP request content of all API requests except GET. Enabling the data channel records the API usage history.
        - If no parameters are specified (or JSON5 is set to null), no data will be stored.
Core Manager Command-Line Interface (coremgr-cli)
JSON5 | CLI Parameters | Environment Variables | Default | Description |
---|---|---|---|---|
coremgrCli.auth | coremgr-cli.auth | COREMGRCLI_AUTH | http://localhost:1080/auth | Authentication service URL |
coremgrCli.coremgr | coremgr-cli.coremgr | COREMGRCLI_COREMGR | http://localhost:3080/coremgr | Core manager service URL |
coremgrCli.data | coremgr-cli.data | COREMGRCLI_DATA | http://localhost:4080/data | Data service URL |
coremgrCli.clientId | coremgr-cli.client-id | COREMGRCLI_CLIENT_ID | | CLI client ID |
coremgrCli.redirectUri | coremgr-cli.redirect-uri | COREMGRCLI_REDIRECT_URI | | CLI client redirect URI |
Data Service (data)
JSON5 | CLI Parameters | Environment Variables | Default | Description |
---|---|---|---|---|
data.auth | data.auth | DATA_AUTH | http://localhost:1080/auth | Authentication service URL |
data.broker | data.broker | DATA_BROKER | http://localhost:2080/broker | Message broker service URL |
data.db.engine | data.db.engine | DATA_DB_ENGINE | sqlite | Database type |
data.db.mongodb.url | data.db.mongodb.url | DATA_DB_MONGODB_URL | mongodb://localhost:27017 | MongoDB connection URL |
data.db.mongodb.database | data.db.mongodb.database | DATA_DB_MONGODB_DATABASE | data | MongoDB database name |
data.db.mongodb.poolSize | data.db.mongodb.poolsize | DATA_DB_MONGODB_POOLSIZE | | Maximum number of MongoDB connections |
data.db.sqlite.path | data.db.sqlite.path | DATA_DB_SQLITE_PATH | data.db | SQLite file location |
data.mqChannels.broker.url | data.mq-channels.broker.url | DATA_MQCHANNELS_BROKER_URL | amqp://localhost | Data message host |
data.mqChannels.broker.prefetch | data.mq-channels.broker.prefetch | DATA_MQCHANNELS_BROKER_PREFETCH | 100 | Maximum number of AMQP consumers for data messages |
data.mqChannels.broker.sharedPrefix | data.mq-channels.broker.sharedprefix | DATA_MQCHANNELS_BROKER_SHAREDPREFIX | $share/sylvia-iot-data/ | MQTT shared subscription prefix |
data.mqChannels.coremgr.url | data.mq-channels.coremgr.url | DATA_MQCHANNELS_COREMGR_URL | amqp://localhost | Data message host |
data.mqChannels.coremgr.prefetch | data.mq-channels.coremgr.prefetch | DATA_MQCHANNELS_COREMGR_PREFETCH | 100 | Maximum number of AMQP consumers for data messages |
data.mqChannels.coremgr.sharedPrefix | data.mq-channels.coremgr.sharedprefix | DATA_MQCHANNELS_COREMGR_SHAREDPREFIX | $share/sylvia-iot-data/ | MQTT shared subscription prefix |
Internal Architecture
Chapter Contents:
- Detailed explanation of Sylvia-IoT components.
- Understanding the process of uplink and downlink data.
- Introduction to caching mechanisms.
Architecture
Here is the diagram of Sylvia-IoT components. In this chapter, we will explain each one in detail.
Sylvia-IoT Core Components
Abbreviated as ABCD (laughs)
Auth (sylvia-iot-auth)
- Purpose
- Provides the validity and information of access tokens for HTTP APIs, allowing APIs to determine whether to authorize access with the token.
- Offers the authorization mechanism for OAuth2, currently supporting the following flows:
- Authorization code grant flow
- Clients need to use a webview to display login and authorization pages.
- Currently used by coremgr CLI.
- Client credentials grant flow
- Currently reserved and not actively used.
- Authorization code grant flow
- Managed entities
- User accounts
- User's basic information.
- Permissions (roles).
- Clients
- Access permissions (scopes) for HTTP APIs.
- User accounts
- Dependencies
- None. It can operate independently.
Broker (sylvia-iot-broker)
- Purpose
- Manages entities related to devices.
- Binds devices and applications, forwards device data to applications, or receives data from applications to devices.
- (Optional) Can send all traffic passing through networks and application data via the data channel to the Data service for storage or analysis.
- Managed entities
- Units
- Composed of one owner and multiple members.
- Independently manage devices, applications, networks, and route (binding) rules.
- Applications
- Analyze data and present results based on device data.
- Networks
- Can use services directly connected to Sylvia-IoT or connect existing network services (e.g., The Things Network (TTN) or ChirpStack) to Sylvia-IoT using adapters.
- One network address can be used to transmit data from one device.
- Administrators (admin role) can create public networks.
- Devices
- Each device represents an application on an endpoint, such as a tracker, meter, sensor, etc.
- Devices need to be attached to a network address under a network to transmit data.
- Devices can be attached to public networks, but it requires administrator accounts (admin/manager roles) to set up.
- Each device has a unique identifier (device ID). If the application relies on this identifier, even if the network and address are changed, there is no need to change the application's management.
- Each device can be assigned a device profile based on the data content.
- With the profile, applications can quickly parse data without the need to create a mapping table for identifiers.
- Route rules
    - Binds devices to applications.
        - Many-to-many relationships are supported.
    - Binds networks to applications; all devices under that network are routed, so there is no need to bind them one by one.
        - Many-to-many relationships are supported.
        - Public networks cannot be bound.
- Dependencies
- Depends on Auth service.
Coremgr (sylvia-iot-coremgr)
- Purpose
- Coremgr, short for Core Manager, is responsible for managing the core components of Sylvia-IoT.
- Provides the main HTTP APIs for external direct access.
- The Auth service only exposes authentication/authorization APIs. User and client management still requires the use of coremgr API.
- Uses bridging to indirectly access Auth and Broker HTTP APIs to manage various entities.
- Creates queues and the corresponding permissions via the management APIs of RabbitMQ/EMQX and other message brokers.
  - Broker only manages the associations between entities and the AMQP/MQTT connections. The actual configuration of RabbitMQ/EMQX is performed by coremgr.
- (Optional) Sends operation records, including additions, modifications, deletions, etc., through the data channel to the Data service for storage or analysis.
- Managed entities
- (None)
- Dependencies
- Depends on Auth and Broker services.
- Depends on the management API of the message broker.
Coremgr CLI (sylvia-iot-coremgr-cli)
- Purpose
- Provides a command-line interface (CLI) for users to configure Sylvia-IoT using commands.
- Dependencies
- Depends on Auth and Coremgr services. Auth is only used for authentication/authorization.
- Can depend on the Data service to read historical data.
Control Channel vs. Data Channel
- The control channel is used to transmit messages related to entity management (users, units, devices, etc.). It can be categorized as follows:
- Unicast: Each message has only one consumer and is used for Sylvia-IoT to push messages to networks or applications, which will be explained in the later Data Flow section.
- Broadcast: Used to broadcast messages to various core processes within the Sylvia-IoT cluster, which will be explained in the later Cache section.
- The data channel is used to transmit device data or historical data.
- It covers Application Data, Network Data, and Coremgr OP Data.
- Currently, AMQP 0-9-1 and MQTT 3.1.1 protocols are implemented. Additionally, AMQP 1.0, Kafka, NATS, and other protocols can also be implemented.
general-mq
Sylvia-IoT utilizes general-mq to implement unicast and broadcast, abstracting the details of communication protocols.
By implementing unicast/broadcast modes for AMQP 1.0, Kafka, or other protocols in general-mq and corresponding management APIs in coremgr, Sylvia-IoT can support more protocols.
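The abstraction general-mq provides can be pictured as a minimal protocol-agnostic queue interface. The names below (`Queue`, `send`, `subscribe`) and the in-memory implementation are purely illustrative; they are not general-mq's actual Rust API:

```python
from abc import ABC, abstractmethod
from collections import defaultdict

class Queue(ABC):
    """Hypothetical protocol-agnostic queue, in the spirit of general-mq."""

    @abstractmethod
    def send(self, topic: str, payload: bytes) -> None: ...

    @abstractmethod
    def subscribe(self, topic: str, handler) -> None: ...

class InMemoryQueue(Queue):
    """Broadcast-style in-memory stand-in: every subscriber gets each message.
    An AMQP backend would use fanout exchanges; MQTT is pub/sub natively."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def send(self, topic, payload):
        for handler in self._handlers[topic]:
            handler(payload)

received = []
q = InMemoryQueue()
q.subscribe("broker.ctrl", received.append)
q.send("broker.ctrl", b"device-changed")
```

Swapping the backend class while keeping the interface is exactly how supporting a new protocol would stay invisible to the Broker's code.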
Data (sylvia-iot-data)
- Purpose
- Record or analyze data from the data channel.
This module is unique in that it does not have a specific implementation. Currently, sylvia-iot-data in Sylvia-IoT Core provides storage and retrieval of raw data.
Below are some possible scenarios for extension:
- Rule engine.
- Since the data channel contains all network data, the Data module can be implemented as a common rule engine in IoT platforms.
- Stream processing.
- The data channel can be implemented as a Kafka queue for stream processing.
Message Brokers
Here, message brokers refer to services like RabbitMQ and EMQX, not Sylvia-IoT Broker. Unless specifically mentioned, "Broker" in this document refers to Sylvia-IoT Broker.
Some important points:
- Since coremgr needs to configure queues through the management APIs, relevant implementations must be provided to support this feature. Currently, coremgr supports the following message brokers:
- RabbitMQ
- EMQX
- In the future, Kafka or other protocols can be implemented to broaden the application scope of Sylvia-IoT.
- Sylvia-IoT has the following requirements:
  - Message queuing: the traditional message pattern, where each message has exactly one consumer.
    - MQTT is implemented using shared subscriptions.
  - Publish/Subscribe: used for broadcasting control channel messages. This will be covered in the Cache section.
    - AMQP is implemented using fanout exchanges and temporary queues.
rumqttd
In the Quick Start section, we used sylvia-iot-core as an example. This executable includes the complete Auth/Broker/Coremgr/Data and rumqttd.
To make it possible to run in resource-constrained environments, sylvia-iot-core contains the rumqttd MQTT broker. By configuring it to use SQLite as the database and MQTT for message delivery, the sylvia-iot-core achieves the full functionality of Sylvia-IoT in just two files.
The "core" is an executable that contains all the complete functionalities, whereas "coremgr" only contains management functionalities and does not include rumqttd.
To accommodate such limited environments, Sylvia-IoT adopts rumqttd. Currently, Sylvia-IoT does not implement management APIs for rumqttd, so this mode is not suitable for a cluster architecture and is not recommended when queue permissions must be enforced.
Third-Party Components
Application Servers, Network Servers
In addition to using the data channel to send and receive device data, applications and networks can also access Sylvia-IoT HTTP APIs and control channel messages to build their own management systems.
Devices
Devices in Sylvia-IoT are terminal devices in the narrow sense: endpoints that only process the data required by applications and are generally bound to a network module. The network module itself, however, is interchangeable.
Here's an example of replacing the network module: Suppose the device uses a Raspberry Pi to connect sensors for specific application development. The network part can be changed to different protocols at any time (e.g., switching from LoRa to WiFi or even using an Ethernet cable). In Sylvia-IoT, you only need to modify the corresponding network and address settings for the device.
Data Flow
This chapter introduces how Sylvia-IoT handles data flow, including the following scenarios:
- Uplink data: Data sent from devices to applications.
- Downlink data: Data sent from applications to devices.
- Control channel: Messages transmitted from Broker to the network.
- Coremgr operation data: Records of system operation history, including management operations.
Uplink Data
When device data is sent to the corresponding queue through the network service, the data will be processed and sent to the application as follows:
1. If the data format is correct, it proceeds to the next step; otherwise, it is discarded.
2. The Broker first sends the data directly to the Data module (via the queue) to store the complete uplink payload.
3. Scan all device routes and perform the following actions:
   - Send the data to the corresponding application queue.
   - Store the data sent to the application in the Data module.
4. Scan all network routes and perform the following actions:
   - Check whether the data was already sent during the device-route stage. If so, move to the next network route; if not:
     - Send the data to the corresponding application queue.
     - Store the data sent to the application in the Data module.
The purpose of the comparison in Step 4 is to avoid duplicate sending when device routes and network routes overlap.
Downlink Data
When the application service sends data to be delivered to a device through the queue, the data is processed as follows:
1. If the format is correct, proceed to the next step; otherwise, respond with an error message through the `resp` queue.
2. Check whether the destination device belongs to the specified unit. If it does, proceed to the next step; otherwise, respond with an error message through the `resp` queue.
3. Assign a unique identifier (data ID) to this entry and store it in the Data module.
4. Store the ID and the source application of this data in the database, so that the delivery status can later be reported back to the application service.
5. Send the data (including the data ID) to the queue of the corresponding network service.
6. Once the data has been sent to the network service queue, report the data ID back to the application service so it can track the delivery status.
Compared to uplink data, downlink data is slightly more complex, mainly because reporting the delivery status is required.
The Broker does not provide a `resp` queue for the network service to report data correctness, because the Broker, as part of the infrastructure, always ensures the data it sends is correct. The network service only needs to focus on delivering the data to the device and reporting the final result. Even if the data sent by the Broker were invalid, the network service can report that directly through the `result` queue.
After processing the data (regardless of success or failure), the network service MUST use the data ID to report back to the Broker in the following order:
- If the format is correct, proceed to the next step; otherwise, discard the message.
- Submit a request to the Data module for result updates using the ID.
- Retrieve the application service information associated with that ID and report the result back to the application service that sent this downlink data (ensuring that other applications will not receive the result).
- If step 3 is successful, clear the ID information from the database.
The use of an additional ID database aims to retain the source application of the downlink data. After all, if data is sent by application A, why should application B receive the result?
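The ID bookkeeping described above can be sketched like this. It is an illustration only; the real Broker stores this mapping in a database, and the names here are this sketch's own:

```python
class DownlinkTracker:
    """Map each downlink data ID to its source application so results
    are reported only to the application that sent the data."""

    def __init__(self):
        self._sources = {}  # data_id -> source application ID
        self._next = 0

    def register(self, app_id):
        """Assign a unique data ID and remember the source application."""
        self._next += 1
        data_id = f"dl-{self._next}"
        self._sources[data_id] = app_id
        return data_id

    def report(self, data_id, result):
        """Look up the source application for a reported result, then clear
        the ID so later duplicate reports are ignored."""
        app_id = self._sources.pop(data_id, None)
        if app_id is None:
            return None  # unknown or already-reported ID
        return (app_id, result)

tracker = DownlinkTracker()
data_id = tracker.register("app-a")
outcome = tracker.report(data_id, 0)   # only app-a receives the result
```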
Control Channel
The Broker or coremgr provides APIs that allow the network service to update device data at any
time. However, relying on periodic API requests for synchronization is inefficient and may impact
routing performance due to frequent requests.
The Broker provides a mechanism: when device data changes, the information is pushed to the corresponding network service through `broker.network.[unit-code].[network-code].ctrl`.
Sylvia-IoT allows devices to change their associated networks or addresses. When this operation occurs, the network service will receive the following messages based on different scenarios:
- Changing from network A to network B:
- Notify network A that a specific address has been removed.
- Notify network B that a specific address has been added.
- Changing the address within network A:
- Notify network A that a specific address has been removed.
- Notify network A that a specific address has been added.
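Both scenarios above reduce to one "remove" and one "add" notification. This can be expressed as a small function; the message shape `(network, operation, address)` is illustrative, not the Broker's actual control-channel schema:

```python
def change_notifications(old, new):
    """Compute control-channel messages when a device moves from
    (network, address) `old` to (network, address) `new`."""
    msgs = []
    if old != new:
        # Remove the old address, then add the new one; when the network is
        # unchanged, both messages simply go to the same network service.
        msgs.append((old[0], "del-device", old[1]))
        msgs.append((new[0], "add-device", new[1]))
    return msgs

# Changing from network A to network B:
cross = change_notifications(("net-a", "0001"), ("net-b", "0001"))
# Changing the address within network A:
same = change_notifications(("net-a", "0001"), ("net-a", "0002"))
```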
Operation Data
Coremgr has an optional configuration to store all system operation logs (limited to coremgr HTTP APIs, of course). The current scope includes POST/PUT/PATCH/DELETE, etc.
As shown in the diagram, after each API operation, coremgr records the following data:
- Request time
- Response time
- Processing time
- HTTP status
- Source IP address
- HTTP method
- (Optional) HTTP request body
  - The content of `data.password` is filtered: when the request contains a `password` field, its value is cleared but the key is retained to indicate that the request involved a password change.
- User ID
- Client ID
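The password-filtering rule above can be sketched as follows. The `data.password` layout follows the document; treating the body as a plain JSON object is this sketch's assumption:

```python
def filter_password(body: dict) -> dict:
    """Clear the password value but keep the key, so the operation log still
    shows that the request involved a password change."""
    filtered = dict(body)
    if "password" in filtered.get("data", {}):
        data = dict(filtered["data"])
        data["password"] = ""  # value removed, key retained
        filtered["data"] = data
    return filtered

record = filter_password({"data": {"name": "admin", "password": "secret"}})
# record == {"data": {"name": "admin", "password": ""}}
```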
Cache
In the Data Flow section, it is mentioned that the main task of the Broker is "to match routing rules and forward data". Typically, routing rules are stored in a database, so the speed of matching becomes a critical bottleneck. This is especially true when forwarding thousands or even tens of thousands of data at the same time, putting significant pressure on the database.
As is well known, one of the best ways to relieve database pressure is caching, and Redis is a popular solution for this purpose.
Sylvia-IoT has been designed from the beginning to be as simple as possible and to adopt the minimum possible variety of technologies (you can run the complete Sylvia-IoT functionality with just SQLite and MQTT). Regarding caching, it uses an in-process-memory approach, which means storing data in variables within the process itself. The matching process does not require network or IPC as it directly accesses variables within the process.
Currently, the Broker implements this cache using `std::collections::HashMap`.
The diagram above provides an overview of Sylvia-IoT's caching mechanism. To meet the requirements of a cluster architecture, a broadcast queue is introduced to implement the Control Channel.
To ensure data accuracy, updates are first made to the database before updating the cache. Below, we outline the steps:
- Users modify routing rules through the HTTP API.
- Similar to a regular API implementation, the database is directly updated.
- Before responding to the HTTP request, an update message is sent to the control channel, containing necessary update information (optional details like names are excluded).
- While responding to the HTTP request, the control channel broadcasts the update message to all processes in the cluster.
- Upon receiving the message, each process updates the content of its variables.
For simplicity, the current implementation mostly deletes cache entries (the message in step 3 carries a deletion action) and refills them on a subsequent cache miss.
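The invalidate-then-refill cycle can be modelled with a plain dictionary per process. This is a sketch only (the Broker uses Rust's `std::collections::HashMap`); the class names and route layout are invented for illustration:

```python
class Db:
    """Stand-in for the shared database of routing rules."""
    def __init__(self):
        self.routes = {"dev1": "app-a"}

class Process:
    """One cluster member: an in-process cache backed by the database."""

    def __init__(self, db):
        self.db = db
        self.cache = {}

    def on_ctrl_message(self, device_id):
        # The broadcast tells every process to drop the stale entry.
        self.cache.pop(device_id, None)

    def lookup(self, device_id):
        # Cache miss: fall back to the database and fill the cache.
        if device_id not in self.cache:
            self.cache[device_id] = self.db.routes.get(device_id)
        return self.cache[device_id]

db = Db()
p1, p2 = Process(db), Process(db)
p1.lookup("dev1"); p2.lookup("dev1")   # both caches warm
db.routes["dev1"] = "app-b"            # database updated first
for p in (p1, p2):                     # broadcast invalidation to the cluster
    p.on_ctrl_message("dev1")
# the next lookups refill from the database and see the new route
```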
Let's discuss a few special situations:
- The caching design of the Broker adopts "eventual consistency." After step 3, there might be a short period during which the old routing is still in use. However, this period is usually not very long (within tens or hundreds of milliseconds, or perhaps even shorter).
- To avoid data inconsistency, when a process detects a reconnection to the control channel's queue, it completely clears the cache content. It then reads the data from the database during a cache-miss event.
In the Configuration File section, `mqChannels` contains the control-channel settings corresponding to each API.
Relying on in-process variables as the cache allows the Sylvia-IoT Broker to achieve efficient forwarding.
Developer's Guide
Chapter Contents:
- OAuth2 authentication process.
- Developing network services.
- Developing application services.
- Developing and contributing to the Sylvia-IoT core.
OAuth2 Authentication
Sylvia-IoT HTTP APIs require obtaining an access token through OAuth2 for access. The following scenarios require using OAuth2 authentication and obtaining access tokens:
- Accessing Sylvia-IoT HTTP API.
- Developing network and application services that need to integrate with sylvia-iot-auth for user account and token authentication.
sylvia-iot-auth provides basic login and authorization pages, and this chapter will also describe how to develop custom pages as needed.
Before Getting Started
Before proceeding, you need to create the first user account and client. In the Quick Start guide, we created the following resources:
- User account: name is admin, and the password is admin.
- Client: ID is public, and the redirect URI is http://localhost:1080/auth/oauth2/redirect.
You can use coremgr-cli to obtain the token using the above information. If you want to create your own user account and client, you can do so using the CLI, or you can follow the details below.
- For user accounts, the password is hashed with PBKDF2 using a salt and 10000 iterations. Replace salt and password with your chosen salt and the resulting hash, respectively. Other fields can also be replaced with your specified values.
- For clients, replace `clientId` and `redirectUri`. The redirect URI should be set to the client's address. If your service is accessed through http://localhost or https://network.example.com and receives the authorization code at the path /network/redirect, set the redirect URI to `["http://localhost/network/redirect","https://network.example.com/network/redirect"]`.
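The hashing scheme above can be reproduced with Python's standard library. This sketch assumes a SHA-256 digest and hex encoding; check the sylvia-iot-auth source for the exact digest and encoding before generating real records:

```python
import hashlib

def hash_password(password: str, salt: str, iterations: int = 10000) -> str:
    """PBKDF2 with a per-user salt and 10000 iterations, hex-encoded.
    The sha256 digest is this sketch's assumption."""
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt.encode(), iterations
    )
    return digest.hex()

hashed = hash_password("admin", "randomsalt")
# store `randomsalt` and `hashed` together in the user record
```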
Using Browser and Curl
Here, we will explain how to log in with your account credentials and obtain a session ID to access the authorization page and obtain the token. The following examples use the user account and client created in the Quick Start guide.
Open your browser and enter the URL http://localhost:1080/auth/oauth2/auth?response_type=code&redirect_uri=http%3A%2F%2Flocalhost%3A1080%2Fauth%2Foauth2%2Fredirect&client_id=public
Enter your account credentials. If you are redirected to the authorization page, it means you have successfully logged in. The page will display the API scopes required by this client. If you agree, click the Accept button. After that, the URL in the browser will look like the following (your content will be slightly different):
http://localhost:1080/auth/oauth2/redirect?code=62a801a7d6ceaf2d1018cbac60a6b3d1744295016214bfec6214397d73368278
The `code` in the URL is the authorization code. You must use it with the curl command within 30 seconds to obtain the token:
curl -X POST http://localhost:1080/auth/oauth2/token -d 'grant_type=authorization_code&code=62a801a7d6ceaf2d1018cbac60a6b3d1744295016214bfec6214397d73368278&redirect_uri=http%3A%2F%2Flocalhost%3A1080%2Fauth%2Foauth2%2Fredirect&client_id=public'
If you see the following message, it means you have obtained the token (your content will be slightly different):
{"access_token":"fecc5af17e254e6c5a561b7acc900c8f0449a42e77f07a19261c2e6cff518ec8","refresh_token":"5905fc23f65ca7ed92bc7be74e33fc3e79cd8bce2c9ef2ef1bb368caaf6c07f0","token_type":"bearer","expires_in":3599,"scope":""}
Using Curl
If you want to use the curl command to assist with your program development, you can follow these steps. First, use the following command to log in and obtain the session ID:
curl -v -X POST http://localhost:1080/auth/oauth2/login -d 'state=response_type%3Dcode%26client_id%3Dpublic%26redirect_uri%3Dhttp%253A%252F%252Flocalhost%253A1080%252Fauth%252Foauth2%252Fredirect&account=admin&password=admin'
If you see the response like this (your content will be slightly different):
< HTTP/1.1 302 Found
< content-length: 0
< access-control-allow-credentials: true
< location: /auth/oauth2/authorize?response_type=code&client_id=public&redirect_uri=http%3A%2F%2Flocalhost%3A1080%2Fauth%2Foauth2%2Fredirect&session_id=6643a450b4d678f7d0223fde9e118a2733f1958aa3fc55d616ec278e83d7a06a
< vary: Origin, Access-Control-Request-Method, Access-Control-Request-Headers
< access-control-expose-headers: location
< date: Sat, 15 Jul 2023 04:25:21 GMT
Keep the `session_id` value from the location header and use it in the next HTTP request within 60 seconds:
curl -v -X POST http://localhost:1080/auth/oauth2/authorize -d 'allow=yes&session_id=6643a450b4d678f7d0223fde9e118a2733f1958aa3fc55d616ec278e83d7a06a&client_id=public&response_type=code&redirect_uri=http%3A%2F%2Flocalhost%3A1080%2Fauth%2Foauth2%2Fredirect'
If you see the response like this (your content will be slightly different):
< HTTP/1.1 302 Found
< content-length: 0
< access-control-allow-credentials: true
< location: http://localhost:1080/auth/oauth2/redirect?code=eee02ae34b6c93f955ebf244bccec2b7e6534e1a8dc451a2ed92a790be7b14bb
< vary: Origin, Access-Control-Request-Method, Access-Control-Request-Headers
< access-control-expose-headers: location
< date: Sat, 15 Jul 2023 04:40:36 GMT
The `code` in the location header is the authorization code. You must use it with the curl command within 30 seconds to obtain the token:
curl -X POST http://localhost:1080/auth/oauth2/token -d 'grant_type=authorization_code&code=eee02ae34b6c93f955ebf244bccec2b7e6534e1a8dc451a2ed92a790be7b14bb&redirect_uri=http%3A%2F%2Flocalhost%3A1080%2Fauth%2Foauth2%2Fredirect&client_id=public'
If you see the following message, it means you have obtained the token (your content will be slightly different):
{"access_token":"6994982614dc9f6f2bff08169f7636873531686c34c02fbd6bb45655c8f24b13","refresh_token":"387822850a8fa9a474c413b62a17d9f218204ddcaad51ca475448827b83972fe","token_type":"bearer","expires_in":3599,"scope":""}
Authentication Flow Endpoints
Here are the endpoints involved in the OAuth2 authentication flow:
GET /auth/oauth2/auth
- Verifies the client's basic information and redirects to the next endpoint if successful.
- Query parameters:
  - `response_type`: Must be `code`.
  - `client_id`: Client identifier.
  - `redirect_uri`: The redirect URI where the authorization code will be received.
  - `scope`: (Optional) The requested scope of access.
  - `state`: (Optional) Will be included when receiving the authorization code. Generally used to retain the previous page information for returning after login.
GET /auth/oauth2/login
- Displays the account login page.
- Query parameters will be automatically populated from the previous step.
  - `state`: (Auto-generated)
- Pressing the login button triggers the next HTTP request.
POST /auth/oauth2/login
- Logs in with the account username and password and redirects to the next endpoint if successful.
- HTTP body parameters:
  - `account`: Account username.
  - `password`: Password.
    - Since these are sent in plaintext, it is recommended to use HTTPS and a trusted browser component (webview).
  - `state`: Content of the state from the previous step.
GET /auth/oauth2/authorize
- Authenticates the client parameters and session ID, and displays the client's permission requirements.
- Query parameters will be automatically populated from the previous step.
  - (Same as `GET /auth/oauth2/auth`)
  - `session_id`: The session ID of the current login process. Currently reserved for 60 seconds.
- Pressing the Allow or Deny button triggers the next HTTP request.
POST /auth/oauth2/authorize
- Authenticates the client and generates the authorization code. The endpoint will redirect to the address specified by the client whether successful or failed.
- HTTP body parameters:
  - (Same as the `GET /auth/oauth2/authorize` query)
  - `allow`: `yes` indicates approval; any other value indicates rejection.
- Redirect parameters:
  - `code`: The authorization code. This value must be used in the next HTTP request within 30 seconds.
POST /auth/oauth2/token
- Authenticates the client information and authorization code, and generates the access token.
- HTTP body parameters:
  - `grant_type`: Must be `authorization_code`.
  - `code`: The value of the authorization code.
  - `redirect_uri`: The redirect URI of the client.
  - `client_id`: Client identifier.
- Response content:
  - `access_token`: The access token for the Sylvia-IoT HTTP APIs.
  - `refresh_token`: Used to obtain a new token when the access token expires.
  - `token_type`: `bearer`.
  - `expires_in`: Expiration time in seconds.
  - `scope`: Access scope.
POST /auth/oauth2/refresh
- Obtains a new access token using the refresh token.
- HTTP body parameters:
  - `grant_type`: Must be `refresh_token`.
  - `refresh_token`: The value of the refresh token.
  - `scope`: (Optional) The requested scopes of access.
  - `client_id`: (Optional) Client identifier.
- Response content: Same as the response of `POST /auth/oauth2/token`.
Developing Your Own Templates
You can refer to the original version of the templates, paying attention to the Jinja2 variables that must be preserved within `{{ }}`.
For the account login page, please reserve the following variables:
- `scope_path`: Determines the endpoint for the `POST /login` request triggered when the "Login" button is clicked.
  - The default for Sylvia-IoT is `SCHEME://SERVER_HOST/auth`, where `SCHEME://SERVER_HOST` comes from the `GET /auth` endpoint information.
- `state`: When `GET /auth` succeeds, sylvia-iot-auth generates the state content and includes it in the template.
For the client authorization page, please reserve the following variables:
- `scope_path`: Determines the endpoint for the `POST /authorize` request triggered when the button is clicked.
  - The default for Sylvia-IoT is `SCHEME://SERVER_HOST/auth`, where `SCHEME://SERVER_HOST` comes from the `POST /login` endpoint information.
- Refer to the `GET /auth/oauth2/authorize` endpoint section above for the other parameters.
You can choose to implement the login or authorization page content yourself and provide the following parameters in the Configuration File:
- `auth.db.templates.login`: The file path to the login page template.
- `auth.db.templates.grant`: The file path to the authorization page template.
Network Services
This chapter provides a brief overview of key points in developing network services, including:
- Data Channel
- Control Channel
- Using the SDK to connect channels in Rust
Before starting this chapter, please make sure you have read and understood the Data Flow section, and are familiar with the generation and consumption timing of queues and related data.
Queues and Data Formats
- This document defines the data content of the queues between the Broker and network services.
- Both the data and control channels use unicast mode.
  - AMQP properties:
    - durable: true
    - exclusive: false
    - auto-delete: false
    - ttl: determined when the network is created
    - max-length: determined when the network is created
  - MQTT properties:
    - QoS: 1 at the Broker side
    - clean session: true at the Broker side
- In the Data Flow section, it is mentioned that network services need to retain the `dataId` while processing downlink data, for subsequent result reporting.
  - Unreported downlink data has no impact on the Broker.
    - Entries are currently retained for one day; if never reported, they remain marked as "unreported".
    - The application service can decide how to handle downlink data that has gone unreported for too long.
- Rules regarding `result`:
  - Values less than 0 indicate processing in progress.
    - -2: The data is being sent to the network service. Set by the Broker before storing the entry in the database.
    - -1: The network service has received the data. Must be reported as -1 by the network service via the `result` queue.
  - Values of 0 or greater indicate that processing is complete. At this point the entry is removed from the dldata database, and further reports can no longer be relayed to the application side.
    - All such values are reported by the network service.
    - 0: Successfully sent to the device, or the device responded successfully.
    - Positive values: Unable to send to the device, or the device responded with an error.

As the result values are defined by each network service, the application side still needs to know which network the device is currently bound to. Following the rules above when developing network services makes the presentation on the application side more consistent.
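The result convention can be summarized in code. Only the numeric ranges come from the rules above; the status labels are this sketch's own, not part of any Sylvia-IoT schema:

```python
def result_status(result: int) -> str:
    """Map a downlink `result` value to a display status, following the
    convention above (labels are illustrative only)."""
    if result == -2:
        return "sending to network service"
    if result == -1:
        return "received by network service"
    if result < 0:
        return "in progress"      # other negative values: still processing
    if result == 0:
        return "delivered"        # sent to the device or device responded OK
    return "error"                # positive: delivery failed or device error

statuses = [result_status(r) for r in (-2, -1, 0, 3)]
```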
Rust and Using the SDK
For Rust developers, there is an SDK available to assist in developing network services. Usage examples can be found in the Appendix chapter. Here are a few tips on how to use it:
- Channel maintenance is handled by `NetworkMgr` in the `mq` module.
- One `NetworkMgr` corresponds to one network service.
- Only manage `NetworkMgr`; there is no need to manually manage the connection status of the individual queues or the AMQP/MQTT properties.
- Register an `EventHandler` to receive real-time updates when the queue status changes or data is delivered.
- Use `send_uldata()` and `send_dldata_result()` to send data to the Broker.
Application Services
This chapter provides a brief overview of key points in developing application services, including:
- Data Channel
- Using the SDK to connect channels in Rust
Before starting this chapter, please make sure you have read and understood the Data Flow section, and are familiar with the generation and consumption timing of queues and related data.
Queues and Data Formats
- This document defines the data content of the queues between the Broker and application services.
- The data channel uses unicast mode.
  - AMQP properties:
    - durable: true
    - exclusive: false
    - auto-delete: false
    - ttl: determined when the application is created
    - max-length: determined when the application is created
  - MQTT properties:
    - QoS: 1 at the Broker side
    - clean session: true at the Broker side
- In the Data Flow chapter, it is mentioned that when downlink data is sent to the Broker through the `dldata` queue, the Broker immediately reports the result.
  - The `correlationId` should be unique. If the application service sends a large amount of downlink data simultaneously, this correlation ID is used to track whether each transmission was correctly handed to the network service.
  - If the data is processed successfully, the `dataId` is returned. The application service can use this data ID to track the processing status of the downlink data in the network service.
- The
- In the downlink data, you can specify the destination device using either the `deviceId` or the combination of `networkCode` and `networkAddr`.
  - If the device is on a public network, you must use the `deviceId`. Sylvia-IoT adopts this approach to prevent application services from arbitrarily sending data to devices that do not belong to their own unit.
- Currently, the control channel is not supported. Changes to devices must rely on application services to request the Sylvia-IoT HTTP APIs or manage the list of devices themselves.
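Choosing the destination field when building a downlink message might look like the sketch below. The field names `correlationId`, `deviceId`, `networkCode`, and `networkAddr` follow the document; the `data` payload field and the overall message shape are assumptions, so consult the Broker's queue documentation for the real schema:

```python
import json

def build_dldata(correlation_id, payload, device_id=None,
                 network_code=None, network_addr=None):
    """Build a downlink message addressed either by `deviceId` or by the
    `networkCode` + `networkAddr` pair (public networks require deviceId)."""
    msg = {"correlationId": correlation_id, "data": payload}
    if device_id is not None:
        msg["deviceId"] = device_id
    elif network_code is not None and network_addr is not None:
        msg["networkCode"] = network_code
        msg["networkAddr"] = network_addr
    else:
        raise ValueError("need deviceId or networkCode+networkAddr")
    return json.dumps(msg)

msg = build_dldata("corr-1", "01aa", device_id="dev1")
msg2 = build_dldata("corr-2", "01aa", network_code="lora", network_addr="0001")
```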
Rust and Using the SDK
For Rust developers, there is an SDK available to assist in developing application services. Usage examples can be found in the Appendix chapter. Here are a few tips on how to use it:
- Channel maintenance is handled by `ApplicationMgr` in the `mq` module.
- One `ApplicationMgr` corresponds to one application service.
- Only manage `ApplicationMgr`; there is no need to manually manage the connection status of the individual queues or the AMQP/MQTT properties.
- Register an `EventHandler` to receive real-time updates when the queue status changes or data is delivered.
- Use `send_dldata()` to send data to the Broker.
Sylvia-IoT Core
If you are interested in Sylvia-IoT and would like to develop core functionalities (that is, ABCD: Auth, Broker, Coremgr, Data), this chapter introduces the code structure and some important considerations.
Directory Structure
Here, we explain the directory and file arrangement structure for the various components of Sylvia-IoT.
[project]/
├── doc/
│   ├── api.md
│   ├── cache.md
│   ├── message.md
│   └── schema.md
├── src/
│   ├── bin/
│   │   ├── [project].rs
│   │   ├── [bin1].rs
│   │   ├── [bin2].rs
│   │   └── ...
│   ├── libs/
│   │   ├── config.rs
│   │   ├── [lib1]/
│   │   ├── [lib2].rs
│   │   └── ...
│   ├── models/
│   │   ├── [engine1]/
│   │   │   ├── [table1].rs
│   │   │   ├── [table2].rs
│   │   │   └── ...
│   │   ├── [engine2]/
│   │   │   ├── [table1].rs
│   │   │   ├── [table2].rs
│   │   │   └── ...
│   │   ├── [table1].rs
│   │   ├── [table2].rs
│   │   └── ...
│   └── routes/
│       ├── v1/
│       │   ├── [api1]/
│       │   │   ├── api.rs
│       │   │   ├── request.rs
│       │   │   └── response.rs
│       │   ├── [api2]/
│       │   │   ├── api.rs
│       │   │   ├── request.rs
│       │   │   └── response.rs
│       │   └── ...
│       ├── v2/
│       ├── [non-versioned-api]/
│       ├── ...
│       └── middleware.rs
├── tests/
│   ├── libs/
│   │   ├── config.rs
│   │   ├── [lib1]/
│   │   ├── [lib2].rs
│   │   └── ...
│   ├── models/
│   │   ├── [engine1]/
│   │   │   ├── [table1].rs
│   │   │   ├── [table2].rs
│   │   │   └── ...
│   │   ├── [engine2]/
│   │   │   ├── [table1].rs
│   │   │   ├── [table2].rs
│   │   │   └── ...
│   │   ├── [table1].rs
│   │   ├── [table2].rs
│   │   └── ...
│   └── routes/
│       ├── v1/
│       │   ├── [api1]/
│       │   │   ├── api.rs
│       │   │   ├── request.rs
│       │   │   └── response.rs
│       │   ├── [api2]/
│       │   │   ├── api.rs
│       │   │   ├── request.rs
│       │   │   └── response.rs
│       │   └── ...
│       ├── v2/
│       ├── [non-versioned-api]/
│       ├── ...
│       └── middleware.rs
├── Cargo.toml
├── LICENSE
└── README.md
Here are several key points to note:
- `bin`: Contains an .rs file with the same name as the project.
- `doc`: Intended for complete documentation.
- `libs`: Contains files other than the database and API-related components.
- `models`: Designed primarily around table-based structures, separated by database engine.
- `routes`: Contains the implementation of the HTTP APIs.
  - Apart from standard APIs such as OAuth2, APIs should be versioned.
- `tests`: Corresponds one-to-one with the `src` directory.
Dependencies
- `libs` and `models` do not depend on any other folders.
- In `routes`:
  - The entire project's initialization code is centralized in `routes/mod.rs`.
  - This approach reduces the workload of `main.rs` and increases integration-test coverage.
- Modules inside `models` must not depend on each other; shared functionality belongs in the parent module and is referenced from there. The same applies to modules within `routes`.
Code Style
Using rustfmt
Please make sure to ALWAYS use `rustfmt` to format all files. We recommend writing code in VSCode with the rust-analyzer extension.
Below is the author's development environment for your reference:
- VSCode Extensions:
- CodeLLDB (Vadim Chugunov)
- crates (Seray Uzgur)
- Docker (Microsoft)
- GitHub Actions (Mathieu Dutour)
- rust-analyzer (The Rust Programming Language)
- YAML (Red Hat)
- VSCode Settings:
```json
{
    "crates.listPreReleases": true,
    "editor.formatOnSave": true,
    "editor.renderWhitespace": "all",
    "editor.roundedSelection": false,
    "editor.tabSize": 4,
    "files.eol": "\n",
    "rust-analyzer.inlayHints.chainingHints.enable": false,
    "rust-analyzer.inlayHints.closingBraceHints.enable": false,
    "rust-analyzer.inlayHints.parameterHints.enable": false,
    "rust-analyzer.inlayHints.typeHints.enable": false,
    "rust-analyzer.server.extraEnv": {
        "RUSTFLAGS": "-C instrument-coverage"
    }
}
```
Setting `-C instrument-coverage` in the `RUSTFLAGS` environment variable is due to the author's need to generate coverage reports during testing. Keeping the flag in the editor's environment prevents the recompilation that would otherwise be triggered by saving a file and then running tests. Below is the command for running tests:

```shell
RUSTFLAGS="-C instrument-coverage" cargo test -p $PROJ --test integration_test -- --nocapture
```
MVC vs. Microservices
I prefer a bottom-up development approach. An architecture like MVC, in which the database is designed as a lower-level generic interface and the upper API layer implements the various functionalities on top of it, aligns well with my personal style. This is the reason behind the creation of `models` and `routes`.
However, when designing the entire Sylvia-IoT platform, I also aimed for modularity and chose a microservices-based approach (i.e., the ABCD components: auth/broker/coremgr/data), strictly adhering to the principle of hierarchical dependencies.
Even with a microservices architecture, as described in the previous section
Directory Structure, when main.rs
references the required routes
, the entire
project can still be compiled into a single executable file and run on a single machine. This
design offers several deployment options, such as:
- Monolith: Running a single all-in-one executable on a single machine.
- Microservices cluster: Running each component independently on different machines, with each component setting up its own cluster.
- Monolith cluster: Running the all-in-one on multiple machines to form a clustered architecture.
Sylvia-IoT embodies the combination of both MVC and microservices design π.
File Content Arrangement
Each rs file is structured in the following way, with blank lines separating each section:
```
use rust_builtin_modules;
use 3rd_party_modules;
use sylvia_iot_modules;
use crate_modules;

pub struct PubStructEnums {}
struct PrvStructEnums {}

pub const PUB_CONSTANTS;
const PRV_CONSTANTS;

pub pub_static_vars;
static prv_static_vars;

impl PubStructEnums {}
pub fn pub_funcs() {}
impl PrvStructEnums {}
fn prv_funcs() {}
```
The general order is as follows:
- Using modules
- Structures
- Constants
- Variables
- Functions (including structure function implementations)
Within each section, `pub` items come before private ones.
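To make the template concrete, here is a compilable sketch of a file laid out in that order. All names here are invented for illustration; they are not part of Sylvia-IoT:

```rust
// Using modules: built-in modules first (third-party, sylvia-iot, and crate
// modules would follow in that order).
use std::collections::HashMap;

// Structures: `pub` before private.
pub struct Device {
    pub device_id: String,
}

struct Registry {
    devices: HashMap<String, Device>,
}

// Constants: `pub` before private.
pub const MAX_DEVICES: usize = 1024;
const DEFAULT_CAPACITY: usize = 16;

// Static variables: `pub` before private.
pub static APP_NAME: &str = "example";
static COMPONENT: &str = "registry";

// Functions (including structure implementations): `pub` before private.
impl Device {
    pub fn new(device_id: &str) -> Self {
        Device {
            device_id: device_id.to_string(),
        }
    }
}

pub fn registry_len(registry: &Registry) -> usize {
    registry.devices.len()
}

impl Registry {
    fn with_default_capacity() -> Self {
        Registry {
            devices: HashMap::with_capacity(DEFAULT_CAPACITY),
        }
    }
}

fn component_name() -> &'static str {
    COMPONENT
}
```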
Model
The Model layer must provide a unified struct and trait interface. In the design philosophy of Sylvia-IoT, "plug-and-play" is a concept that is highly valued. Users should be able to choose appropriate implementations in different scenarios.
Database Design
When providing CRUD operations, the following order must be followed:
- count
- list
- get
- add
- upsert
- update
- del
Some points to note:
- count and list should provide consistent parameters so that the API and UI can call count and list in a consistent manner.
- Logger should not be used in the `model`. Errors should be returned to the upper layer, which prints the messages.
  - When multiple APIs call the same `model`, errors printed from the model cannot identify which caller triggered them.
- When data cannot be retrieved, return `None` or an empty `Vec`, not an `Error`.
- Any database that can fulfill the "complex query" condition should be implementable behind the same trait interface.
  - SQL and MongoDB meet this requirement.
  - Redis cannot be designed in the form of a database.
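The rules above can be sketched as a trait plus a tiny in-memory implementation. This is a hypothetical sketch, not the actual Sylvia-IoT interface (the real traits are richer and asynchronous), but it shows the required function order, the shared count/list conditions, and the "missing data is `None`, not an error" rule:

```rust
use std::cell::RefCell;
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
pub struct Device {
    pub device_id: String,
    pub name: String,
}

/// Conditions shared by `count` and `list` so callers use consistent parameters.
#[derive(Default)]
pub struct ListQueryCond {
    pub name_contains: Option<String>,
}

/// Functions appear in the required order: count, list, get, add, upsert,
/// update, del. Errors are plain values returned to the caller; the model
/// itself never logs.
pub trait DeviceModel {
    fn count(&self, cond: &ListQueryCond) -> Result<u64, String>;
    fn list(&self, cond: &ListQueryCond) -> Result<Vec<Device>, String>;
    /// Missing data is `Ok(None)`, not an `Err`.
    fn get(&self, device_id: &str) -> Result<Option<Device>, String>;
    fn add(&self, device: &Device) -> Result<(), String>;
    fn upsert(&self, device: &Device) -> Result<(), String>;
    fn update(&self, device_id: &str, name: &str) -> Result<(), String>;
    fn del(&self, device_id: &str) -> Result<(), String>;
}

/// Minimal in-memory implementation to show the contract.
#[derive(Default)]
pub struct MemModel {
    devices: RefCell<HashMap<String, Device>>,
}

impl DeviceModel for MemModel {
    fn count(&self, cond: &ListQueryCond) -> Result<u64, String> {
        Ok(self.list(cond)?.len() as u64)
    }
    fn list(&self, cond: &ListQueryCond) -> Result<Vec<Device>, String> {
        Ok(self
            .devices
            .borrow()
            .values()
            .filter(|d| match &cond.name_contains {
                Some(s) => d.name.contains(s.as_str()),
                None => true,
            })
            .cloned()
            .collect())
    }
    fn get(&self, device_id: &str) -> Result<Option<Device>, String> {
        Ok(self.devices.borrow().get(device_id).cloned())
    }
    fn add(&self, device: &Device) -> Result<(), String> {
        let mut map = self.devices.borrow_mut();
        if map.contains_key(&device.device_id) {
            return Err("duplicate device_id".to_string());
        }
        map.insert(device.device_id.clone(), device.clone());
        Ok(())
    }
    fn upsert(&self, device: &Device) -> Result<(), String> {
        self.devices
            .borrow_mut()
            .insert(device.device_id.clone(), device.clone());
        Ok(())
    }
    fn update(&self, device_id: &str, name: &str) -> Result<(), String> {
        match self.devices.borrow_mut().get_mut(device_id) {
            Some(d) => {
                d.name = name.to_string();
                Ok(())
            }
            None => Err("not found".to_string()),
        }
    }
    fn del(&self, device_id: &str) -> Result<(), String> {
        self.devices.borrow_mut().remove(device_id);
        Ok(())
    }
}
```

Because the trait hides the engine, a MongoDB-backed or SQL-backed implementation can replace `MemModel` without changing the callers.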
Cache Design
- Any key-value store that can fulfill low-complexity reads and writes should be implementable behind the same trait interface.
  - Redis and language-specific maps meet this requirement.
  - SQL and MongoDB can also qualify by querying on a single condition. Using SQL or MongoDB for the cache implementation is allowed when the system does not want to install too many different tools.
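A minimal sketch of such a cache trait, with a language-specific map as the backing store (the trait and names are illustrative, not the actual Sylvia-IoT interface):

```rust
use std::cell::RefCell;
use std::collections::HashMap;

/// Hypothetical low-complexity cache interface: single-key reads and writes
/// only, so any key-value store (Redis, an in-process map, or even SQL/MongoDB
/// queried by a single condition) can implement it.
pub trait Cache {
    fn get(&self, key: &str) -> Option<String>;
    fn set(&self, key: &str, value: &str);
    fn del(&self, key: &str);
}

/// Language-specific map implementation.
#[derive(Default)]
pub struct MapCache {
    entries: RefCell<HashMap<String, String>>,
}

impl Cache for MapCache {
    fn get(&self, key: &str) -> Option<String> {
        self.entries.borrow().get(key).cloned()
    }
    fn set(&self, key: &str, value: &str) {
        self.entries
            .borrow_mut()
            .insert(key.to_string(), value.to_string());
    }
    fn del(&self, key: &str) {
        self.entries.borrow_mut().remove(key);
    }
}
```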
Routes (HTTP API)
In this section, the documentation and rules for implementing APIs are provided.
Verb Order
- POST
- GET /count
- GET /list
- GET
- PUT
- PATCH
- DELETE
Path
```
/[project]/api/v[version]/[function]
/[project]/api/v[version]/[function]/[op]
/[project]/api/v[version]/[function]/{id}
```
There is a potential ambiguity between `[op]` and `{id}`: the former represents a fixed action, while the latter represents a variable object ID. When designing IDs, it is essential to avoid conflicts with the names of actions.

When mounting routes using axum, the fixed `[op]` must be placed before the variable `{id}`.
For example, let's consider the Broker's Device API:
- Device APIs
- POST /broker/api/v1/device Create device
- POST /broker/api/v1/device/bulk Bulk creating devices
- POST /broker/api/v1/device/bulk-delete Bulk deleting devices
- GET /broker/api/v1/device/count Device count
- GET /broker/api/v1/device/list Device list
- GET /broker/api/v1/device/{deviceId} Get device information
Here, you can see that the POST method handles creating a single device, bulk creating devices, and bulk deleting devices. The `bulk`, `bulk-delete`, `count`, and `list` segments are the previously mentioned `[op]`.
The design of device IDs should avoid conflicts with count and list.
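The matching rule above can be sketched as a small dispatcher. This is purely illustrative (the real routing is done by axum), but it shows why the reserved segments must be checked first:

```rust
/// Resolves the last path segment of GET /broker/api/v1/device/... to a handler
/// name. Fixed [op] segments are checked before falling through to the variable
/// {deviceId} capture, which is why device IDs must never collide with the
/// action names "count" or "list".
fn dispatch_get_device(segment: &str) -> &'static str {
    match segment {
        "count" => "get_device_count",
        "list" => "get_device_list",
        _ => "get_device", // anything else is treated as a {deviceId}
    }
}
```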
Function Naming
The functions in `api.rs` are named as follows:

```
fn [method]_[function]_[op]() {}
```
Continuing with the previous device API example, the functions would be named like this:
```rust
fn post_device() {}
fn post_device_bulk() {}
fn post_device_bulk_del() {}
fn get_device_count() {}
fn get_device_list() {}
fn get_device() {}
```
Request and Response Naming
Path variables, queries, and request bodies are defined in `request.rs`, while response bodies are defined in `response.rs`. The naming convention is as follows (pay attention to capitalization):

```
struct [Id]Path {}
struct [Method][Function]Body {}
struct Get[Function]Query {}
```
For example:
```rust
struct DeviceIdPath {}       // /device/{deviceId}
struct PostDeviceBody {}
struct GetDeviceListQuery {}
```
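How the convention composes names can be sketched with a small helper (a hypothetical illustration, not part of Sylvia-IoT, which writes these names by hand):

```rust
/// Capitalizes the first letter of a word, e.g. "device" becomes "Device".
fn capitalize(word: &str) -> String {
    let mut chars = word.chars();
    match chars.next() {
        Some(first) => first.to_uppercase().collect::<String>() + chars.as_str(),
        None => String::new(),
    }
}

/// Builds a request-body struct name following the [Method][Function]Body rule.
fn body_struct_name(method: &str, function: &str) -> String {
    format!("{}{}Body", capitalize(method), capitalize(function))
}
```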
Writing Tests
Sylvia-IoT adopts the BDD (Behavior-Driven Development) approach for writing integration tests, and the chosen testing framework, `laboratory`, is modeled after Mocha.
This section will focus on the principles and techniques for writing tests for libs, models, and routes.
TestState
The `TestState` structure is used as a parameter for `SpecContext()`. It keeps track of several variables:
- Variables that live for a long time and only need to be initialized once or a very few times, such as `runtime` and `mongodb`.
- Resources that need to be released in `after`. Since test cases may exit abruptly, it is essential to release resources in `after`.
libs
- Simple functions can be tested directly for their inputs and outputs.
- Before testing, ensure to start the necessary infrastructure, such as RabbitMQ, EMQX, etc.
- For more complex scenarios that require services to be set up, you can create the services (e.g., queue connections) in `before` and release them in `after`.
models
- Before testing, make sure to start MongoDB, Redis, and other databases.
- The test order should be R, C, U, D:
  - R: Use `mongodb`, `sqlx`, or other native packages to create a test dataset, then test the results of the model's get, count, and list functions.
  - C: Use the model's add, upsert, or other functions to create data, and validate its correctness using get.
  - U: Use the model's add, upsert, or other functions to create a test dataset, then use update to modify the data, and finally validate the result using get.
  - D: Use the model's add, upsert, or other functions to create a test dataset, then use delete to delete the data, and finally validate the result using get.
  - Testing the R functionalities first allows the C, U, D test cases to be written with unified code and to check whether the same logic yields the same outcome on each database engine. When introducing new engines, only minimal test code then needs to be written.
- Use native packages for deletion in `after`, because you cannot guarantee that the D-related functionalities have been correctly implemented and tested before that point.
routes
- Although you can use axum's `TestServer::new()` as a virtual service, services required by middleware or API bridges need to be started as Tokio tasks.
- You can use the model trait interfaces to initialize test datasets and to validate data after API requests.
- You can use the model's delete functions to remove test data in `after`.
Cross-Platform Compilation
Sylvia-IoT is primarily developed for the x86-64 Linux platform. However, thanks to Rust's inherent cross-platform capabilities, Sylvia-IoT can also be compiled into executable binaries for different platforms. This chapter will introduce the compilation process for several platforms that the author has tested.
The compiled executable should be able to run on compatible environments. For example, a Windows 10 executable should also be executable on Windows 7 or Windows 11.
The compilation environment is based on Ubuntu-22.04.
Windows 10 64-bit
```shell
rustup target add x86_64-pc-windows-gnu
rustup toolchain install stable-x86_64-pc-windows-gnu
sudo apt -y install mingw-w64
echo -e "[target.x86_64-pc-windows-gnu]\nlinker = \"/usr/bin/x86_64-w64-mingw32-gcc\"\nar = \"/usr/bin/x86_64-w64-mingw32-ar\"\n" >> ~/.cargo/config
cargo build --target=x86_64-pc-windows-gnu -p sylvia-iot-coremgr
```
Raspberry Pi OS 64-bit
```shell
rustup target add aarch64-unknown-linux-gnu
sudo apt -y install gcc-aarch64-linux-gnu
echo -e "[target.aarch64-unknown-linux-gnu]\nlinker = \"/usr/bin/aarch64-linux-gnu-gcc\"\n" >> ~/.cargo/config
cargo build --target=aarch64-unknown-linux-gnu -p sylvia-iot-coremgr
```
Raspberry Pi OS 32-bit
```shell
rustup target add armv7-unknown-linux-gnueabihf
sudo apt -y install gcc-arm-linux-gnueabihf
echo -e "[target.armv7-unknown-linux-gnueabihf]\nlinker = \"arm-linux-gnueabihf-gcc\"\n" >> ~/.cargo/config
cargo build --target=armv7-unknown-linux-gnueabihf -p sylvia-iot-coremgr
```
Appendix
Chapter Contents:
- Sources of the resources used in this document.
- Supplementary projects.
Data Sources
Icons
The icons used in this document are sourced from the following locations:
- draw.io
- SVG Repo
- EMQX
- Since the EMQX icon is not available on SVG Repo, the one used in the architecture diagram is copied from Docker Hub.
If there is any copyright infringement, please contact me.
Supplementary Projects
- sylvia-router
- A basic router that integrates the auth/broker/coremgr/data components.
- Supports multiple WAN interfaces and a single LAN bridge.
- (Optional) Supports WiFi WAN and WiFi LAN.
- stress-simple
- A simple stress program for testing the forwarding speed of the Broker.
- Provides latency data for maximum, minimum, average, and P50/P80/P90/P95/P98/P99.
- sylvia-iot-examples
- Contains applications and network examples implemented using the SDK.
- lora-ifroglab
- iFrogLab LoRa USB Dongle
- Implements corresponding network services and communicates directly with device endpoints.
- app-demo: Receives sensor data from the lora-ifroglab devices and displays temperature, humidity, RSSI, etc.
- sylvia-iot-simple-ui
- Provides a simple Sylvia-IoT UI.
- coremgr-cli provides complete functionality, and the UI provides necessary operational functions based on the screen layout.
- In addition to auth/broker/coremgr/data, it also integrates router and examples.
- sylvia-iot-go
- Components implemented in Go.
- Includes general-mq, sdk, etc.
- sylvia-iot-node
- Components implemented in Node.js.
- Includes general-mq, sdk, etc.
- sylvia-iot-deployment
- Provides deployment solutions, such as K8S, and more.