Virtual and real-life agents that receive and execute tasks and missions in the Core System
An agent is a unit that can receive and perform tasks and missions from other agents and services in the Core System. Agents can also provide the Core System with information to be shared between agents and services. Agents are categorized into four levels based on their capability to act autonomously.
Level 1 agents are usually sensors, recognized by their capability to gather and publish data while lacking the capability to perform actions.
Level 2 agents have Level 1 capabilities and can also act on tasks that they are given. They can plan the path required to complete a task, and thus operate with some degree of autonomy.
Level 3 agents have the same capabilities as Level 2 agents, but are combined with a cloud service that can extract information about missions and tasks from the agents. This makes it possible to coordinate agents through a subsystem: a user submits a defined mission (which agent performs the task, and a mission goal), and the subsystem sends the tasks to the designated agent, monitors the mission as it is executed, and determines when, or whether, the mission was executed successfully.
Level 4 agents have the same capabilities as Level 2 agents and, like Level 3 agents, are combined with a cloud service. Unlike Level 3 agents, however, the missions they receive have a defined mission goal but no predefined tasks. Where Level 3 agents are given specific tasks to reach the goal, for Level 4 agents it is the subsystem that decides and plans which tasks should be executed for the mission to reach its goal. A resource API is implemented that lets the cloud service query information about agents, such as availability, capabilities, and agent type, which affects the decision of which agent is suitable to perform a task when planning a mission.
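The four-level taxonomy and the kind of agent properties a resource API could expose can be sketched as follows. This is a minimal illustration only: the class, field, and function names are assumptions for the example, not the Core System's actual API.

```python
from dataclasses import dataclass, field
from enum import IntEnum

class AgentLevel(IntEnum):
    """The four capability levels described above."""
    SENSOR = 1          # gathers and publishes data only
    TASK_EXECUTOR = 2   # can also act on and plan given tasks
    COORDINATED = 3     # cloud service dispatches user-defined tasks
    AUTONOMOUS = 4      # cloud service plans tasks from a mission goal

@dataclass
class AgentInfo:
    """The kind of properties a resource API could expose for mission planning."""
    name: str
    level: AgentLevel
    agent_type: str
    available: bool
    capabilities: list[str] = field(default_factory=list)

def can_receive_tasks(agent: AgentInfo) -> bool:
    # Level 1 agents only publish data; Levels 2-4 can act on tasks.
    return agent.level >= AgentLevel.TASK_EXECUTOR

sensor = AgentInfo("camera-1", AgentLevel.SENSOR, "sensor", True, ["detect"])
rover = AgentInfo("rover-7", AgentLevel.AUTONOMOUS, "rover", True, ["move", "search"])
print(can_receive_tasks(sensor), can_receive_tasks(rover))  # False True
```

A planning subsystem for Level 4 missions would filter agents on properties like `available` and `capabilities` when deciding which agent should perform a task.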
A central aspect of the Core System is the agents, both simulated and real, that are integrated into the system. To support users, there are code examples demonstrating how simple agents can be integrated into the Arena map and the integration map. The agents are classified as simple because they can perform simple tasks but have no logic beyond that. The code examples are available in the GitHub repository linked below.
Anomaly Detector Agent
The Anomaly Detector Agent can be used for surveillance purposes and, for example, in search and rescue scenarios. In the Arena map, the Anomaly Detector Agent makes it possible to continuously search an area for pre-selected Point of Interest (POI) types. This is done by giving the Anomaly Detector Agent the task of searching for specific POI types in a selected area. If a POI of a selected type is detected, the Anomaly Detector Agent publishes an anomaly to MQTT, and a warning becomes visible in the Arena map, making it possible for an operator to act on the warning.
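The publishing step above could look roughly like the sketch below, which builds a JSON anomaly report. The topic name and message fields here are illustrative assumptions, not the actual WARA-PS MQTT schema.

```python
import json
from datetime import datetime, timezone

def build_anomaly_message(agent_name: str, poi_type: str,
                          latitude: float, longitude: float) -> str:
    """Serialize a hypothetical anomaly report for publishing over MQTT."""
    payload = {
        "agent": agent_name,
        "type": "anomaly",          # assumed message type field
        "poi_type": poi_type,       # the pre-selected POI type that was detected
        "position": {"latitude": latitude, "longitude": longitude},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

msg = build_anomaly_message("anomaly-detector-1", "person", 58.41, 15.62)
# A real agent would then publish `msg` on its anomaly topic with an MQTT
# client such as paho-mqtt, e.g. client.publish(topic, msg).
print(msg)
```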
The WARA-PS Software-In-The-Loop (SITL) Simulator is a simulator, built on ArduPilot, that can act as a UxV, i.e. as a multicopter, helicopter, plane, rover, boat, or submarine, with derivatives. ArduPilot normally runs on physical Flight Control Systems (FCSs) and is COTS, open-source software with a large community; it integrates many FCSs and related electronics such as lidar sensors, GPS receivers, altitude sensors, etc.
Currently, the simulator is used as a development tool for boats and rovers, where validation tests are performed before deploying to real hardware and field tests. The simulator is built on the same software that runs in the WARA-PS Pixhawk-controlled vehicles, such as the Mini-USV and Mini-UGV.
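For reference, ArduPilot's standard SITL launcher, `sim_vehicle.py`, ships with the ArduPilot source tree and is typically started as below; the exact launch scripts and frame choices used by WARA-PS may differ.

```shell
# From a checkout of the ArduPilot source tree:

# Simulate a rover, with the MAVProxy console and a map view
Tools/autotest/sim_vehicle.py -v Rover --console --map

# Boats are simulated with the rover firmware and a boat-style frame
Tools/autotest/sim_vehicle.py -v Rover -f motorboat
```

Validation tests can then be run against the simulated vehicle over MAVLink before the same firmware is deployed to a Pixhawk on real hardware.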
Unmanned Surface Vehicle Drone Operator
One of the assets in the Core System is the Unmanned Surface Vehicle (USV) Drone Operator, a drone operator that can currently be seen on the Arena map as a USV. A drone operator looks like an agent but functions as a coordinator and decision maker: it has access to a number of drones and can receive commands and task requests, and it then decides, currently based on proximity, which drone is most suitable to perform the requested task. The drone deemed most suitable then executes the task. In the future, more parameters for the drone operator's decision can be added.