The danger of adversarial attacks against unprotected Uncrewed Aerial Vehicle
(UAV) agents operating in public is growing. Adopting AI-based techniques, and
more specifically Deep Learning (DL) approaches, to control and guide these UAVs
can be beneficial in terms of performance, but it raises concerns about the
safety of those techniques and their vulnerability to adversarial attacks,
which increase the chance of collision as the agent becomes confused. This
paper proposes an innovative approach based on the explainability of DL methods
to build an efficient detector that will protect these DL schemes and thus the
UAVs adopting them from potential attacks. The agent adopts a Deep
Reinforcement Learning (DRL) scheme for guidance and planning, trained with
Deep Deterministic Policy Gradient (DDPG) and Prioritised Experience Replay
(PER), and it utilises an Artificial Potential Field (APF) to improve training
times and obstacle-avoidance performance. The
adversarial attacks are generated with the Fast Gradient Sign Method (FGSM)
and Basic Iterative Method (BIM) algorithms and reduced the obstacle-course
completion rate from 80% to 35%. A realistic synthetic environment for
explainable UAV DRL-based planning and guidance, including obstacles and
adversarial attacks, is built. Two adversarial attack detectors are proposed.
The first adopts a
Convolutional Neural Network (CNN) architecture and achieves a detection
accuracy of 80%. The second detector is based on a Long Short-Term Memory
(LSTM) network and achieves an accuracy of 91% with much faster computation
times than the CNN-based detector.
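To illustrate the APF idea mentioned above, the following is a minimal sketch (not the paper's implementation): the guidance force is the sum of an attractive term toward the goal and repulsive terms from nearby obstacles. All gain values (`k_att`, `k_rep`, `d0`) are illustrative assumptions.

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Net artificial-potential-field force at `pos` (2-D numpy array).

    Attractive term pulls the agent toward `goal`; each obstacle within the
    influence radius `d0` adds a repulsive term. Gains are illustrative only.
    """
    # Attractive force: gradient of 0.5 * k_att * ||goal - pos||^2
    force = k_att * (goal - pos)
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            # Repulsion grows as the agent approaches the obstacle
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return force
```

The resulting force vector can be used as a heading hint or shaped reward during DRL training, which is one common way APF is combined with learned planners.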
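The two attack generators named in the abstract are standard methods; a minimal sketch of both is shown below, assuming a `grad_fn` callable (hypothetical here) that returns the loss gradient with respect to the input observation.

```python
import numpy as np

def fgsm(x, grad, eps):
    """One-step FGSM: move the input by eps in the sign direction of the
    loss gradient, then clip to the valid [0, 1] observation range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def bim(x, grad_fn, eps, alpha, steps):
    """Basic Iterative Method: repeated small FGSM-style steps of size alpha,
    re-clipped each step to the eps-ball around the original input."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within eps of original
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay in valid range
    return x_adv
```

BIM is simply an iterated, step-size-limited FGSM, which is why it typically degrades the agent more for the same perturbation budget eps.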