The conditions of various nuclear power plant facilities are regularly examined through manual inspections. Remote visual inspection is commonly applied and requires engineers to watch lengthy inspection footage and search for anomaly features in it. This is a labor-intensive process, as the anomaly features of interest usually appear only in very short segments of the whole video. An automated anomaly detection system is therefore preferred to reduce the labor cost of the inspection process. Such a system could also provide information that helps make the reasoning behind each detection traceable.

With a well-prepared training data set of the anomaly feature, a convolutional neural network (CNN) can be developed to automatically detect anomaly indications in the inspection video. However, false-positive detections may occur and can be difficult to remove without manual verification. To overcome this problem, we present a new automated video-level anomaly detection framework that uses a latency mechanism to effectively reduce false-positive occurrences and thereby increase detection accuracy. In this framework, a CNN-based anomaly classifier first performs an initial scan for the anomaly type of interest in every region of the sampled frames. A latency mechanism then refines the initial scanning results by flagging a region as an "anomaly" indication only when the CNN detects an "anomaly" in the current frame and in a sequence of preceding consecutive frames of the same region.

We present a case study of crack feature detection in superheater inspection videos to illustrate the performance of the proposed framework. The results show that the latency mechanism effectively removes the false-positive detections seen in the initial scan. As a preliminary exploration of possible formats for reasoning traceability, we build knowledge graphs of the reasoning process in the video-level detection framework to explain why a specific section of the video is flagged as anomalous content by the framework.
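One way such a reasoning knowledge graph could be represented is as a set of subject-predicate-object triples tracing the decision chain from video segment to flagged anomaly. The schema below is an assumed illustration, not the structure used in the paper; the entity names, predicates, and values are hypothetical.

```python
# Hypothetical triples recording why a video section was flagged:
# nodes are entities (a video segment, a frame region, CNN verdicts),
# edges are (subject, predicate, object) statements.
triples = [
    ("segment_A", "contains_region", "region_3"),
    ("region_3", "cnn_detected", "crack"),
    ("region_3", "consecutive_positive_frames", 5),
    ("region_3", "passed_latency_threshold", True),
    ("segment_A", "flagged_as", "anomaly"),
]

def explain(graph, entity):
    """Collect the facts recorded about an entity, giving a traceable
    account of how the flagging decision was reached."""
    return [(p, o) for s, p, o in graph if s == entity]

print(explain(triples, "region_3"))
# → [('cnn_detected', 'crack'), ('consecutive_positive_frames', 5),
#    ('passed_latency_threshold', True)]
```

Querying the graph for a flagged segment or region then yields a human-readable chain of evidence, which is the sense in which the knowledge graph supports reasoning traceability.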