We followed the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines whenever applicable [31]. We performed additional tests to evaluate FungiQuant performance in the presence of background human DNA. We included seven template conditions: plasmid standards alone and plasmid standards with 0.5 ng, 1 ng, 5 ng, and 10 ng of human DNA per reaction in 10 μl reactions, as well as plasmid standards alone and plasmid standards with 1 ng human DNA in 5 μl reactions. For each condition, we performed three qPCR runs to evaluate reproducibility. In each run, three replicate standard curves were tested across the 384-well plate to assess repeatability. Details of the data analysis can be found in Additional file 1: Methodological Details.
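As a rough illustration (not the authors' actual analysis code), the standard-curve metrics that MIQE asks to be reported, amplification efficiency and R², can be computed from Cq values as follows; the copy numbers and Cq values below are invented for the example:

```python
# Minimal sketch of qPCR standard-curve analysis, assuming a 10-fold
# dilution series of plasmid standards. All numbers are hypothetical.
import numpy as np

copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6])        # plasmid copies/reaction
cq = np.array([33.1, 29.8, 26.4, 23.1, 19.7, 16.4])       # measured Cq values

x = np.log10(copies)
slope, intercept = np.polyfit(x, cq, 1)                    # Cq = slope*log10(copies) + b
r = np.corrcoef(x, cq)[0, 1]
efficiency = 10 ** (-1.0 / slope) - 1.0                    # standard qPCR efficiency formula

print(f"slope = {slope:.3f}, R^2 = {r**2:.4f}, efficiency = {efficiency:.1%}")
# A slope near -3.32 corresponds to ~100% amplification efficiency.
```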
This work was supported by the National Institutes of Health (R01AI087409-01A1, R15DE021194-01), the Department of Defense (W81XWH1010870), the TGen Foundation, the Northern Arizona University Technology and Research Initiative Fund (TRIF), and the Cowden Endowment in Microbiology at Northern Arizona University. We thank Tania Contente-Cuomo, Jordan L. Buchhagen, and Bridget McDermott at the Translational Genomics Research Institute for assistance with the real-time PCR portion of the work presented in this manuscript.
Traditional machine vision-based plant disease and pest detection typically relies on conventional image processing algorithms, or on manually designed features combined with classifiers [2]. Methods of this kind usually exploit the distinct properties of plant diseases and pests to design the imaging scheme, choosing an appropriate light source and shooting angle to obtain images with uniform illumination. Although a carefully constructed imaging scheme can greatly reduce the difficulty of classical algorithm design, it also increases the application cost. At the same time, it is often unrealistic to expect classical algorithms to completely eliminate the impact of scene changes on recognition results in a natural environment [3]. In real, complex natural environments, plant disease and pest detection faces many challenges: small differences between the lesion area and the background, low contrast, large variation in the scale and type of lesion areas, and considerable noise in lesion images. There are also many disturbances when collecting images of plant diseases and pests under natural light. Under such conditions, traditional methods often perform poorly and struggle to achieve good detection results.
The basic pipeline of a two-stage detection network (Faster R-CNN) is to first obtain the feature map of the input image through the backbone network, then compute anchor-box confidences with the region proposal network (RPN) to generate proposals. The feature map of each proposal region is then passed through ROI pooling into the detection head, which refines the initial results to produce the final locations and classifications of the lesions. Accordingly, methods tailored to plant disease and pest detection typically modify the backbone or its feature maps, the anchor ratios, the ROI pooling, or the loss function. In 2017, Fuentes et al. [59] first used Faster R-CNN to localize tomato diseases and pests directly; combined with deep feature extractors such as VGG-Net and ResNet, the mAP reached 85.98% on a dataset containing 5000 images of tomato diseases and pests in 9 categories. In 2019, Ozguven et al. [60] proposed a Faster R-CNN structure for automatic detection of beet leaf spot disease by changing the parameters of the CNN model; with 155 images used for training and testing, the overall correct classification rate was 95.48%. Zhou et al. [61] presented a fast rice disease detection method based on the fusion of FCM-KM and Faster R-CNN. On 3010 images, the detection accuracy and per-image time for rice blast, bacterial blight, and sheath blight were 96.71%/0.65 s, 97.53%/0.82 s, and 98.26%/0.53 s, respectively. Xie et al. [62] proposed a Faster DR-IACNN model based on a self-built grape leaf disease dataset (GLDD) and the Faster R-CNN detection algorithm, introducing the Inception-v1 module, the Inception-ResNet-v2 module, and SE blocks. The proposed model achieved stronger feature extraction ability, with 81.1% mAP and a detection speed of 15.01 FPS. Work on two-stage detection networks has concentrated on improving detection speed to make detection systems more real-time and practical, but compared with single-stage networks they remain less streamlined, and their inference is still not fast enough.
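As a concrete, hedged illustration of this two-stage pipeline (not the exact models used in the studies above), torchvision's Faster R-CNN implementation exposes the backbone, RPN, ROI pooling, and box head described here; the class count below is hypothetical:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 10  # e.g. 9 disease/pest categories + background (hypothetical)

# Faster R-CNN with a ResNet-50 FPN backbone, pretrained on COCO.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box head so the second stage classifies our lesion categories
# (the new head is untrained; fine-tuning on labeled data would follow).
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
with torch.no_grad():
    image = torch.rand(3, 600, 800)   # stand-in for a leaf photo
    pred = model([image])[0]          # dict with 'boxes', 'labels', 'scores'
print(pred["boxes"].shape, pred["scores"].shape)
```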
Compared with a traditional convolutional neural network, SSD uses VGG16 as the trunk of the network and adds extra layers forming a pyramid of feature maps, so that predictions are made from features at several scales. Singh et al. [63] built the PlantDoc dataset for plant disease detection; since the application needed to run in real time on a mobile CPU, they built an application based on MobileNets and SSD to keep the model parameters small. Sun et al. [64] presented an instance detection method with multi-scale feature fusion based on a convolutional neural network, improving on SSD to detect maize leaf blight against complex backgrounds. The method combined data preprocessing, feature fusion, feature sharing, disease detection, and other steps. The mAP of the new model rose from 71.80% (original SSD) to 91.83%, and its FPS improved from 24 to 28.4, reaching the standard for real-time detection.
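The multi-scale prediction idea can be tried directly with torchvision's SSD300/VGG16 implementation. This is a generic inference sketch, not the PlantDoc or maize models above, and the 0.5 score threshold is an arbitrary choice:

```python
import torch
import torchvision

# SSD300 with the VGG16 trunk described above, pretrained on COCO.
# Detections are predicted from feature maps at several scales internally.
model = torchvision.models.detection.ssd300_vgg16(weights="DEFAULT")
model.eval()

with torch.no_grad():
    image = torch.rand(3, 300, 300)   # SSD300 resizes inputs to ~300x300
    out = model([image])[0]

# Keep only confident detections.
keep = out["scores"] > 0.5
print(out["boxes"][keep], out["labels"][keep])
```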
YOLO treats detection as a regression problem, using global image information to directly predict bounding boxes and object categories, achieving end-to-end detection with a single CNN. YOLO can be optimized globally and greatly improves detection speed while maintaining high accuracy. Prakruti et al. [65] presented a method to detect pests and diseases in images captured under uncontrolled conditions in tea gardens; using YOLOv3, they achieved about 86% mAP at 50% IOU while keeping the system real-time. Zhang et al. [66] combined spatial pyramid pooling with an improved YOLOv3, implementing deconvolution as a combination of up-sampling and convolution, which lets the algorithm effectively detect small crop pest samples in the image and mitigates the relatively low recognition accuracy caused by the diversity of crop pest poses and scales. Average recognition accuracy reached 88.07% on 20 classes of pests collected in real scenes.
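To make the "detection as regression" formulation concrete, here is a toy decoder for a YOLO-style output tensor; the grid size, box parameterization, and class count are illustrative simplifications, not YOLOv3's exact head:

```python
import torch

# Toy YOLO-style output: each of S*S grid cells regresses one box
# (x, y, w, h, objectness) plus C class scores, all in a single tensor.
S, C = 7, 20
pred = torch.rand(S, S, 5 + C)        # stand-in for a network's output

obj = pred[..., 4]                    # objectness per cell
best = obj.argmax()                   # most confident cell (flattened index)
i, j = divmod(best.item(), S)
x, y, w, h = pred[i, j, :4].tolist()

# Convert cell-relative (x, y) offsets to image-relative center coordinates.
cx, cy = (j + x) / S, (i + y) / S
cls = pred[i, j, 5:].argmax().item()
print(f"cell ({i},{j}): center=({cx:.2f},{cy:.2f}), "
      f"size=({w:.2f},{h:.2f}), class={cls}")
```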
The breakthroughs achieved in existing studies are impressive, but there is still a gap between the complexity of the disease and pest images used in these studies and what real-time field detection on mobile devices requires. Subsequent studies will need to seek breakthroughs on larger, more complex, and more realistic datasets.
Compared with traditional methods, deep learning algorithms give better results, but their computational complexity is also higher. To guarantee detection accuracy, the model must fully learn the characteristics of the images, which increases the computational load, inevitably slows detection, and fails to meet real-time requirements. To guarantee detection speed, the amount of computation usually has to be reduced, which can cause insufficient training and lead to false or missed detections. It is therefore important to design efficient algorithms that balance detection accuracy and detection speed.
Plant disease and pest detection methods based on deep learning involve three main stages in agricultural applications: data labeling, model training, and model inference. In real-time agricultural applications, model inference matters most, yet most current detection methods focus on recognition accuracy and pay little attention to inference efficiency. In reference [108], to make the model computation efficient enough for practical agricultural needs, a depthwise separable convolution structure was introduced for plant leaf disease detection. Several models were trained and tested: the classification accuracy of Reduced MobileNet was 98.34%, with 29 times fewer parameters than VGG and 6 times fewer than MobileNet. This represents an effective trade-off between latency and accuracy, suitable for real-time crop disease diagnosis on resource-constrained mobile devices.
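A minimal sketch of the depthwise separable convolution idea behind MobileNet-style models, showing where the parameter savings come from (the layer sizes are arbitrary, not those of Reduced MobileNet):

```python
import torch.nn as nn

def count_params(m):
    return sum(p.numel() for p in m.parameters())

cin, cout, k = 64, 128, 3

# Standard convolution: one k x k kernel per (input, output) channel pair.
standard = nn.Conv2d(cin, cout, k, padding=1)

# Depthwise separable convolution: a per-channel spatial filter followed by
# a 1x1 pointwise convolution that mixes channels.
separable = nn.Sequential(
    nn.Conv2d(cin, cin, k, padding=1, groups=cin),  # depthwise
    nn.Conv2d(cin, cout, 1),                        # pointwise
)

print(count_params(standard), count_params(separable))
# 73,856 vs 8,960 parameters: roughly an 8x reduction for this one layer.
```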
In addition, image databases of different kinds of plant diseases and pests in real natural environments are still largely lacking. Future research should make full use of data acquisition platforms such as portable field spore auto-capture instruments, unmanned aerial vehicle photography systems, and agricultural Internet-of-Things monitoring equipment, which can identify diseases across large, well-covered farmland areas and compensate for the lack of randomness in the image samples of previous studies. This would also ensure the comprehensiveness and accuracy of datasets and improve the generality of algorithms.
Mechanical analysis of movement plays an important role in clinical management of neurological and orthopedic conditions. There has been increasing interest in performing movement analysis in real-time, to provide immediate feedback to both therapist and patient. However, such work to date has been limited to single-joint kinematics and kinetics. Here we present a software system, named Human Body Model (HBM), to compute joint kinematics and kinetics for a full body model with 44 degrees of freedom, in real-time, and to estimate length changes and forces in 300 muscle elements. HBM was used to analyze lower extremity function during gait in 12 able-bodied subjects. Processing speed exceeded 120 samples per second on standard PC hardware. Joint angles and moments were consistent within the group, and consistent with other studies in the literature. Estimated muscle force patterns were consistent among subjects and agreed qualitatively with electromyography, to the extent that can be expected from a biomechanical model. The real-time analysis was integrated into the D-Flow system for development of custom real-time feedback applications and into the Gait Real-time Analysis Interactive Lab system for gait analysis and gait retraining.
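As a loose illustration of the kind of per-frame computation such a system performs (this is not HBM's algorithm, and the marker coordinates are invented), a single joint angle can be computed from three marker positions:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by markers a-b-c, e.g. hip-knee-ankle."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical 3D marker positions (meters) for one frame of gait data.
hip = np.array([0.0, 0.0, 1.0])
knee = np.array([0.05, 0.0, 0.55])
ankle = np.array([0.0, 0.0, 0.10])
print(f"included knee angle: {joint_angle(hip, knee, ankle):.1f} deg")
```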