
Research Project: Multi-Dimension Phenotyping to Enhance Prediction of Performance in Swine

Location: Genetics and Animal Breeding

Title: Deep learning-based sow posture classifier using colour and depth images

Author
item PACHECO, VERONICA - Universidade De Sao Paulo
item BROWN-BRANDL, TAMI - University Of Nebraska
item DE SOUSA, RAFAEL - Universidade De Sao Paulo
item Rohrer, Gary
item SHARMA, RAJ - University Of Nebraska
item MARTELLO, LUCIANE - Universidade De Sao Paulo

Submitted to: Smart Agricultural Technology
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: 9/2/2024
Publication Date: 9/12/2024
Citation: Pacheco, V.M., Brown-Brandl, T.M., Vieira de Sousa, R., Rohrer, G.A., Sharma, S.R., Martello, L.S. 2024. Deep learning-based sow posture classifier using colour and depth images. Smart Agricultural Technology. 9. Article 100563. https://doi.org/10.1016/j.atech.2024.100563.
DOI: https://doi.org/10.1016/j.atech.2024.100563

Interpretive Summary: Preferred sow postures and the frequency of postural changes can provide important information about a sow's physiological state and may help the farmer make wise decisions to improve herd productivity. Unfortunately, continuous manual monitoring of sow postures is not feasible in swine production. Video-recorded data may provide a solution if combined with artificial intelligence/machine learning techniques. Machine learning has proven to be an efficient method for interpreting images and can be used in place of manual evaluation or traditional image processing methods. However, the transitional postures of sows, such as sitting and kneeling, are difficult to discern using only conventional visual images, especially from overhead-mounted cameras. The aim of this study was to develop and compare different computer methods to predict sow posture from visual and depth camera images. Using Kinect v.2 cameras, visual and depth images were collected on 9 different sows housed individually in farrowing crates. A total of 26,362 images were labeled manually according to posture (“standing”, “kneeling”, “sitting”, “ventral lying”, and “lateral lying”). Computer algorithms were developed to detect sow postures from these images. The results showed that the models developed with depth images performed best in comparison to the other models. The best algorithm using depth images had an accuracy of 98.3%. The results of this study illustrate an improvement in posture classification using the model developed with depth images. Future studies will attempt to develop more accurate models by using more images and incorporating additional postures.

Technical Abstract: Assessing sow posture is essential for understanding their physiological condition and helping farmers improve herd productivity. Deep learning-based techniques have proven effective for image interpretation, offering a better alternative to traditional image processing methods. However, distinguishing transitional postures such as sitting and kneeling is challenging with only conventional top-view RGB images. This study aimed to develop and compare different deep learning-based sow posture classifiers using different architectures and image types. Using Kinect v.2 cameras, RGB and depth images were collected from 9 sows housed individually in farrowing crates. A total of 26,362 images were manually labelled by posture: “standing”, “kneeling”, “sitting”, “ventral recumbency” and “lateral recumbency”. Different deep learning algorithms were developed to detect sow postures from three types of images: colour (RGB), depth (depth image transformed into greyscale), and fused (colour-depth composite images). Results indicated that the ResNet-18 model presented the best results and that including depth information improved the performance of all models tested. Depth and fused models achieved higher accuracies than the models using only RGB images. The best model used only depth images as input and presented an accuracy of 98.3%. The mean precision and recall values were 97.04% and 97.32%, respectively (F1-score = 97.2%). The study shows improved posture classification using depth images. Future research can improve model accuracy and speed by expanding the database, exploring fused methods and computational models, considering different breeds of sows, and incorporating more postures. These models can be integrated into computer vision systems to automatically characterise sow behaviour.
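The abstract notes that depth frames were "transformed into greyscale" before classification. A minimal sketch of one plausible such transformation is shown below, assuming simple linear min-max scaling of raw Kinect-style depth readings (in millimetres) to 8-bit intensities; the paper does not specify the exact mapping, so the function name, parameters, and the treatment of zero ("no reading") pixels are illustrative assumptions, not the authors' method:

```python
def depth_to_greyscale(depth_mm, d_min=None, d_max=None):
    """Linearly rescale a 2-D grid of raw depth readings (millimetres)
    to 8-bit greyscale intensities in [0, 255].

    Pixels with value <= 0 are treated as "no reading" and mapped to 0.
    d_min/d_max optionally fix the scaling range (e.g. crate floor and
    camera height); otherwise the frame's own valid min/max are used.
    """
    valid = [v for row in depth_mm for v in row if v > 0]
    if not valid:
        return [[0 for _ in row] for row in depth_mm]
    lo = d_min if d_min is not None else min(valid)
    hi = d_max if d_max is not None else max(valid)
    span = (hi - lo) or 1  # avoid division by zero for flat scenes
    return [
        [0 if v <= 0 else max(0, min(255, round((v - lo) * 255 / span)))
         for v in row]
        for row in depth_mm
    ]

# Example: a tiny 2x3 "depth frame" in millimetres (0 = missing pixel)
frame = [[500, 750, 1000],
         [0,   600,  900]]
grey = depth_to_greyscale(frame)  # [[0, 128, 255], [0, 51, 204]]
```

Fixing `d_min`/`d_max` across all frames (rather than per frame) keeps intensities comparable between images, which matters when the greyscale output feeds a classifier such as ResNet-18.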