
Research Project: Identifying Genomic Solutions to Improve Efficiency of Swine Production

Location: Genetics and Animal Breeding

Title: Posture detection of sows housed in farrowing crates using composite image models

item MADERIA PACHECO, VERONICA - Universidade De Sao Paulo
item BROWN-BRANDL, TAMI - University Of Nebraska
item SHARMA, RAJ - University Of Nebraska
item DE SOUSA, RAFAEL - Universidade De Sao Paulo
item Rohrer, Gary
item MARTELLO, LUCIANE - Universidade De Sao Paulo

Submitted to: European Conference on Precision Agriculture Proceedings
Publication Type: Proceedings
Publication Acceptance Date: 2/8/2022
Publication Date: N/A
Citation: N/A

Interpretive Summary:

Technical Abstract: Determining changes in sow posture can provide information on the production and health of animals. However, manually evaluating images is extremely time-consuming, as are some standard image processing approaches. Deep learning techniques have the advantage of being more efficient than traditional image processing. However, transitional sow postures such as sitting and kneeling are difficult to capture using RGB images alone. The aim of this study was to compare different image types as input to deep learning models for detecting sow postures. Using Kinect v2 cameras, images were collected from 7 sows housed in farrowing crates. A total of 4,229 images were labeled manually according to posture (standing, kneeling, sitting, ventral recumbency, and lateral recumbency). A deep learning algorithm (AlexNet) was adapted to detect sow postures from 5 types of images: color (CNNrgb model), depth (depth image transformed into grayscale: CNNdepth), and three fused images composed from the color and depth images (CNNblend, CNNdiff, and CNNfcolor). The depth and fused models gave the best results: CNNfcolor achieved 95.5% accuracy, followed by CNNdepth (94.3%), CNNblend (90.3%), and CNNdiff (86.7%), while the CNNrgb model achieved 76.8% accuracy. These results illustrate the improvement in posture classification obtained with depth or fused image methods. Future studies may contribute to the development of faster and more accurate models by using a larger database, evaluating different fusion methods, computational models, systems, and sow breeds, and incorporating additional postures.
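The abstract describes composing color and depth frames into single fused inputs before classification. The proceedings text does not publish the exact compositing formulas, so the sketch below shows plausible fusion operations (weighted blend, absolute difference, and a hypothetical false-color channel substitution) under stated assumptions; the function names and the choice of channel are illustrative, not the authors' method.

```python
import numpy as np

def fuse_images(rgb, depth, method="blend", alpha=0.5):
    """Fuse an RGB frame with a depth map into one 3-channel image.

    The fusion formulas here are illustrative assumptions; the study
    does not publish its exact compositing operations.

    rgb   : (H, W, 3) uint8 color image
    depth : (H, W) raw depth map (e.g., Kinect v2 millimetres)
    """
    # Normalize depth to a 0-255 grayscale range, analogous to the
    # grayscale transform described for the CNNdepth input.
    d = depth.astype(np.float64)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-9) * 255.0
    d3 = np.repeat(d[..., None], 3, axis=2)  # grayscale -> 3 channels

    if method == "blend":    # weighted average of color and depth
        fused = alpha * rgb + (1.0 - alpha) * d3
    elif method == "diff":   # absolute difference between color and depth
        fused = np.abs(rgb.astype(np.float64) - d3)
    elif method == "fcolor": # hypothetical: depth replaces one color channel
        fused = rgb.astype(np.float64).copy()
        fused[..., 0] = d    # assumption: depth into the first channel
    else:
        raise ValueError(f"unknown fusion method: {method}")
    return np.clip(fused, 0, 255).astype(np.uint8)
```

Each variant yields an ordinary 3-channel uint8 image, so the fused result can be fed to a standard RGB-input network such as AlexNet without architectural changes.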