Livestock researchers are exploiting genetic, genomic, and other biological data to improve the safety and quality of meat products consumed worldwide. These data are voluminous, dispersed across the world, and often poorly linked to one another. Computational methods must be continually developed and implemented to filter, connect, store, analyze, share, and contextualize these ever-increasing data to address specific livestock research problems. This work spans many scales, from a single nucleotide polymorphism in one organism to comparisons of entire mammalian genomes across species. My work, in part, involves building and maintaining data processing pipelines that begin with "raw" data and end with processed, analyzed information accessible in a database or knowledgebase, where it can be connected with information from resources worldwide. These pipelines must often be engineered de novo because many livestock research questions are unique.
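The raw-to-knowledgebase pipeline described above can be sketched minimally as a filter stage feeding a storage stage. This is an illustrative toy, not an actual pipeline from this work: the record fields, the validity rule, and the table schema are all hypothetical assumptions.

```python
import sqlite3

# Hypothetical "raw" SNP records: (animal_id, position, genotype).
# Names and values are illustrative only.
raw = [
    ("cow1", 1042, "AG"),
    ("cow1", -5, "GG"),   # malformed position; the filter stage drops it
    ("cow2", 1042, "AA"),
]

def filter_records(records):
    """Filter stage: keep only records with a valid (positive) position."""
    return [r for r in records if r[1] > 0]

def store(records, conn):
    """Store stage: persist processed records in a queryable database,
    where they can later be linked to other resources."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS snp (animal TEXT, pos INTEGER, genotype TEXT)"
    )
    conn.executemany("INSERT INTO snp VALUES (?, ?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
store(filter_records(raw), conn)
count = conn.execute("SELECT COUNT(*) FROM snp").fetchone()[0]
print(count)  # 2 of the 3 raw records survive filtering
```

Real pipelines of this kind add many more stages (annotation, cross-referencing, quality scoring), but each stage follows this same shape: consume records, transform or validate them, pass them on.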
The computational methods I am concentrating on to improve our ability to acquire, process, store, manage, analyze, and disseminate livestock data and knowledge involve designing, building, using, and maintaining ontologies (controlled vocabularies of data and facts), knowledgebases, and expert systems. Ontologies organize facts about knowledge domains (e.g., phenotypes, metabolic pathways). An expert system is built by adding logical rules that connect the facts in the ontology, so that computerized reasoners can apply those rules to infer new relationships between the facts in the knowledgebase.
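The rule-plus-reasoner idea can be illustrated with a tiny forward-chaining sketch: facts are subject–relation–object triples, and a single hypothetical rule lets the reasoner derive a new relationship not stated explicitly. The gene name, relations, and rule here are invented for illustration and do not come from any particular livestock ontology.

```python
# Ontology-style facts as (subject, relation, object) triples.
# All identifiers are hypothetical examples.
facts = {
    ("MSTN", "involved_in", "muscle_growth"),
    ("muscle_growth", "affects", "meat_yield"),
}

def infer(facts):
    """Forward-chaining reasoner for one illustrative rule:
    if X involved_in P and P affects T, then X influences T.
    Iterates until no new triples are produced, then returns
    only the inferred (not originally asserted) triples."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        new = {
            (x, "influences", t)
            for (x, r1, p) in known if r1 == "involved_in"
            for (p2, r2, t) in known if r2 == "affects" and p2 == p
        }
        if not new <= known:
            known |= new
            changed = True
    return known - facts

print(infer(facts))  # {('MSTN', 'influences', 'meat_yield')}
```

Production systems express such rules in formal languages (e.g., OWL axioms processed by description-logic reasoners) rather than hand-written loops, but the principle is the same: explicit rules over curated facts yield inferences no single fact states on its own.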