Animals enrolled in this feed trial were also fitted with a CowManager ear tag accelerometer.

Observations collected on a single animal over extended observation windows at high sampling frequencies can, however, contain a range of complex temporal patterns: cyclicity, non-stationarity, autocorrelation, etc. Further, when sensors are applied to large heterogeneous groups of animals housed socially in spatially restricted environments, recorded behaviors may also contain complex interdependencies between animals at the dyadic, triadic, clique, and herd levels. Failing to accommodate all of these complex structural and stochastic features in a conventional model-based approach to statistical inference risks returning spurious insights into the underlying behavioral dynamics. Developing such a model for even a single PLF data stream can be challenging. When multiple data streams are considered, however, the logistical challenges presented by model-based analytical frameworks can rapidly compound, creating significant barriers to cross-sensor inferences and thereby impeding researchers from extracting more holistic behavioral insights from increasingly data-rich farm environments. Unsupervised Machine Learning tools may provide a more flexible and forgiving approach to knowledge discovery in the context of large sensor datasets. Such algorithms excel at identifying and characterizing complex nonrandom behavioral patterns lying beneath the stochastic surface of a dataset, while often employing relatively few structural assumptions about the data. Hierarchical clustering-based techniques offer an intuitive and highly adaptable approach to visualizing high-dimensional datasets that is particularly well suited to exploratory data analysis. Indeed, by reducing the complex behavioral signals present in a sensor dataset into a series of discrete clusters, such algorithms may be viewed as an empirical extension of classical ethological techniques.

Discrete data, however, can be challenging to work with in most frequentist and even many Bayesian frameworks. Estimators based on information entropy, on the other hand, are purpose-made to quantify uncertainty in discretely encoded data without knowledge of the underlying distribution, and thus naturally complement hierarchical clustering-based algorithms. In these analyses, data mechanics algorithms were able to recover complex nonstationarity in the order in which cows entered the milking parlor. Some of these changes in queuing patterns could be attributed to the shift to spring pasture access, but other transient and persistent shifts in entry order recovered in these encodings may have been driven by environmental factors that were not experimentally recorded. Entropy-based non-parametric permutation tests were also successful in recovering preliminary evidence of significant nonlinear associations between encodings of entry order patterns and activity patterns recorded using ear tag accelerometers. In this paper we explore how novel ensemble simulation techniques that emulate and adjust for the complex sources of error in PLF data streams may be used to produce more balanced encodings of multi-dimensional behavioral data. We also introduce a new dendrogram pruning algorithm that efficiently repurposes these same ensemble simulations to ensure that the power of hierarchical clustering tools does not exceed the resolution of the sensor. Finally, we demonstrate the utility of information decomposition techniques within our existing non-parametric mutual information testing framework to better facilitate visual characterization of complex behavioral patterns across sensor datasets that might be overlooked in more conventional model-based analyses.
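To make this testing framework concrete, the minimal sketch below illustrates a non-parametric permutation test of mutual information between two discrete encodings of the same cows; the function, label arrays, and cluster counts are hypothetical and do not reproduce the LIT implementation.

```python
# Illustrative sketch (not the LIT API): permutation test of mutual information
# between two discrete cluster encodings of the same 179 cows.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(42)

def mi_permutation_test(labels_a, labels_b, n_perm=5000):
    """Test whether two encodings share more information than expected by chance."""
    observed = mutual_info_score(labels_a, labels_b)
    null = np.empty(n_perm)
    for b in range(n_perm):
        # Permuting one encoding breaks any cow-level association while
        # preserving both marginal cluster-size distributions.
        null[b] = mutual_info_score(labels_a, rng.permutation(labels_b))
    p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p_value

# Hypothetical cluster labels: parlor entry-order encoding vs. activity encoding.
entry_clusters = rng.integers(0, 4, size=179)
activity_clusters = rng.integers(0, 3, size=179)
mi, p = mi_permutation_test(entry_clusters, activity_clusters)
```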

To demonstrate the efficacy of our analytical approach, data was repurposed from a feed trial assessing the impact of an organic fat supplement on cow health and productivity through the first 150 days of lactation. All animal handling and experimental protocols were approved by the Colorado State University Institutional Animal Care and Use Committee. The study ran from January through July 2017 on a USDA Certified Organic dairy in Northern Colorado, enrolling a total of 200 cows over a 1.5-month period into a mixed-parity herd of animals with predominantly Holstein genetics. Cows were maintained in a closed herd in an open-sided freestall barn, stocked at roughly half capacity with respect to both feed bunk spaces and stalls. Cows had free access to an adjacent outdoor dry lot while in their home pen, and beginning in April were moved onto pasture at night to comply with Organic grazing standards. Cows were milked three times a day, with free access to TMR between milkings, and were head-locked each morning to facilitate data collection and daily health checks. For more details on feed trial protocols see Manriquez et al. and Manriquez et al.

In addition to standard production and health assessments, behavioral data was also obtained from several PLF data streams. Milking order, or the sequence in which cows enter the parlor to be milked, is automatically recorded as metadata in all modern RFID-equipped milking systems. Study cows were milked in a DelPro™ rotary parlor. At each morning milking, raw milking logs were exported from the parlor software, and the data were processed to extract the single-file order in which cows entered the rotary. A total of 80 milking order records (26 recorded while cows remained overnight in the freestall barn, and 54 following the transition to overnight access to spring pasture) were used to create discrete encodings of parlor entry patterns via data mechanics clustering.
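As an illustration of this preprocessing step, the sketch below ranks cows by their first recorded entry time within a single morning milking; the column names ("cow_id", "entry_time") are assumptions, as the actual DelPro export schema is not shown here.

```python
# Hypothetical sketch of extracting single-file parlor entry order from one
# morning's raw milking log; column names are assumed, not the DelPro schema.
import pandas as pd

def extract_entry_order(milking_log: pd.DataFrame) -> pd.Series:
    """Return an entry rank (1 = first onto the rotary) for each cow."""
    first_entry = (
        milking_log
        .groupby("cow_id")["entry_time"]  # one row per stall event in the log
        .min()                            # keep each cow's earliest timestamp
        .sort_values()                    # earliest entry first
    )
    return pd.Series(
        range(1, len(first_entry) + 1), index=first_entry.index, name="entry_rank"
    )
```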

The dendrograms summarizing the distribution of cow entry order patterns and subsequent heatmap visualizations will here be subjected to further analysis without modifications to the previously reported encodings. This commercial sensor platform, while designed and optimized for disease and heat detection, also provides hourly time budget estimates for total time engaged in five mutually exclusive discrete behaviors – eating, rumination, non-activity, activity, and high activity. Time budget data was collected on all animals for a contiguous period of 65 days, with the observation window beginning shortly after trial enrollment was completed on February 17th, and ending on April 23rd when the grazing season commenced and cows were moved overnight beyond the range of the receiver antennae. After dropping cows that were removed prematurely from the observation herd due to acute clinical illness, as well as several cows with persistent receiver failure, complete sensor records were available for 179 animals. In order to focus more fully on the logistical challenges of encoding and characterizing the complex multivariate dynamics of this system, we have here chosen to compress this data over the time axis to consider only the overall time budgets of these cows, and will leave explorations of the longitudinal and cyclical complexity of this dataset for future work.

Domain constraints are not, however, the only stochastic feature that must be accommodated when working with time budget data. There is also the measurement error attributable to the sensor itself. Returning to the previous example, suppose that we also know that our rumination records are only accurate to ±1 hr. Is it then still appropriate to give more weight to the one-hour difference between Daisy and Delilah than between Betty and Betsy? Since both observations are within the bounds of error, attempting to enhance the underlying biological signal may instead only succeed in amplifying measurement noise. A closed-form estimator, however, may not be readily generalizable to the wide range of measurement error models encountered with PLF sensors. We therefore propose that a simulation-based approach may offer a more flexible means of accounting for measurement error in dissimilarity estimates. The LIT package provides a built-in simulation utility for time budget data that seeks to mimic the stochastic error structure of the original data while still preserving the underlying behavioral signal. Data is provided as a tensor, with cows indexed on the first axis, time on the second, and the component behaviors on the final axis. The count data at each cow-by-time index is then used to redraw a simulated data point from one of three optional distributions. In the first, the user may sample directly from a multinomial distribution centered around the normalized observed count vector. This model assumes that measurement error should shrink as a cow dedicates larger proportions of an observation window to specific behaviors, and intrinsically prevents estimates from being generated outside the domain of support. Variance can be underestimated at the extremes of the domain, however, if the probability of a behavior is non-negligible but the observed count is zero due to undersampling. This issue may be addressed in sampling option two, where samples are redrawn from a multivariate Beta distribution, also known as a Dirichlet distribution, again parameterized using the normalized observed count.
While this sampling strategy slightly biases the simulation towards the center of the distribution, it prevents undersampling at the extremes of the domain. Finally, users may combine these sampling strategies in sampling option three, wherein the probability vector used to parameterize the multinomial is drawn first from the Dirichlet, in order to further increase the uncertainty in the simulated data.
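A minimal sketch of these three resampling options is given below, assuming hourly count vectors over the five CowManager behaviors; the function name and the Dirichlet concentration (raw counts plus a small offset) are illustrative assumptions rather than the LIT utility's actual code.

```python
# Sketch of the three resampling options described above (not the LIT code).
import numpy as np

rng = np.random.default_rng(0)

def resample_counts(counts, option=3):
    """Redraw one cow-by-hour behavioral count vector under a chosen error model."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()                      # total sampled intervals in the window
    p_hat = counts / n                    # normalized observed count vector
    if option == 1:
        # Option 1: multinomial centered on the observed proportions.
        return rng.multinomial(int(n), p_hat)
    if option == 2:
        # Option 2: Dirichlet (multivariate Beta) proportions rescaled to n;
        # the small offset keeps zero counts from producing invalid parameters.
        return n * rng.dirichlet(counts + 1e-6)
    # Option 3: Dirichlet-multinomial: draw the probability vector first,
    # then the counts, further inflating the simulated uncertainty.
    return rng.multinomial(int(n), rng.dirichlet(counts + 1e-6))

# Example: 20 min eating, 25 ruminating, 10 non-active, 4 active, 1 highly active.
simulated = resample_counts([20, 25, 10, 4, 1], option=3)
```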

After simulation has been completed by redrawing samples at the finest level of temporal granularity supported by the sensor, the data can then be conditionally or fully aggregated along the temporal axis as required for downstream analysis as a time budget. This simulation routine was used to create an ensemble of B = 500 simulated overall time budget matrices that mimicked the stochasticity attributable to a reasonable approximation of the measurement error of the sensor. Stored as a tensor with replication on the last axis, the variance of the ensemble of simulations could then be easily calculated for each combination of cow index and behavioral axis. If the underlying simulation strategy is a reasonable representation of the noise in the sensor, then these variance terms will serve as a sufficient approximation of the relative uncertainty in each data point. We propose that this information can then be incorporated into the calculation of dissimilarity estimates by serving as penalty terms in the ensemble-weighted distance estimator defined in Equation 3.

The rescaling strategy employed in our proposed dissimilarity estimator is strongly inspired by traditional Analysis of Variance techniques, thereby providing several insights into its anticipated behavior. First, because the simulations were generated using the multinomial or one of its analogs, we can infer that these penalty terms will not be homogeneous across the domain of support, but should shrink as observations approach the boundary. This will allow the ensemble-weighted distance estimator to emulate the rescaling dynamic achieved with the KL distance, but here rescaling at the extremes of the domain will ultimately be bounded by our simulated measurement error so as not to exceed the precision of the sensor. Second, because we have here emulated measurement error in our simulation using sampling uncertainty, the central limit theorem will apply. Thus, we can anticipate that as the number of observations per animal increases, the impact of measurement error on our inferences will shrink, allowing progressively more subtle differences between animals to come into resolution. Taking this property to its limit, however, can it be said that with enough observation minutes the differences between cows can be inferred with near certainty? That intuition, of course, is at odds with our characterization of a dairy herd as a complex system, and highlights an additional stochastic element that must be accommodated: the behavioral plasticity of the cows themselves in response to changes in the production environment. Given the extended observation window of this particular dataset, it would be possible to recalculate time budgets conditional on day of observation, and then use the variance in daily time budget along each behavioral axis as a penalty term. Such estimates would collectively reflect heterogeneity in variance attributable to domain constraints, measurement error, and behavioral plasticity. Such an approach would not, however, be feasible for datasets collected over shorter time intervals with fewer replications, or in applications where the behavioral responses lack a clear hierarchical temporal structure. We therefore propose that our stochastic simulation model can be extended to also provide a generalizable means of approximating the uncertainty of the underlying behavioral signal.
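Because Equation 3 is not reproduced in this excerpt, the sketch below shows one plausible form of the ensemble-weighted dissimilarity consistent with the description above, in which squared differences on each behavioral axis are penalized by the pooled ensemble variance of the two cows; it is an illustrative assumption, not the exact estimator.

```python
# Hedged sketch of an ensemble-weighted dissimilarity (one plausible reading
# of Equation 3): axis-wise squared differences are scaled by the pooled
# per-cow ensemble variances estimated from the B = 500 simulations.
import numpy as np

def ensemble_weighted_distance(budgets, sims):
    """
    budgets : (n_cows, n_behaviors) observed overall time budgets
    sims    : (n_cows, n_behaviors, B) ensemble of simulated time budgets
    returns : (n_cows, n_cows) symmetric dissimilarity matrix
    """
    var = sims.var(axis=-1)               # per-cow, per-behavior uncertainty
    n = budgets.shape[0]
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            penalty = var[i] + var[j] + 1e-12   # pooled uncertainty, kept positive
            d2 = (budgets[i] - budgets[j]) ** 2 / penalty
            dist[i, j] = dist[j, i] = np.sqrt(d2.sum())
    return dist
```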
As before, the measurement error was simulated by redrawing samples at the finest temporal granularity provided by the sensor. Prior to compression along the temporal axis, however, a random subsample of observation days was selected across all cows, and only these values were used to calculate the overall time budgets. If all cows demonstrated comparable levels of consistency in their daily time budgets, then reducing the effective sample size of our simulated datasets through a subsampling routine would increase the ensemble variance estimates. This in turn would make our approximation of measurement error hyperconservative, but the increase would be uniform across all cows.
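The sketch below illustrates this day-subsampling step under the same assumptions as the earlier examples: one simulated replicate, already aggregated to daily budgets, is collapsed to an overall time budget using a random subset of observation days common to all cows.

```python
# Illustrative sketch of the day-subsampling routine described above.
import numpy as np

rng = np.random.default_rng(1)

def subsampled_budget(daily_sim, n_days_keep):
    """
    daily_sim : (n_cows, n_days, n_behaviors) one simulated replicate,
                aggregated from the sensor's finest granularity to daily budgets
    returns   : (n_cows, n_behaviors) overall budgets from a day subsample
    """
    n_days = daily_sim.shape[1]
    keep = rng.choice(n_days, size=n_days_keep, replace=False)  # same days for every cow
    return daily_sim[:, keep, :].sum(axis=1)
```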