
In addition to evaluating pig behaviors through RGB videos, automation can be further facilitated by other data sources. For example, videos containing depth information are useful for estimating pig body weight. Body weight is a critical trait associated with growth rate, feed efficiency, and meat biomass. Conventionally, pigs are weighed on an electronic scale in the pen, but this approach can be inaccurate when more than one pig stands on the scale, or expensive if the scale is integrated into the feeding system. A past study presented a video-based pipeline that successfully estimates pig body weight with an RGB-depth camera by segmenting pig contours. By combining that work with VTag, which can continuously track pig positions, a fully automatic pig-weighing system becomes feasible for farms with limited resources. The implemented trackers have demonstrated strong object-tracking performance in their original publications. However, in our presented results, they failed to track pigs without any human supervision. The potential reasons may lie in the difference in monitoring context. In our study, the tracked objects share similar morphological features. Even feature-rich areas, such as pig heads and tails where unique spatial patterns are observed, were difficult for the trackers to use in distinguishing one pig from another. The trackers easily lost track of the correct individual when two pigs interacted frequently within a short period. Another reason is video quality. In the papers where the trackers were published, the demonstration videos were recorded at 20 FPS or higher, whereas the studied dataset has only 6 FPS, a common setting in practical farming to reduce power consumption and save data storage. Such low-FPS videos reduce the similarity between consecutive frames and increase the chance of mismatching tracked features over time.
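As a rough illustration of how such a depth-based pipeline could feed into tracking, the sketch below segments a pig contour from a top-view depth frame and regresses simple shape features to weight. It is not the cited pipeline; the floor distance, height threshold, regression coefficients, and function names are illustrative assumptions that would have to be calibrated against scale-verified weights for a specific camera setup.

```python
# Minimal sketch (assumed setup, not the published method): segment the pig
# from a top-view depth frame and map simple shape features to weight.
import numpy as np
import cv2

def pig_features_from_depth(depth_mm, floor_mm=2200, min_height_mm=150):
    # Pixels sufficiently above the floor plane are treated as pig body.
    height_mm = np.clip(floor_mm - depth_mm, 0, None)
    mask = (height_mm > min_height_mm).astype(np.uint8)
    # Keep the largest contour to drop noise and pen fixtures.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    body = max(contours, key=cv2.contourArea)
    body_mask = np.zeros_like(mask)
    cv2.drawContours(body_mask, [body], -1, 1, thickness=-1)
    area_px = int(body_mask.sum())                          # projected area
    volume_proxy = float(height_mm[body_mask == 1].sum())   # area x height
    return area_px, volume_proxy

def estimate_weight_kg(area_px, volume_proxy, a=1.2e-5, b=3.5e-7, c=5.0):
    # Placeholder linear model; coefficients would be fit on scale-verified
    # weights rather than the arbitrary values used here.
    return a * area_px + b * volume_proxy + c

if __name__ == "__main__":
    depth = np.full((480, 640), 2200, dtype=np.float32)  # synthetic floor
    depth[150:330, 200:460] = 1800                        # synthetic pig blob
    feats = pig_features_from_depth(depth)
    if feats is not None:
        print("weight estimate (kg):", round(estimate_weight_kg(*feats), 1))
```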

Additionally, when a tracked object moves rapidly, the tracked features are more likely to be blurred in a low-FPS video. These limitations make tracking in livestock farming more difficult than a conventional tracking task, in which videos are recorded at 30 FPS, adjacent frames are assumed to be similar, and the tracked features are unique compared with other objects. We also included pre-trained models, YOLOv5 and Mask R-CNN, in our benchmark study. Their low processing frame rates indicate that real-time, long-term monitoring in livestock farming is difficult to achieve without access to GPU resources. Although we did not report their precision in the current work, the detection results are not comparable with those of the presented trackers. This is because the models were trained on the COCO dataset, which does not include top-view pig images. In some video frames, pigs are either not detected or two adjacent pigs are identified as the same object. Moreover, without further modification, these models cannot be constrained to track a fixed number of objects. These limitations make evaluation difficult when we want to compare the precision of tracking the same number of pigs. In conclusion, the results suggest that object detection models are not as suitable as object tracking algorithms for pig monitoring tasks. Further improvement can be made to the current version of VTag. For example, VTag was found to mis-assign pig identities when individuals contact each other frequently within a short time, as described earlier. Although the wrong labels can be corrected manually, doing so still requires time and effort from human supervisors. One way to reduce such errors is a strategy called template learning, which has been discussed in the literature. The general idea of this strategy is to first select video frames in which pigs are not in close range of their neighbors. Then, the pig morphology observed in the selected frames is learned as "templates". Finally, the model can use the templates to update the predictions in frames where pig positions are mis-identified because of the close distance between pigs.
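A minimal sketch of this template strategy is given below, assuming OpenCV's normalized cross-correlation as the matcher; it is not VTag's implementation, and the patch size, separation threshold, and function names are illustrative assumptions.

```python
# Sketch of template learning under assumed parameters: store per-pig patches
# from frames where animals are well separated, then match them against later
# frames to correct confused identities.
import numpy as np
import cv2

def well_separated(centers, min_dist_px=120):
    """Return True only if every pair of pig centers exceeds the threshold,
    so the frame is safe to use for template learning."""
    pts = np.asarray(centers, dtype=float)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if np.linalg.norm(pts[i] - pts[j]) < min_dist_px:
                return False
    return True

def learn_templates(gray_frame, centers, half=40):
    """Crop a fixed-size grayscale patch around each pig center as its template."""
    templates = {}
    for pid, (x, y) in enumerate(centers):
        x, y = int(x), int(y)
        templates[pid] = gray_frame[max(y - half, 0):y + half,
                                    max(x - half, 0):x + half].copy()
    return templates

def relocate(gray_frame, template):
    """Match one template over the whole frame; the best-scoring location is a
    candidate position for overwriting a suspicious prediction."""
    scores = cv2.matchTemplate(gray_frame, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, top_left = cv2.minMaxLoc(scores)
    h, w = template.shape
    return (top_left[0] + w // 2, top_left[1] + h // 2), best_score
```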

In addition to improving the algorithm, adding information from wearable devices can also increase monitoring precision. Such devices, including motion sensors, magnetometers, gyroscopes, and GPS receivers, have been widely used to monitor behavioral patterns in large farming environments. Specifically, with tracking collars worn by pastured livestock, grazing behaviors were successfully identified for cattle and sheep. The spatial resolution of outdoor studies was further improved to the centimeter level by coupling sensor collars with signal receivers deployed around the farm. In group-housed scenarios, Smartbow, a commercialized ear-tag sensor system, also demonstrated promising results in monitoring complex interactions affecting reproduction traits in swine cohorts and feeding behaviors of cows. In conclusion, by coupling the VTag algorithm with the improvements described, the automated assessment system is expected to monitor more complex farm settings.

Much consultation went into developing the survey instrument, including several preliminary discussions with farmers. Cognizant of how besieged many feel by requests for information, we wanted to make ours as welcome, or at least as tolerable, as possible. Most farmers advised that, although their time was limited, they were not strongly predisposed either to participate in or to refuse any survey that might come along. Rather, their decisions to respond would depend on how a given survey was presented, how relevant its content appeared, and how easily it could be completed. A notable guideline offered by one Fresno area farmer and confirmed by others was, "Don't make me go to my file cabinet." The survey requested information from farm operators about labor procurement, compensation, other personnel management practices, administration, legal compliance, and future outlook.

Within these broad areas the questionnaire contained specific items on employee recruitment channels, engagement of farm labor contractors and custom harvesters, use of the public Employment Service, pre-employment screening and hiring procedures, pay basis and wage structure, fringe benefits, supervision and communications with workers, use of personnel management professionals, record keeping, compliance reporting, and contact with government agencies. The questionnaire is in Appendix 1. Content, wording, organization, and format were refined over a two-month period. The draft instrument was reviewed in detail by two University of California Cooperative Extension farm advisors and pretested with three farmers who operate businesses quite different in size and other basic respects. The main objective of the pre-tests was to identify problems with meanings of terms, clarity of questions, and appropriateness of multiple choices. This phase yielded valuable guidance for refinements incorporated in the final version of the questionnaire. A shortened version, sent in the third mailing to two-thirds of non-respondents, is in Appendix 2.

The California Employment Development Department (EDD) provided identification and employment data on a specified study population from its file of employer unemployment insurance (UI) reports. The data record on each farm employer who paid wages during any quarter in 1991 included: name, mailing address, county code, and SIC code; wages paid in each quarter of 1991; and number of employees in each month of 1991. The population consisted of approximately 22,537 farm businesses. It excluded a total of 13,691 agricultural employers in the following categories during 1991: farm labor contractors; nurseries; veterinary services; other animal services; landscape and horticultural services; and grape growers in Fresno, Tulare, and Kern Counties. Each business in the UI file is typed by a 4-digit Standard Industrial Classification (SIC) code in the 01, 02, and 07 series. Of the 1991 monthly average number of job-holders in all agriculture, about 54 percent were employed by the target population of farm operators.

Use of the UI database to identify California farm owners and operators suffers from two broad problems: incomplete coverage or entries, and the imprecise basis for employer grouping. The imprecision problem stems from both the requirement that employers declare a single SIC when setting up accounts with EDD and the ambiguity built into the very structure of the SIC classification system. Because the SIC structure mixes classes defined by commodity and by function, the full complement of employers cannot be sorted on either basis. The crops actually produced by even those farm businesses properly classified by a commodity code may be difficult to identify if the farms are diversified operations or if the classes they fall into are broadly defined. Moreover, reliance on SIC codes to define the bounds of the population could have caused errors of false inclusion or exclusion, most likely of businesses with both farm and nonfarm operations. Reports from farmers who also run retail outlets or catalogue sales, for example, may or may not be under a crop code. Some farmers who had been initially classified under nonfarm SICs were excluded from the population tape provided. On the other hand, some businesses that no longer operate farms but have UI records still tagged with commodity SICs were included in the population.

The latter type of case is less problematic than the former, as questionnaire recipients who should not have been in the population could easily exclude themselves from consideration. But misclassified non-recipients who should have been, but were not, included in the population had no chance of being selected into the sample. Even after screening businesses in the UI file by SIC code, we faced many questions about who should or should not be included. The questions arose during instrument design as well as when responses were arriving.

A first step toward minimizing invalid selections into the sample was to remove from the population file all entities that reported not a single employee or dollar of payroll in 1991, and to which, if still in business, questions on labor management were not likely to be relevant. Family-run farms that procure all their paid labor through service contracts might, however, have thus been eliminated in error. A second step was to give an explicit option on the instrument for recipients who do not own or operate farms to select themselves out. Nevertheless, establishing a clear definition of "farm owner and operator" proved more important and difficult than anticipated.

The survey was designed to include farm operators from all size groups, geographic regions, and commodities represented in the population. A less extensive survey that we conducted in 1987 had a 25 percent rate of response. The initial plan for this study was to obtain 1,000 responses by sampling approximately 5,000 farmers, conservatively assuming 20 percent participation. Ultimately we altered the strategy to pursue a like number of responses by eliciting higher participation from a smaller sample. Because the proportion of larger employers in the farmer population is much smaller than the shares of production and labor they manage in California, we stratified the population by size and oversampled from the larger-size strata. Sampling was random within each of the seven size strata, thus selecting farm businesses for the survey from all regions and commodity groups.

The size measure used in stratifying the population was total wages paid in 1991, computed for each business in the UI file as the sum of its four quarterly wage figures. Businesses with both zero wages in every quarter and zero employment in every month were eliminated from consideration. The smallest 25 percent of the remaining population consisted of businesses with reported annual wages up to $9,617; the next 25 percent (those below the 50th percentile) had wages up to $32,135; the next 25 percent up to $104,728; the next 15 percent up to $294,806; the next 5 percent up to $571,435; the next 4 percent up to $1,837,310; and the top 1 percent had wages exceeding $1,837,310. The total survey sample of 2,500 was drawn such that 1,375 were from the smallest three size groups combined, 375 from the next group, 625 from the next two groups combined, and 125 from the top-size group. Businesses selected to the sample that had incomplete addresses on file were replaced through random drawings from their respective size strata. Using employment or payroll data from the UI file as indicators of farm business size may have led to mis-classification by size, because the amount and cost of labor procured through contract is not represented in these UI records.
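For illustration only, the stratified draw described above could be reproduced roughly as in the sketch below. It is not the original procedure: the column names, the use of pandas, and the split of the pooled stratum totals into per-stratum counts are assumptions, while the wage cut points and group totals mirror the figures reported in the text.

```python
# Illustrative stratified sample from a UI-style payroll file (assumed schema:
# columns "annual_wages" and "max_monthly_emp").
import numpy as np
import pandas as pd

def draw_sample(ui_file: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    # Drop entities with no payroll and no employment in any period.
    active = ui_file[(ui_file["annual_wages"] > 0) | (ui_file["max_monthly_emp"] > 0)]
    # Wage cut points reported in the text (1991 dollars).
    cuts = [-np.inf, 9_617, 32_135, 104_728, 294_806, 571_435, 1_837_310, np.inf]
    active = active.assign(stratum=pd.cut(active["annual_wages"], cuts, labels=False))
    # Per-stratum sample sizes; splitting the pooled totals (1,375 and 625)
    # evenly across their strata is an assumption made for this sketch.
    n_by_stratum = {0: 459, 1: 458, 2: 458, 3: 375, 4: 313, 5: 312, 6: 125}
    picks = []
    for s, n in n_by_stratum.items():
        pool = active[active["stratum"] == s]
        take = min(n, len(pool))
        picks.append(pool.sample(n=take, random_state=int(rng.integers(1 << 31))))
    return pd.concat(picks)
```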