There are three steps to fitting the parametric density functions to the farm size variables.

As outlined in Sumner and in Leiby and Sumner, the human capital element remains prevalent throughout economic explanations of farm exit. Of course, the age of farm operators plays a key role. Macdonald et al. discuss the advanced age of many dairy farmers and the fact that many dairy farms are family-run, suggesting that exits will increase as more farmers choose to retire. Furthermore, that study relates the probability of exit to farm size, finding that both an older operator and a smaller farm size increase the probability of exit.

This section discusses the sample used in this analysis and details changes in the COA questions that are relevant to it. The research utilizes COA data from 2002, 2007, 2012, and 2017 for six select states: California, Idaho, New Mexico, New York, Texas, and Wisconsin. The results presented have gone through a disclosure review process: no data are specific to individual farms; instead, farms are characterized more generally. Although the COA is federally mandated, it does not collect data on every U.S. farm, and responses are therefore weighted to create a sample that most accurately reflects the true population of U.S. farms. As discussed in Chapter 2, I use a specific definition of a commercial dairy in order to capture dairies with significant engagement with the dairy industry. A commercial dairy, for the purposes of this analysis, is a farm with at least 20 milk cows on the farm as of December 31 of the Census year and with dairy or milk sales revenue above the milk sales revenue that 30 milk cows would have generated.

The survey questions asked of farmers and ranchers by the COA change slightly every Census round, although most remain the same across time. Below are descriptions of question changes for the variables relevant to the analysis. First, in 2002 and 2007, farms were asked for the total amount of dairy sales in that year, but in 2012 and 2017 this question was dropped and replaced with the total amount of milk sales. Furthermore, whether the dairy farm had any level of organic production was asked only in 2007, 2012, and 2017. Second, operator characteristic questions have become more detailed over the years and have allowed more information about operators to be collected. In 2002, 2007, and 2012, the COA asked detailed operator characteristic questions about up to three operators, but only one operator was identified as the principal operator. In 2017, the COA expanded its detailed operator questions to include up to four operators and allowed up to four operators to be identified as principal operators. In this chapter, the operators for whom the number per farm is limited and detailed information is provided will be referred to as the “core operators.” There is no limit to the number of other operators listed per farm, and only the gender of each such operator and the total number per farm are provided in the Census.

The COA has three potentially relevant farm size variables for dairy farms: the number of milk cows, the value of farm production, and the value of milk or dairy sales. I utilize all three in this chapter but focus particular attention on the number of milk cows for the kernel density graphs. I characterize the distributions of the number of milk cows per commercial dairy farm using two approaches. One approach is to fit a nonparametric distribution, by year and by state for each year, to the data on milk cow herd size per farm.
The other approach is to fit two commonly used parametric distributions to characterize dairy farm size distributions at the national level and for individual states across Census years.
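The nonparametric approach can be sketched with a hand-rolled Gaussian kernel density estimate. The herd sizes below are synthetic stand-ins (the COA microdata are restricted-access), and the bandwidth rule is a standard default, not necessarily the one used in this analysis:

```python
import math
import random

# Synthetic stand-in for milk-cow herd sizes; the real COA microdata are
# restricted-access, so these numbers are illustrative only.
random.seed(0)
draws = [random.lognormvariate(5.0, 1.0) for _ in range(500)]
herd_sizes = [x for x in draws if x >= 20]  # commercial-dairy herd cutoff

def make_kde(data, bandwidth=None):
    """Gaussian kernel density estimator; bandwidth defaults to Silverman's rule."""
    n = len(data)
    if bandwidth is None:
        mean = sum(data) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
        bandwidth = 1.06 * sd * n ** (-0.2)
    norm = n * bandwidth * math.sqrt(2 * math.pi)
    def density(x):
        return sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in data) / norm
    return density

kde = make_kde(herd_sizes)

# Sanity check: the estimated density should integrate to roughly one.
lo, hi, steps = -2000.0, 20000.0, 2000
width = (hi - lo) / steps
area = sum(kde(lo + (i + 0.5) * width) for i in range(steps)) * width
print(round(area, 2))
```

Evaluating `kde` on a grid of herd sizes gives the smooth curves plotted in the kernel density graphs.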

One aim of my thesis is to characterize the farm size distribution of dairy farms, and fitting parametric density functions serves as a starting point for characterizing and analyzing the dairy size distribution. It is common in farm size analysis to fit parametric density functions to characterize the farm size distribution, and as explained above, previous literature has used parametric distributions for this purpose; this research provides evidence that the commonly used distributions do not fit the U.S. commercial dairy industry well. I create kernel density plots for the herd size distribution by state across the years and then fit two common parametric density functions to the distribution. This section is structured as follows: first, a brief overview of the mathematics used in fitting parametric density functions, followed by the fitting procedure and results. First, I hypothesize, based on the kernel density plots, which distributions seem reasonable. For this analysis I use the log normal and the exponential distributions, as these are two common choices in the farm size literature and are likely shapes for most farm size distributions. The log normal is the typical selection, as it is the distribution referenced in Gibrat’s Law. The exponential distribution was selected because it can account for a similarly skewed shape with a simpler, one-parameter form. Second, I estimate the parameters needed to form each distribution in order to generate an estimated distribution of random numbers that follow it. For this analysis, the measures of farm size, the number of milk cows for each farm, are random variables x1, x2, x3, …, xn, where n is the sample size of farms, whose joint distribution depends on the distribution parameters. For example, for the log normal the parameters are the mean and variance of the log of herd size, while the exponential distribution has a single rate parameter.
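The estimation step described above can be sketched with the closed-form maximum-likelihood estimators of both candidate distributions, again on synthetic herd sizes standing in for the restricted Census data:

```python
import math
import random

# Synthetic herd sizes standing in for the restricted COA microdata.
random.seed(1)
herd_sizes = [x for x in (random.lognormvariate(5.0, 1.0) for _ in range(1000))
              if x >= 20]

# Log-normal MLE: mean and standard deviation of log herd size.
logs = [math.log(x) for x in herd_sizes]
n = len(logs)
mu_hat = sum(logs) / n
sigma_hat = math.sqrt(sum((v - mu_hat) ** 2 for v in logs) / n)

# Exponential MLE: the rate parameter is the reciprocal of the sample mean.
rate_hat = n / sum(herd_sizes)

print(round(mu_hat, 2), round(sigma_hat, 2), round(rate_hat, 4))
```

With these estimates in hand, random draws from the fitted log normal or exponential can be generated and compared against the observed herd-size distribution.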

The estimates of the parameters are functions of the milk cow herd size variable in question. From there, I use the parameter estimates to generate a distribution with those same parameters and compare it to the actual distribution of the number of milk cows. Some estimated parametric distributions appear to have slight irregularities; these are due to the number of observations and the imposed parameters.

This section summarizes the resulting farm size graphs and details the trends across time and states. Overall, when looking at the six select states together, commercial dairy farm distributions have shifted toward larger dairies. In 2002, there was a clear peak in the number of farms with fewer than 200 milk cows, but that peak falls significantly from 2002 to 2017, while the 2017 distribution shows a clear increase in farms with larger herd sizes. Although this graph gives interesting detail about the trends in herd size for the U.S. overall, it is mostly characterized by Wisconsin and New York, which have a significantly larger share of the number of commercial dairies and tend to have smaller herd sizes relative to other states. The graph clearly shows that a large share of dairies still have a herd size of fewer than 200 milk cows, despite the relative shift in herd size.

Moving to state-specific trends, California dairies have had larger herd sizes than dairies in other states, such as New York or Wisconsin, across all years. California had a peak in the share of dairies with fewer than 1,000 milk cows from 2002 to 2017, but the peak fell significantly between 2007 and 2012. There was a clear shift in 2012, with an increase in the 1,000 to 2,000 milk cow herd size range, and then another shift in 2017 in the 2,000 to 3,000 milk cow range. This documents a clear movement of California dairies toward larger herd sizes and away from smaller herd sizes.
Idaho had a large peak in commercial dairies with fewer than 500 milk cows in 2002 and then a significant drop in that peak in 2007, with smaller subsequent decreases in 2012 and 2017. Interestingly, in 2007 there was an increase in the number of dairies with a milk cow herd size between 500 and 1,000, but then a subsequent decrease in the following Census round. In 2017 there was a clear increase in the number of commercial dairies with a milk cow herd size between 1,500 and 2,000. New Mexico had one of the more unique herd size distributions, with no clear peak in the smaller herd size ranges. From 2002 to 2007, there was a clear drop in the density of commercial dairies with fewer than 1,000 milk cows and a relative increase in the density of commercial dairies with 1,000 or more milk cows. Then in 2012, there was a shift toward commercial dairies with more than 2,000 milk cows and a downward shift in commercial dairies in the 500 to 1,000 milk cow herd size range. This trend continued in 2017, with even further shifts in each direction. From 2002 to 2017, New York saw a slight decrease in the smaller herd sizes and a slight increase in the larger herd sizes. In Texas, the most distinct trend was a significant drop in the density of commercial dairies with herd sizes of fewer than 500 milk cows between 2012 and 2017.

There had previously been a trend of decreases in this herd size range, following a pattern similar to most other states; however, no other state saw such a significant drop. In 2017, there was an increase in commercial dairies with more than 1,000 milk cows. In Wisconsin, there was a significant decrease in commercial dairies with fewer than 100 milk cows from 2007 to 2012, and then again from 2012 to 2017. In 2017, there was an increase in commercial dairies with a herd size between 150 and 200 milk cows. Wisconsin’s dairy industry is characterized by a significant number of smaller dairies and few dairies with large milk cow herd sizes. Across the states, there is a trend of consolidation, with fewer commercial dairies and an increase in the number of dairies with larger herd sizes. Despite the decrease in the number of farms in each state, the number of milk cows increased in some states and broadly remained relatively stable. California had a 6.7% increase in the number of milk cows from 2002 to 2017, while Idaho had a 55% increase. The number of milk cows in New Mexico and Wisconsin remained roughly the same. There was a 6% decrease in the number of milk cows in New York, and the number in Texas grew by more than 70%.

Neither of the two parametric distributions fit the national data well. In particular, both the log normal and exponential distributions failed to capture the very high mode at the low herd sizes in 2002. The herd sizes in California did not fit either distribution well in 2002 or 2017. Idaho has a large peak in the smaller ranges that is well above either the log normal or the exponential distribution in 2002. That peak falls significantly in Idaho’s 2017 herd size distribution, which somewhat follows a log normal pattern, but not very well.
New York follows a similar pattern, with the smaller herd size peak being significantly higher than either the log normal or the exponential peak in 2002 and 2017. As we saw across years in Texas, the herd size shifted dramatically; in 2002, the herd size distribution slightly resembled a log normal trend but with definite deviations, and in 2017 it did not follow either distribution well. Wisconsin follows a similar pattern to New York, with no clear distributional trend in 2002 or 2017, but with significantly high peaks in the lower herd size range that deviate from the fitted distributions.

As explained above, there are several possible influences, but given the Census data, I have chosen the following variables: characteristics of the operators, farm sales diversification across commodities, and the share of farm operators who have off-farm employment. I also account for state fixed effects and Census year fixed effects. Clearly, sales diversification and off-farm work are jointly determined with dairy farm size, so I do not claim to be measuring a causal impact in the regressions discussed in this section. The aim here is to describe statistical relationships between these characteristics and the farm size measures; although they cannot be thought of as directly influencing farm size, the relationships are of interest and allow for discussion of the characteristics of the U.S. commercial dairy.

The liquid manure samples collected from flushed pits were termed Flush Manure (FM).

However, the impact of animal waste-borne microbiomes on the environment, including soil, water, and plants, is not well understood. The United States Department of Agriculture estimates that there are approximately 450,000 animal feeding operations in the U.S., spanning the beef cattle, dairy, poultry, and swine production industries. Annually, over 2 billion tons of animal manure are generated in the U.S. In California alone, 60 million tons of manure are produced annually by 5.2 million cattle and calves, and a considerable portion of that manure is applied to cropland as fertilizer. While the use of manure as fertilizer on cropland has numerous benefits, such as reducing chemical fertilizer application, additional understanding of how animal waste-borne microbiomes could impact cropland and public health is needed to utilize the full potential of manure and to understand any consequential negative impacts of manure on cropland and the environment. Elevated pathogen/pathogen indicator levels in surface and ground water, and their potential linkages with animal waste, have received considerable public attention because of the associated public and animal health risks and produce contamination. In general, the use of fresh and untreated manure as fertilizer has a greater potential to increase pathogen loads on cropland, and these bacterial populations can subsequently be transported to rivers and streams during rainfall/runoff events. Furthermore, the use of untreated manure as fertilizer can facilitate the transfer of harmful bacterial populations to ready-to-eat crops. To control the bacterial loads in manure used as fertilizer, several manure treatment practices are used, such as composting, anaerobic lagoon systems, and anaerobic digestion.

Previous studies showed that pathogens such as E. coli, Salmonella, and Listeria in dairy manure are reduced through these waste treatment processes, though the complete elimination of these pathogens is uncertain. Further, existing knowledge is weak regarding changes in the microbial community of manure after various manure handling processes, such as solid-liquid separation, manure piling, and lagoon storage. In a typical large California dairy, both liquid and solid manure are produced by flush manure management systems, which are common in California’s Central Valley. In such systems, a dairy barn is flushed with water, and the flushed manure is passed through solid-manure separation systems, where liquid manure is separated from solids. Solid streams are stored in piles, and liquid manure streams are stored in lagoon systems prior to the application of the manure to cropland as fertilizer. Although both lagoon systems and compost piles are used extensively to manage dairy manure in California, the efficacy and effectiveness of these manure handling processes for regulating microbial populations are not well understood. For these practical reasons, which bear considerably on agriculture, manure management on dairy farms, and manure application to cropland, we hypothesized that the microbial quality of dairy manure should change with on-farm manure handling/treatment processes. Moreover, this change in microbial population should be consistent from one farm to another. Such changes or shifts in the microbial population of manure, combined with the continuous long-term use of that manure as fertilizer on a cropland, have the potential to impact the microbiome of the cropland receiving it.
Therefore, an understanding of how the dominant bacterial community changes under typical dairy manure management practices in a farm environment is essential.

Although numerous previous studies have investigated the inactivation of selected bacterial pathogens such as E. coli, Salmonella, and Listeria under specific conditions, these studies mainly focused on understanding the inactivation of selected human pathogens in various treatment processes.

Insights into how various microbial populations change at the genus level in particular processes are crucial but not well reported. Having such information can help improve currently available manure management techniques and support decision-making about using manure as fertilizer on a specific cropland. Previous studies have used high-throughput microbial community profiling methods to gain insights into microbial community distribution in different environments. Amplicon-based community analysis has been used to determine the microbial communities in various samples, including food samples, anaerobic sludge, biosolids, natural environments, and agricultural grasslands. These methods have also been applied to raw dairy manure. However, the application of these methods to understand the microbial communities of manure fertilizers processed at various levels of treatment has not yet been explored. As a test of our hypothesis about differences in the microbiome of manure fertilizers, the goal of this study was to quantify the microbial population levels of various forms of dairy manure, such as the liquid and solid forms produced in a typical dairy farm environment in the California Central Valley. The objectives of the study are to: 1) determine the dominant microbial communities in solid and liquid forms of dairy manure fertilizer; and 2) understand the changes in the microbiome of manure after solid-liquid separation, lagoon storage, and manure piling. We anticipate that the outcomes of this study will provide greater understanding of the microbial quality of dairy manure fertilizers and will help in making informed decisions.
Further, improved insights will help in advancing dairy manure management and manure application, and in understanding the environmental and public health risks associated with animal waste-borne microbial pathogens.

The solid and liquid samples were collected from dairy farms in the California Central Valley, which has the densest concentration of dairy farms in California. Fig 2 shows county maps of Tulare, Glenn, and Merced, including the herd size in Tulare and Merced Counties.

For the current study, we collected 33 manure samples, including solid and liquid manure. Solid manure samples were collected from manure piles located in dairy facilities, while liquid samples were collected from liquid manure storage ponds as well as from flushed manure pits. The dairy facilities used for sample collection are located in three counties. From each dairy facility, we collected one liter of liquid manure in sterile bottles from each pond and 600 g of solid manure in sterile bottles from each pile. Immediately after collection, samples were transported in a cooler and subsequently stored at -20˚C prior to analysis. For analysis, samples were thawed at room temperature. The solid samples collected from piles that were less than 2 weeks old were termed Fresh Pile (FP), while older piles were termed Compost Pile (CP). It is important to note that the studied CP samples were not necessarily subjected to a standard composting process, in which maintaining a thermophilic temperature and mixing are necessary. The liquid manure samples collected from Primary Lagoons and Secondary Lagoons were termed PL and SL, respectively.

The microbial diversity assessment of solid and liquid wastes using phylotype taxonomy resulted in a total of 1818 taxa. Approximately 85% of the 1818 taxa were classified at the genus level and 10% at the family level. In FP solid samples, sequence reads varied from 13,950 to 453,625, with an average of 153,316. In solid samples from the CP pile, sequence reads varied from 15,798 to 1,092,032, with an average of 296,153. The average reads for FM were 333,450, with a range from 5,242 to 989,040. In PL and SL liquid samples, the average sequence reads were 186,341 and 130,888, respectively. FP samples, which were not dried or composted, showed an abundance of bacteria of the genera Acinetobacter and Enterococcus; these bacteria were the most common, accounting for 3.5%–39.53% and 4.8%–11.86%, respectively.
In CP samples, which were either dried or composted, the proportion of bacteria of the genus Acinetobacter ranged from 18.3% to 19.2%. Other abundant taxa in CP samples were Flavobacteriaceae, Bacillaceae, Pseudoxanthomonas, Clostridia, and Sphingobacterium, accounting for 3.9–24.9%, 7.2–7.3%, 2.8–5.5%, 4.5–6%, and 2.1–3.1%, respectively. Other abundant taxa in FP samples were Bacteroidetes, Trichococcus, Clostridiales, Flavobacterium, and Psychrobacter. A heat map of the top 50 of the 1818 taxa is shown in Fig 3. In FM samples, the most common taxon was Ruminococcaceae, varying from 7.2 to 13.1%. Taxa such as Bacteroidetes and Clostridium varied from 4.2–11.5% and 3.5–9.7%, respectively. In lagoon samples, however, the most common taxa were Bacteroidetes, Flavobacteriaceae, and Psychrobacter, accounting for 11.1–15.9%, 3.3–13.1%, and 19.3–29.6%, respectively. Dendrograms and PCA plots are shown in the supplementary figures. The application of an algorithm based on the abundance criteria resulted in 128 taxa, and the analysis of the top 50 communities among these 128 taxa is shown in Fig 4. The dendrogram and PCA of CP, FM, FP, PL, and SL are shown in Fig 4A and 4B, respectively. The heat map shows the distribution of microbial communities in CP, FM, FP, PL, and SL. In the dendrograms, the horizontal axis represents the distance of dissimilarity between clusters, and the vertical axis represents objects and clusters. Results showed that FM is more similar to PL than to SL. Furthermore, the similarity between FM and PL was greater than that between SL and CP. The frequency and abundance of the 128 taxa in solid manure samples are shown in the supplementary file, and these characteristics for liquid samples are shown in the supplementary table.

Sample grouping tendency among the 128 taxa was evaluated using PCA. The PCA score plot shows a two-dimensional plot of the 33 samples. The first two principal components explained 56.7% of the total variance in the microbial community composition.
FP and CP samples clustered together on one side of the plot, while FM, PL, and SL clustered together on the other. As shown in the figure, the CP and FP groups were similar to each other and distinct from PL, SL, and FM. Further, PL and SL were grouped together. The clear separation of CP and FP from PL, SL, and FM indicates that manure handling processes adopted on dairy farms, such as solid-liquid separation, have the potential to alter the microbial communities in the manure. Together, these results demonstrate that the form of the manure fertilizer affects its microbial quality, which supports the central idea of our hypothesis.
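The ordination step can be illustrated with a minimal PCA implemented from scratch on a toy genus-abundance table; the counts, sample groups, and taxa below are invented for illustration and are not the study’s data:

```python
import math
import random

# Toy genus-abundance table (12 samples x 3 taxa): "solid"-type samples load
# on taxon 0, "liquid"-type on taxon 1. Invented numbers, not the study's data.
random.seed(2)
def make_sample(kind):
    base = [5.0, 1.0, 1.0] if kind == "solid" else [1.0, 5.0, 1.0]
    return [b + random.gauss(0.0, 0.3) for b in base]

counts = [make_sample("solid") for _ in range(6)] + \
         [make_sample("liquid") for _ in range(6)]

# Relative abundance per sample, then column-center the matrix.
rel = [[c / sum(row) for c in row] for row in counts]
n, p = len(rel), len(rel[0])
means = [sum(row[j] for row in rel) / n for j in range(p)]
X = [[row[j] - means[j] for j in range(p)] for row in rel]

# Sample covariance matrix of the centered data.
cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
        for b in range(p)] for a in range(p)]

def top_eigen(m):
    """Leading eigenvalue/eigenvector of a symmetric matrix by power iteration."""
    v = [1.0] * len(m)
    for _ in range(500):
        w = [sum(m[i][j] * v[j] for j in range(len(m))) for i in range(len(m))]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(m[i][j] * v[j] for j in range(len(m))) for i in range(len(m)))
    return lam, v

lam1, v1 = top_eigen(cov)
# Deflate the leading component and extract the second.
cov2 = [[cov[a][b] - lam1 * v1[a] * v1[b] for b in range(p)] for a in range(p)]
lam2, _ = top_eigen(cov2)

# Share of total variance captured by the first two principal components.
explained = (lam1 + lam2) / sum(cov[j][j] for j in range(p))
print(round(explained, 2))
```

With only three compositional taxa the first two components necessarily capture nearly all of the variance; on the study’s 128-taxon table the analogous figure was 56.7%.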

The abundance of genera in each sample is shown in a heat map, in which light blue indicates low abundance and dark red indicates high abundance. The results for the top 50 genera present in the samples indicate that species abundance differs among samples. In FP, the most abundant taxa, Acinetobacter, Psychrobacter, and Enterococcus, accounted for 9.9%, 2.5%, and 2.5%, respectively; other, unclassified bacteria in FP accounted for 31%. In CP, the most abundant taxa included Planifilum, Acinetobacter, and Flavobacteriaceae, accounting for 6.4%, 6.0%, and 4.4%, respectively. Unclassified bacteria in CP accounted for 25.6%.

To understand the distinction between microbial communities in solid and liquid samples, the data for liquid and solid samples were grouped separately. The dendrogram plot showed clustering among solid and liquid samples. Results indicated that the liquid samples were all more similar to each other than to the solid samples. The PCA plot displayed the solid samples clustered mostly to the left side and the liquid samples clustered to the right side, indicating a clear separation between liquid and solid samples. The top 50 genera among the 128 taxa are presented in a heat map. Results showed that the top 22 taxa listed at the top of the heat map were more abundant in solid samples than in liquid samples. These taxa include Bacteroidetes, Ruminococcaceae, Flavobacteriaceae, Clostridium, Cloacibacillus, Petrimonas, Psychrobacter, and Proteiniphilum. The bottom 28 taxa were more abundant in liquid samples than in solid samples, and these microbial communities include Smithella, Pseudomonas, Sporobacter, Treponema, and Aminivibrio.
Based on the canonical analysis, the genera Gp4, Nocardioides, and Caryophanon were highly correlated with solid manure, while the genera Succiniclasticum, Porphyromonas, Methanospirillum, Anaeroplasma, Armatimonadetes, Eubacterium, Vampirovibrio, Anaerovorax, and Lactonifactor, and the family Porphyromonadaceae, were highly correlated with liquid manure. Canonical values from the discriminant analysis were also used to identify bacteria that were highly correlated with, and led to differentiation of, FP vs CP and FM vs LM. The genus Coraliomargarita was highly correlated with FP, and the genus Ruania and the family Peptococcaceae were highly correlated with CP. From this analysis, we observed that the genera Bifidobacterium, Murdochiella, Nitrosomonas, Arcanobacterium, Gallicola, and Kurthia were highly correlated with FM. Overall, FM and LM had more similar microbial composition and diversity than did FP and CP.

Categorization of land types such as farms and forests was done manually using satellite imagery.

Calibrated with the space utilization data from this study, such models could become more realistic in terms of transmission dynamics and could provide more accurate and precise estimations for tackling infectious diseases cost-effectively. The study has several limitations. It was a pilot study and had a limited sample size. Most participants were adult males: potential female participants said they rarely go beyond village boundaries and thus were not eligible for inclusion in the study. This may have introduced a selection bias, but it also points to the fact that the mobility preferences of the two genders were so different that they were essentially two different populations requiring separate analyses. The most commonly reported occupation was farming, and most people in this study area do indeed farm for at least part of the year. However, people in the study area usually perform different types of work according to the season, and assigning a single occupation to a person may not be appropriate. Employment in this region is almost entirely informal, and most working-age men will work in agriculture for part of the year and in other types of labor during other parts of the year. Responses to surveys about employment will therefore vary by the time of year, even within a single research participant. While we believe that this cohort is representative of adult males in this setting, more studies that are demographically representative of rural villages in this setting could be useful for understanding differences in travel patterns by age and gender. Mobile GPS devices have their own limitations. As explored in the Extended data: Figure S2, their readings could be inaccurate. Because of their small size, their battery capacity was limited. During the study period, participants may have failed to carry the GPS device. Mechanical failures may also cause problems in data collection.

Even though the utmost care was taken to preserve data integrity, there could be errors and bias from data collection or data manipulation. While the categories do match our authors’ understanding of the area, no on-the-ground validation was done after categorization for this analysis. Our estimation of home location, taken as the median center of all the GPS points where the participant spent the night, each of which is in turn derived from the last GPS point of the day between 6 pm and midnight, may not be robust enough to capture the actual home location. This could be overcome in future studies by having the field supervisors record each participant’s home location with a GPS device. The categorization of the home area may be too wide to discern land use that is very close to home. Finally, the estimation of land utilization is imperfect regardless of the method used. Requiring two consecutive GPS points to constitute usage of a land area provides too crude a result. While the BRB method provides more accurate and precise estimates, it is not without caveats. The BRB approach assumes that consecutive points more than three hours apart were uncorrelated. Since the GPS logger went into sleep mode while stationary, the current land utilization estimation under-estimates the time spent motionless, resulting in lower usage of home in Extended data: Figure S6 compared to that in Figure 2.

Starting as a technology of rural electrification programs, electricity-generating wind turbines have developed into one of the largest renewable energy power production facilities on the planet. The latest estimates from the International Renewable Energy Agency show that onshore wind is already at grid parity with fossil fuel electricity.

According to the latest reports of the Energy Information Administration of the U.S. Department of Energy and the annual energy report of the European Commission, the levelized cost of energy for onshore wind falls within a range of $0.04–$0.10 per kWh, making it extremely cost-competitive with conventional power sources such as coal, integrated gasification combined cycle, and nuclear energy. Moreover, wind energy is the fastest-growing power production sector. To provide a comparison: during the period from 2000 to 2012, the installed capacity of nuclear power plants increased by only 9 GW, while the increase for wind power was 266 GW, and around 100 GW for solar power plants. Further, wind turbine technologies have the largest remaining cost-reduction potential, which can be achieved through advanced research and development. During the last several decades, engineers and scientists have put significant effort into developing reliable and efficient wind turbines. Since the 1970s, most of the work has focused on the development of horizontal-axis wind turbines (HAWTs). Vertical-axis wind turbines (VAWTs) were generally considered a promising alternative to HAWTs, and before the mid-90s VAWTs were economically competitive with HAWTs at the same rated power. However, as market demand for electric power grew, VAWTs were found to be less efficient than HAWTs for large-scale power production. In recent years, offshore wind energy has been getting increased attention. The total global installed capacity of offshore wind reached 4.1 GW at the end of 2011. Far from shore, energy can be harvested from stronger and more sustained winds, and noise generation and visual impact are no longer limitations in turbine design. In offshore environments, large-size HAWTs are at the leading edge. They are equipped with complicated pitch and yaw control mechanisms to keep the turbine in operation under wind velocities of variable magnitude and direction, such as wind gusts.
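The levelized cost figures quoted above come from the standard definition of LCOE: discounted lifetime costs divided by discounted lifetime generation. A minimal sketch with illustrative inputs (not the EIA’s or the European Commission’s actual assumptions):

```python
# Minimal LCOE sketch: discounted lifetime costs divided by discounted
# lifetime energy. All inputs are illustrative, not EIA figures.
capex = 1500.0          # $ per kW installed
opex = 40.0             # $ per kW-year, operations and maintenance
capacity_factor = 0.35  # typical onshore-wind range
rate = 0.07             # discount rate
years = 20              # assumed project lifetime

energy_per_kw_year = 8760 * capacity_factor  # kWh generated per kW per year

disc_costs = capex + sum(opex / (1 + rate) ** t for t in range(1, years + 1))
disc_energy = sum(energy_per_kw_year / (1 + rate) ** t for t in range(1, years + 1))

lcoe = disc_costs / disc_energy  # $/kWh
print(round(lcoe, 3))
```

With these stylized inputs the result lands inside the $0.04–$0.10 per kWh range cited in the reports.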

One of the most challenging offshore wind turbine designs is the floating wind turbine. Since 2009, the practical feasibility and per-unit economics of deep-water, floating-turbine offshore wind have been demonstrated. The world’s first floating full-scale offshore wind turbine was launched in the North Sea off the coast of Norway by the Norwegian energy giant StatoilHydro in 2009. The turbine, known as Hywind, rests upon a floating stand that is anchored to the seabed by three cables; water and rocks are placed inside the stand to provide ballast. The world’s second full-scale floating wind turbine, named WindFloat, was designed by Principle Power and launched in 2011 off the coast of Portugal. In 2013, as part of the US Department of Energy’s Wind Program, the VolturnUS, the first offshore wind turbine in the Americas, was powered up to provide electricity. Later that year, Japan switched on the first floating turbine at a wind farm 20 kilometres off the coast of Fukushima. To date, there are many projects to build floating wind turbine farms in Asia, Europe, and the Americas. Moreover, wind-energy technologies are maturing, and several studies were recently initiated that involve placing VAWTs offshore, such as the DeepWind project by Riso DTU National Laboratory for Sustainable Energy, among others. As grid connection and energy storage remain problems for large-scale wind turbines, especially offshore, urban areas, closer to the direct consumer, become very attractive. Recently, VAWTs have resurfaced as a good source of small-scale electric power for urban areas. There are two main configurations of VAWTs, employing the Savonius or Darrieus rotor types. The Darrieus configuration is a lift-driven turbine: the power is produced from the aerodynamic torque acting on the rotor. It is more efficient than the Savonius configuration, a drag-type design in which the power is generated through momentum transfer.
The main advantage of VAWTs over HAWTs is their compact design. The generator and drive train components are located close to the ground, which allows for easier installation, maintenance, and repair. Another advantage of VAWTs is that they are omnidirectional, which obviates the need to include expensive yaw control mechanisms in their design. However, this brings up issues related to self-starting. The ability of VAWTs to self-start depends on the wind conditions as well as on the airfoil designs employed. Studies have reported that a three-bladed H-type Darrieus rotor using a symmetric airfoil is able to self-start. It has also been shown that significant atmospheric wind transients are required to complete the self-starting process for a fixed-blade Darrieus turbine when it is initially positioned in a dead-band region, defined as the region of tip-speed-ratio values that result in negative net energy produced per cycle. Self-starting remains an open issue for VAWTs, and an additional starting system is often required for successful operation. As wind power production demands grow, wind energy research and development need to be enhanced with high-precision methods and tools. These include time-dependent, full-scale, complex-geometry advanced computational simulations at large scale. Thus, computational analysis of wind turbines, including fluid-structure interaction (FSI) simulations at full scale, is important for accurate and reliable modeling, as well as blade failure prediction and design optimization. Due to the increased recent emphasis on renewable energy, and, in particular, wind energy, aerodynamics modeling and simulation of HAWTs in 3D has become a popular research activity. FSI modeling of HAWTs is less developed. Accurate and robust full-machine wind-turbine FSI simulations engender several significant challenges when it comes to modeling of the aerodynamics.
In the near-tip region of the offshore wind-turbine blades, the flow Reynolds number is O(·), which results in fully turbulent, wall-bounded flow. In order to accurately predict the blade aerodynamic loads in this regime, the numerical formulation must be stable and sufficiently accurate in the presence of thin, transitional turbulent boundary layers.

Recently, several studies were reported showing validation at full scale against field-test data for medium-size turbines, and demonstrating feasibility for application to larger-size offshore wind-turbine designs. However, 3D aerodynamics and FSI modeling of VAWTs is lagging behind. The majority of the computations for VAWTs are reported in 2D, while a recent 3D simulation employed a quasi-static representation of the air flow instead of solving the time-dependent problem. The aerodynamics and FSI computational challenges in VAWTs differ from those in HAWTs due to the differences in their aerodynamic and structural design. Because the rotation axis is orthogonal to the wind direction, the wind-turbine blades experience rapid and large variations in the angle of attack, resulting in an air flow that is constantly switching from being fully attached to being fully separated, even under steady wind and rotor speeds. This, in turn, leads to high-frequency and high-amplitude variations in the aerodynamic torque acting on the rotor, requiring finer mesh resolution and smaller time-step size for accurate simulation. VAWT blades are typically long and slender by design. The ratio of chord length to blade height is very low, requiring finer mesh resolution also in the blade-height direction in order to avoid using high-aspect-ratio surface elements, and to better capture turbulent fluctuations in the boundary layer. High-fidelity modeling of the underlying aerodynamics requires a numerical formulation that properly accounts for this flow unsteadiness and is valid for all flow regimes present. It is precisely this unsteady nature of the flow that creates significant challenges for the application of low-fidelity methods and tools to VAWTs. Another challenge is to represent how the turbulent flow features generated by the upstream blades affect the aerodynamics of the downstream blades.
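The angle-of-attack swings described above follow directly from velocity-triangle kinematics. A minimal sketch, neglecting induced flow (which a real simulation would not): a Darrieus blade at azimuth θ and tip-speed ratio λ sees tan α = sin θ / (λ + cos θ), so the peak |α| over a revolution is asin(1/λ).

```python
import math

def angle_of_attack_deg(theta_deg: float, tsr: float) -> float:
    """Geometric angle of attack of a Darrieus blade at azimuth theta,
    from the velocity triangle (induced flow neglected):
        tan(alpha) = sin(theta) / (tsr + cos(theta))
    """
    th = math.radians(theta_deg)
    return math.degrees(math.atan2(math.sin(th), tsr + math.cos(th)))

# Peak angle of attack over one revolution for two tip-speed ratios.
peaks = {tsr: max(abs(angle_of_attack_deg(th, tsr)) for th in range(360))
         for tsr in (2.0, 4.0)}
print(peaks)  # lower tip-speed ratio -> much larger swings (about 30 vs 14.5 deg)
```

Since static stall for typical airfoils occurs near 12–15°, swings of ±30° at λ = 2 imply repeated attachment and separation every revolution, consistent with the unsteady loading described above.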
The VAWT simulation complexity is further increased when several VAWTs are operating in close proximity to one another. Due to their compact design, VAWTs are often placed in arrays with spacing that is a little over one diameter between the turbine towers. In [1], this type of placement was found beneficial for increased energy production. When the FSI analysis of VAWTs is performed, the simulation complexity increases further. The flexibility in VAWTs does not come from the blades, which are practically rigid, but rather from the tower itself and its connection to the rotor and the ground. As a result, the main FSI challenge is to be able to simulate a spinning rotor that is mounted on a flexible tower. In order to account for the aforementioned challenges, the FSI formulation should be robust, accurate, and efficient for the targeted class of problems. The FSI framework used in the current work was originally developed in [37, 38]. The aerodynamics formulation makes use of the FEM-based moving-mesh ALE-VMS technique combined with weakly enforced essential boundary conditions. The former acts as a turbulence model, while the latter relaxes the mesh size requirements in the boundary layer without sacrificing solution accuracy.

The software runs on Linux operating systems and has several functionalities that are useful to the user

As discussed in Tennekes and Lumley, planar mixing layers are self-preserving, so that mean velocity and turbulent stress profiles become invariant with respect to horizontal distance. Hence, such profiles can be expressed in terms of local length and velocity scales. The self-preservation hypothesis is consistent with numerous observations that turbulent mixing close to the canopy–atmosphere interface exhibits a number of universal characteristics. Among these universal characteristics are the emergence of friction velocity and canopy height as representative velocity and length scales. In addition to exerting drag on the airflow aloft, canopy foliage acts as a source for many passive, active, and reactive scalar entities such as heat, water vapor, ozone, and isoprene, resulting in a "scalar mixing layer." A scalar mixing layer is a generalization of a thermal mixing layer, which is defined as a plane layer formed between two coflowing streams of the same longitudinal velocity but different temperatures. Unfortunately, theoretical and laboratory investigations demonstrate that pure thermal mixing layers are not self-preserving. In fact, laboratory measurements of grid-generated turbulence in a thermal mixing layer by LaRue and Libby and Ma and Warhaft illustrate that computations utilizing self-preservation assumptions overestimate the maximum measured heat flux by more than 25%. Hence, instabilities responsible for vertical momentum transport may not transport scalars in an analogous manner. The complexities of mean scalar concentration profiles within forested systems, the dissimilarity in scalar and momentum sources and sinks, the large atmospheric stability variation close to the canopy–atmosphere interface, and the complex foliage distribution with height all suggest a need to further investigate the applicability of Raupach et al.'s ML analogy to scalar mass transport.

The objective of this study is to investigate whether the characteristics of active turbulence, typically identified from vertical velocity statistics, can be extended to mass transport at the canopy–atmosphere interface. In Raupach et al., particular attention was devoted to eddy sizes responsible for the generation of coherency and spectral peaks in the vertical velocity. Here, eddy sizes responsible for cospectral peaks of scalar fluxes and their relationship to active turbulence are considered. Active turbulence is identified from an orthogonal wavelet decomposition that concentrates much of the vertical velocity energy in a few wavelet coefficients. The remaining wavelet coefficients, associated with "inactive," "wake," and "fine-scale" turbulence, are thresholded using a Lorentz wavelet approach advanced by Vidakovic and Katul and Vidakovic. Wavelet spectra and cospectra are also used to investigate the characteristics of active turbulence for the two stands and for a wide range of atmospheric stability conditions. Much of the flow statistics derived by Raupach et al. in the time domain are also extended to the wavelet domain. Since canopy sublayer turbulence is intermittent in the time domain with defined spectral properties in the Fourier domain, orthonormal wavelet decompositions permit a simultaneous time–frequency investigation of both flow characteristics.

The use of utility All-Terrain Vehicles (ATVs) as working machines adds a heavy burden to the American public health system. According to data from the 2019 National Electronic Injury Surveillance System, over 95,000 emergency department (ED) visits were due to an ATV-related incident. Around 36.8% of those ED visits involved youth younger than 18 years old, and 15.3% of the incidents happened on farms or ranches. Indeed, using utility ATVs in the farm setting is extremely dangerous for youth; ATVs are one of the most frequently cited causes of incidents among farm youth.

ATVs have three or four low-pressure tires, a narrow wheelbase, and a high center of gravity. Due to safety concerns, the production of three-wheelers ceased in the United States in 1987. Three-wheelers were known to be even more prone to rollovers than four-wheeled ATVs. Utility ATVs and sport models have several design differences. Utility models have higher ground clearance, stronger torque for hauling and towing, rear and front racks for carrying loads or mounting equipment, a hitch to pull implements, and heavier weights. Accordingly, utility ATVs are more suitable and more commonly used for tasks in agricultural settings. Therefore, in this study, agricultural ATVs are defined as utility ATVs used on farms and ranches. Agricultural ATVs have heavy weights and fast speeds that require complex maneuvering. Youth's physical capabilities may not be sufficient to perform those complex maneuvers correctly. In fact, many studies have shown that youth are more vulnerable to injuries than adults because of their less developed physical capabilities and psychological and behavioral characteristics, which likely affect their ability to safely operate agricultural vehicles. Furthermore, previous studies have shown that ATV–rider misfit is another important risk factor. Despite compelling evidence showing that utility ATVs are unsuitable for youth, the most popular guidelines for ATV–youth fit disregard the rider's physical capabilities. Instead, those recommendations are based on the rider's age, the vehicle's maximum speed, the vehicle's engine size, or a farm machinery training certificate. For instance, youth as young as 14 can operate utility ATVs while employed on non-family-owned farms if they receive training through an accredited farm machinery safety program, such as the National Safe Tractor and Machinery Operation Program (NSTMOP).
The NSTMOP training includes tractor and ATV education, where students must pass a written knowledge exam and a functional skills test to receive a certificate.

Nevertheless, programs such as the NSTMOP lack appropriate coverage of specific ATV-related subjects, such as active riding and the physical match between ATVs and youth. If the ATV does not fit the rider, they will likely be unable to properly operate the ATV's controls, which increases their chance of incidents and consequently may lead to injuries and fatalities. In addition, the traditional guidelines adopted to fit ATVs to youth are inconsistent in evaluating their preparedness to ride. The suggested fitting criteria are subject to variances in state law and lack scientifically based evidence. While some recommendations based upon the riders' physical capabilities exist, the adoption of these recommendations has not gained attention because they are not comprehensive and lack quantitative and systematic data. Recommendations based on riders' physical capabilities appear to provide a better foundation to determine if the machine is suitable for the rider. Therefore, there is a need to evaluate youth–ATV fit based on the riders' physical capabilities. Since 95% of all ATV-related fatalities involving youth between 1985 and 2009 included agricultural ATVs, the purpose of this study is to evaluate the mismatches between the operational requirements of utility ATVs and the anthropometric characteristics of youth. It has been hypothesized that youth are mainly involved in ATV incidents because they ride vehicles unfit for them. This study evaluated ergonomic inconsistencies between youth's anthropometric measures and utility ATVs' operational requirements. The ability of youth to safely operate ATVs was evaluated through computer simulations that comprised 11 fit criteria and male and female youth of varying ages and height percentiles operating 17 utility ATV models. Youth–ATV fit was analyzed through virtual simulations and was carried out in five steps. First, 11 guidelines were identified for the fit of youth and ATVs.
The second step consisted of identifying a database containing anthropometric measures of youth of various ages, genders, and height percentiles. The third step consisted of collecting the dimensions of 17 ATV models to create a three-dimensional representation of them. The fourth step consisted of using SAMMIE CAD and Matlab to evaluate whether the youth's anthropometric measures conform to the guidelines identified in step one. Lastly, the results of the virtual simulations were validated in field tests with actual riders and ATVs.

The fit criteria provide movement-restraint thresholds that check if the rider can safely reach all controls and perform active riding, which requires the operator to shift their center of gravity to maintain the vehicle's stability, especially when turning or traveling on slopes. Maintaining a correct posture is essential because, otherwise, the rider's ability to control the vehicle is compromised, which puts them and potential bystanders at risk. The reach criteria considered in this study were selected based on the recommendations of the following institutions: National 4-H Council, U.S. Consumer Product Safety Commission, Intermountain Primary Children's Hospital, and Farm and Ranch eXtension in Safety and Health Community of Practice. Disregarding overlaps, these guidelines consisted of 11 anthropometric measures of fit, which are presented in Table 1.

Human mockups were developed in SAMMIE CAD. This computer program allows users to create customized virtual humans based on eight anthropometric dimensions, as shown in Fig. 1a. In total, 54 youth mockups were created, a combination of two genders, nine ages, and three body-size percentiles in height. The age range was selected because most youth start operating farm machinery at 8 years old, and most ATV-related crashes occur with riders younger than 16 years old. Two adult mockups were also created to establish a baseline for comparisons.
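To illustrate how a reach criterion of this kind can be checked programmatically, the sketch below tests a single hypothetical handlebar-reach rule using the law of cosines on the shoulder-elbow-grip triangle. All dimensions, the 160° elbow-extension threshold, and the mockup values are illustrative assumptions; the study itself evaluated 11 criteria in SAMMIE CAD, not this code.

```python
from dataclasses import dataclass
import math

@dataclass
class RiderMockup:
    """A few anthropometric inputs (hypothetical values, in cm)."""
    upper_arm: float
    forearm_hand: float
    shoulder_height_seated: float

def handlebar_reach_ok(rider: RiderMockup, handlebar_forward: float,
                       handlebar_height: float, seat_height: float,
                       max_elbow_extension_deg: float = 160.0) -> bool:
    """One illustrative fit check: can the seated rider grasp the handlebar
    without extending the elbow past a movement-restraint threshold?"""
    # Straight-line distance from shoulder to handlebar grip.
    dz = (seat_height + rider.shoulder_height_seated) - handlebar_height
    d = math.hypot(handlebar_forward, dz)
    a, b = rider.upper_arm, rider.forearm_hand
    if d >= a + b:            # grip unreachable even with a straight arm
        return False
    # Elbow angle required to span distance d (law of cosines).
    cos_elbow = (a**2 + b**2 - d**2) / (2 * a * b)
    elbow_deg = math.degrees(math.acos(max(-1.0, min(1.0, cos_elbow))))
    return elbow_deg <= max_elbow_extension_deg

# Hypothetical 10-year-old mockup vs. a hypothetical utility-ATV geometry.
youth = RiderMockup(upper_arm=28.0, forearm_hand=36.0, shoulder_height_seated=47.0)
print(handlebar_reach_ok(youth, handlebar_forward=58.0,
                         handlebar_height=95.0, seat_height=85.0))
```

With these made-up dimensions the check fails for the youth mockup, mirroring the kind of youth-ATV mismatch the study quantifies across its full criteria set.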

The anthropometric measures used as input to SAMMIE CAD were retrieved from the database of Snyder et al., which includes measurements from 3,900 subjects from 2 to 18 years of age for both genders. The adopted anthropometric measures were based on the mean values of groups of subjects with the same age, gender, and height. One of the required inputs was not available in the database used for this study. Therefore, the missing input was computed using the available data: the seated shoulder height was calculated by subtracting the head and neck length from the seated height.

In total, 17 utility ATV models were evaluated. Selected models consisted of vehicles of varying engine sizes from the most common ATV manufacturers on U.S. farms. General descriptive variables such as manufacturer, model, series, engine capacity, drivetrain, transmission, and suspension type were recorded. ATV mockups were developed based on the spatial coordinates of selected ATV features. An original attempt to record spatial coordinates of ATV features consisted of using photogrammetry, a technique in which several pictures of an object are taken from various angles and then processed to create a 3-D model. Nevertheless, this technique proved inefficient, as initial trials were time-consuming and the results had unsatisfactory accuracy. A second attempt consisted of using a virtual reality (VR) tracking system. This alternative proved fast to implement with excellent accuracy; hence, this technique was selected and is presented in the following section.

The VR tracking system utilized in this experiment consisted of two controllers and two infrared laser emitter units (lighthouses). The system allows the user to move in 3-D space and use motion-tracked handheld controllers to interact with the environment. The system uses the lighthouses to shoot horizontal and vertical infrared laser sweeps that are detected by photodiodes positioned around the controller's surface.
The position and orientation of the controllers are calculated from the difference in time at which each photodiode is hit by the laser. By placing the controller over selected vertices of ATV features, it was possible to record their spatial coordinates, which allowed the development of the 3-D ATV mockups. A custom program was developed to calibrate the system and to log and manipulate data. This program was initially retrieved from Kreylos and then modified to meet the specific needs of the present study. Examples of these functionalities are a 3-D grid, which allows for real-time visualization of labeled points, and a measuring tool. A probe was custom-manufactured and attached to the controllers to ease the calibration process and data collection. The probe was made of metal and had a rounded tip, which made it wear-resistant and prevented it from damaging the ATVs. The measurements were collected inside a tent covered by a white rooftop that reduced the interference of solar rays in the communication between the lighthouses and the photodiodes in the controllers. In total, 38 points were collected per ATV. The points were selected to obtain an efficient representation of all selected ATV controls and additional features that were used to assist the virtual simulations, such as the seat and the footrests. After data filtering, the data were processed in SAMMIE CAD for a 3-D representation of the evaluated vehicle, as shown in Fig. 2.

ATV-rider fit was evaluated through SAMMIE CAD and Matlab. Fit criteria 4, 5, 6, 7, 8, 9, and 10 were evaluated in SAMMIE CAD because their assessment involved complex interactions between riders and ATVs, such as measuring the angle of the rider's knee while riding.
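The lighthouse sweep-timing principle described above can be sketched in a few lines: with a constant-speed rotating laser, the delay between the sync pulse and a photodiode hit maps linearly to a bearing angle, and two bearings from known lighthouse positions locate a point. The 60 Hz sweep rate and all coordinates below are illustrative assumptions, not values from the study, and the sketch is 2-D rather than the full 3-D pose solve.

```python
import math

SWEEP_HZ = 60.0   # assumed rotor frequency of each base station's laser sweep

def sweep_angle(t_hit: float, t_sync: float) -> float:
    """Bearing of a photodiode: the sweep rotates at constant speed, so the
    delay between the sync pulse and the laser hit maps linearly to angle."""
    return 2.0 * math.pi * SWEEP_HZ * (t_hit - t_sync)

def intersect_bearings(p1, theta1, p2, theta2):
    """Locate a point from two lighthouses' bearings (2-D triangulation)."""
    # Ray i: p_i + s * (cos theta_i, sin theta_i); solve for the crossing.
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    s = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + s * d1[0], p1[1] + s * d1[1])

# Hypothetical timings: a diode at 45 deg from lighthouse A, 135 deg from B.
a = sweep_angle(t_hit=1.0 / (8 * SWEEP_HZ), t_sync=0.0)   # 1/8 turn -> 45 deg
b = sweep_angle(t_hit=3.0 / (8 * SWEEP_HZ), t_sync=0.0)   # 3/8 turn -> 135 deg
print(intersect_bearings((0.0, 0.0), a, (2.0, 0.0), b))   # approx (1.0, 1.0)
```

The real system repeats this with horizontal and vertical sweeps and many photodiodes, which is what also yields the controller's orientation.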

Cannabis testing regulation is strict compared to tobacco, another inhalable crop

Using the R software and the zip codes of active licensed testing labs and distributors, we generated a map that shows the geographical locations of licensed labs and distributors in mid-2019. We have no information about which labs served which distributors. However, we expect that labs are better able to compete for nearby distributors because they would have lower transportation time and cost and may be more likely to have closer business relationships. In order to estimate average transport costs from distributors to labs, we randomly assigned distributors located within a 160-mile radius to each lab. Based on 2019 data, this was the longest travel distance from a distributor to the nearest lab. This travel-distance radius ensures that each distributor in the sample is covered by at least one laboratory. Based on the annual number of samples that we estimate each lab is able to test, we estimate the share of total testing done by small labs, medium labs, and large labs. We then estimate the number of distributors per lab. In each of our 1,000 simulations, 70% of the 49 licensees with specific locations were randomly chosen to represent small-scale labs, 20% were randomly chosen to represent medium-scale labs, and 10% were randomly chosen to represent large-scale labs.

The minimum capital investment in testing equipment needed to satisfy regulations is substantial. We estimate that in small labs, capital investment in equipment is about $1.1 million; in medium-sized labs, about $1.8 million; and in large-scale labs, about $2.8 million. These capital costs, amortized over a 10-year time span with a 7.5% rate of depreciation and interest, represent less than 15% of total annual expenses. Annual operating costs range from $1.4 to $2.2 million for small labs, $2.7 to $3.7 million for medium-sized labs, and $6.2 to $8.1 million for large labs.
Consumables are the largest share of total annual costs in large-scale labs, whereas labor is the largest share of costs in small-scale labs. In medium-scale labs, consumables and labor have about equal shares of annual costs. Different-sized labs differ in their capacity and efficiency.
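The amortization claim above can be checked with a standard capital-recovery factor. The sketch assumes the 7.5% combined depreciation-and-interest rate acts as a single annuity rate over the 10-year horizon, and conservatively compares against the low end of each annual operating-cost range quoted above.

```python
# Annualized capital cost via the capital recovery factor (CRF),
# assuming the text's 7.5% depreciation-and-interest rate is a single
# annuity rate applied over a 10-year amortization horizon.
def capital_recovery_factor(rate: float, years: int) -> float:
    """CRF = r(1+r)^n / ((1+r)^n - 1): annuity repaying one unit of capital."""
    growth = (1.0 + rate) ** years
    return rate * growth / (growth - 1.0)

crf = capital_recovery_factor(0.075, 10)

# Equipment investment and low-end annual operating cost per lab size (from the text).
labs = {
    "small":  {"capital": 1.1e6, "annual_cost": 1.4e6},
    "medium": {"capital": 1.8e6, "annual_cost": 2.7e6},
    "large":  {"capital": 2.8e6, "annual_cost": 6.2e6},
}

for name, lab in labs.items():
    annualized = crf * lab["capital"]
    share = annualized / lab["annual_cost"]
    print(f"{name}: annualized capital ~ ${annualized:,.0f} ({share:.1%} of annual costs)")
```

Even against the low-end operating costs, the annualized capital charge stays below 15% of annual expenses for all three lab sizes, consistent with the statement above.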

Large-scale labs test about four times as much cannabis per hour as medium labs, and more than 10 times as much as small labs. The cost advantage of large testing labs comes from a more efficient use of inputs such as lab space, equipment, and labor. Table 6 summarizes the average estimated testing capacities, annual costs, and testing cost per sample for each of the three lab-size categories. Costs of collection, handling, and transport also vary by lab size. As of April 2018, the longest distance between a lab and a distributor in California was about 156 miles. Fig 3 shows the cost of collection, handling, and transportation per sample for distances between labs and distributors of less than 156 miles. As expected, the longer the distance, the higher the sampling cost. Large labs have relatively low sampling costs even at long distances. The highest possible sampling cost we assume for small labs is about $35 per sample if the distributor is located 156 miles away. On average, costs of collection, handling, and transportation represent a small share of total lab costs per sample. Fig 4 shows the distribution of full testing cost per sample from 1,000 Monte Carlo simulations assuming 49 labs. Variability of the cost per sample within small labs is high, with the highest and lowest cost within that group differing by $463. The difference between the highest and lowest costs in large labs is $88, with a lowest cost per sample of about $273. The average full cost per sample tested is about $313 for large labs, $537 for medium labs, and about $778 for small labs. Large cost differences per test and per batch document the large scale economies and differences in operational efficiency across labs of different sizes. The aggregate amount of cannabis flowing through licensed labs in 2019 remains small relative to the amounts anticipated in the future. That means labs that anticipate growth operate well below capacity.
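The distance dependence in the sampling cost can be illustrated with a simple cost identity: a round trip incurs mileage and driver-time charges that are spread over the samples collected, plus a per-sample handling charge that is distance-independent. Every parameter below (samples per trip, wage, speed, mileage rate, handling time) is a hypothetical placeholder, not one of the paper's estimates; the point is only that cost grows roughly linearly with distance.

```python
def sampling_cost_per_sample(distance_mi: float, samples_per_trip: int = 15,
                             wage_per_hr: float = 25.0, speed_mph: float = 50.0,
                             mileage_rate: float = 0.58,
                             handling_hr_per_sample: float = 0.25) -> float:
    """Collection, handling, and transport cost per sample for one round
    trip between lab and distributor (all parameters are hypothetical)."""
    round_trip_mi = 2.0 * distance_mi
    travel = round_trip_mi * mileage_rate + (round_trip_mi / speed_mph) * wage_per_hr
    handling = handling_hr_per_sample * wage_per_hr  # per-sample, distance-free
    return travel / samples_per_trip + handling

for d in (10, 80, 156):
    print(f"{d:>3} mi: ${sampling_cost_per_sample(d):.2f} per sample")
```

Collecting more samples per trip flattens the distance term, which is one way larger labs can keep sampling costs low even at long distances.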

Substantial scale economies suggest that, as the market settles, the smallest labs must either expand to use their capital investment more fully, leave the industry, or provide some specialized services to distributors that are not accounted for in the analysis presented here. Simply put, the average cost differences shown in Table 6 or the simulated ranges displayed in Fig 4 should not be understood as a long-run equilibrium in the cannabis testing laboratory industry.

In 2018, the first year of mandatory testing enforcement, according to official data published by the California Bureau of Cannabis Control and posted publicly on its website, failure rates in California averaged about 5.6%. Failure rates for the first seven months of 2019, the second year of the testing regime, averaged 4.1%. We assume a 4% failure rate for the current market in California. By comparison, in Washington State in 2017, the second year after testing began, 8% of the total samples failed one or more tests. The Colorado Marijuana Enforcement Division reported that during the first six months of 2018, 8.9% of batches of adult-use cannabis failed testing, with infused edibles and microbial tests for flower accounting for the most failures. Batch size significantly affects the per-pound testing cost of cannabis marketed, especially when batch size is smaller than 10 pounds. Fig 5 shows the costs per pound of cannabis marketed coming from different sizes of flower batches using 0%, 4%, and 8% rejection rates. As rejection rates increase, the differences between the costs per pound of testing different batch sizes decrease.
For example, given a 0% rejection rate, the cost of testing per pound of cannabis marketed from a one-pound batch is about 27 times higher than the cost for a 48-pound batch; on the other hand, given an 8% rejection rate, the cost of testing per pound of cannabis marketed from a one-pound batch is only seven times higher than the cost for a 48-pound batch.

In this paper, we use a simulation model to estimate the costs per pound of mandatory cannabis testing in California.

To do this, we make assumptions about the cost structure and estimate the testing capabilities of labs in three different size categories, based on information collected from market participants across the supply chain. For each lab, we estimate the testing cost per sample and its share, based on testing capacity, of California's overall testing supply. We then estimate a weighted average of the cost per sample and translate that value into the cost per pound of cannabis that reaches the market. We use data-based assumptions about expected rejection rates in the first and second rounds of testing, pre-testing, and the remediation or processing of samples that fail testing. Our simulations rely on information collected from several sources, including direct information from testing labs in California, price quotes from companies that supply testing equipment, interviews with cannabis testing experts, data on testing outcomes for cannabis and other agricultural products from California and other states, data on pesticide detection in California crops, and data on average wholesale cannabis batch sizes. The costs needed to start a testing lab that meets California regulations depend on the scale of the lab. As lab scale rises, testing capacity rises faster than input costs, so average costs fall with scale. We find that a large lab has four times the total costs of a small lab but 10 times the testing capacity, in part because large labs are able to use their resources more efficiently. Testing cost per pound of cannabis marketed is particularly sensitive to batch size, especially for batch sizes under 10 pounds. Testing labs report that batch size varies widely. The maximum batch size allowed in California is 50 pounds, but many batches are smaller than 15 pounds.
We assume an eight-pound average batch size in the 2019 California market, but we expect that the average batch size will increase in the future as cultivators become larger and more efficient and take advantage of the opportunity to save on testing costs. Testing itself is costly, but the losses inflicted by destroying cannabis that fails testing are a major component of overall costs. Low or zero tolerance levels for pesticide residues are the most demanding requirement and result in the greatest share of safety-compliance testing failures. Cannabis standards are very tight compared to those for food products in California. A significant share of tested samples from California crops have pesticide residues that would be over the tolerance levels established for California cannabis. Some foods that meet pesticide tolerances established by the California EPA may be combined with dried cannabis flowers to generate processed cannabis products. Pesticide residues coming from the food inputs may generate detection levels of pesticides over the tolerance levels set by cannabis law and regulation, even if the inputs are otherwise compliant as food products. Tobacco has no pesticide tolerance limits because it is considered an inedible crop used for recreational purposes. Cannabis has multiple pathways of intake, such as edibles, inhalables, and patches, and may also be prescribed for people with a health condition searching for alternatives to traditional medicine. Some labs report that when samples barely fail one test, they have a policy of re-testing that sample to reduce the probability of false positives.

Some labs have reported up to 10% variation in test results from the same sample. Some labs indicate that about 25% of samples need to be re-tested to be sure that results are accurate. Such concerns have been widely reported. In July 2018, some producers voluntarily recalled cannabis products after receiving inconsistent results for contaminant residues from different laboratories, and some California labs have also been sanctioned by the Bureau of Cannabis Control for failing state audits on pesticide residue tests. A major issue for the legal, taxed, and licensed cannabis market is competition with cannabis marketed through the untaxed and unlicensed segment. Higher testing costs translate into higher prices in the licensed segment. Safety regulations and testing may improve the perceived safety and quality of cannabis in the licensed segment, thus adding value for some consumers. However, price-sensitive consumers move to the unlicensed segment when licensed cannabis gets too expensive. A useful avenue for further research is to investigate cannabis testing regulations and standards across states to assess implications for consumer and community well-being and competition with unlicensed cannabis. Compared with other agricultural and food industries, the licensed cannabis industry in California has relatively little data. Banking is still done in cash, and sources of government financial data are less available for cannabis than they are for other industries. As the licensed cannabis segment develops, we expect increased access to data on the market for testing services, including on prices, quantities, and batch sizes. Data from tax authorities, the track-and-trace system, and the licensing system will then help clarify the costs and implications of mandatory cannabis testing.

Industrial hemp has been an agronomically important crop since 2700 BC in China.
Today, it serves a purpose in a variety of different industries, such as pharmaceuticals, nutraceuticals, textiles, composite materials, biofuels, foods, cosmetics, and hygiene products. Hemp is one of humanity's earliest domesticated plants, going back to Neolithic times in parts of East Asia. Hemp is the non-psychoactive form of Cannabis, differentiated from marijuana only by having less than 0.3% tetrahydrocannabinol (THC) concentration in dry mass. In 1970, the Controlled Substances Act was passed in the United States, which stated that all Cannabis sativa, psychoactive or not, was a Schedule 1 drug with "high abuse potential with no accepted medical use; medications within this schedule may not be prescribed, dispensed, or administered". The passage of the Controlled Substances Act forbade any individual from researching or growing Cannabis in any form, including hemp, and it was not until forty-eight years later, with the passage of the 2018 Farm Bill, that researchers and growers could again study and grow hemp. After 48 years of absence from the scientific literature, the renewed interest in hemp as a crop with high agronomic value has stimulated significant research activity.

Each boom and bust cycle in market prices thus results in a wave of foreclosures and farmland consolidation.

One can see in this case that several of the candidate CPFs are underutilized. This underutilization indicates that these CPFs are not the best candidates for any of the TCs when other choices are available. However, these CPFs may still be good candidates when only a limited number of CPF sites are available due to cost or other constraints.

In order to evaluate the performance of the solver used to generate the results provided in Figure 12 and Table 14, the program is re-run with the integer constraint relaxed. The goal is to assess whether the same solution would be generated, and to determine whether performing the optimization without the integer constraint would be more efficient. The integer solution was once again generated using the Matlab function intlinprog with its default settings. The solution without integer constraints was generated using the Matlab function linprog with its default settings. It is noted that while the intlinprog function used a dual simplex algorithm, the linprog function defaulted to an interior-point-legacy algorithm. The results are included in Figure 13 through Figure 15. Figure 13 shows the difference in the number of transactions routed to each potential CPF location when the integer constraint is used and when it is relaxed. From Figure 13 it can be seen that the difference is negligible. The total number of transactions routed is the same using either solver if the number is rounded to the nearest integer value. Figure 14 shows the change in the final solution when run without integer constraints, as a percentage of the original solution with integer constraints. The maximum difference for any of the solutions is negligible, indicating the solution quality is identical with and without the integer constraint. Finally, Figure 15 shows the percent increase in computational time for the solver without integer constraints over that with integer constraints.
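The integer-versus-relaxed comparison described above can be sketched with SciPy's HiGHS-based analogues of Matlab's intlinprog and linprog (milp and linprog). The cost matrix, demands, and capacities below are invented for illustration, not the dissertation's data; the structure (route transactions from TCs to candidate CPFs at minimum cost, subject to demand and capacity constraints) is an assumed simplification of the actual model.

```python
import numpy as np
from scipy.optimize import linprog, milp, LinearConstraint

# Hypothetical per-transaction routing costs: rows = TCs, columns = CPFs.
cost = np.array([[4.0, 6.0, 9.0],
                 [5.0, 3.0, 7.0]])
demand = np.array([70, 50])        # transactions each TC must route
capacity = np.array([60, 60, 60])  # maximum transactions per CPF

n_tc, n_cpf = cost.shape
c = cost.ravel()  # decision variable x[i, j] flattened row-major

# Demand constraints: sum_j x[i, j] == demand[i]
A_eq = np.zeros((n_tc, n_tc * n_cpf))
for i in range(n_tc):
    A_eq[i, i * n_cpf:(i + 1) * n_cpf] = 1.0

# Capacity constraints: sum_i x[i, j] <= capacity[j]
A_ub = np.zeros((n_cpf, n_tc * n_cpf))
for j in range(n_cpf):
    A_ub[j, j::n_cpf] = 1.0

constraints = [LinearConstraint(A_eq, demand, demand),
               LinearConstraint(A_ub, -np.inf, capacity)]

# Integer-constrained solve (analogue of intlinprog).
res_int = milp(c, constraints=constraints,
               integrality=np.ones_like(c, dtype=int))

# Relaxed solve (analogue of linprog); default bounds are x >= 0.
res_lp = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand)

print("integer objective:", res_int.fun)
print("relaxed objective:", res_lp.fun)
print("max per-variable difference:", np.abs(res_int.x - res_lp.x).max())
```

For transportation-style constraint matrices like this one, the LP relaxation's vertex optima are integral (total unimodularity), which is consistent with the negligible integer-versus-relaxed differences reported in Figures 13 and 14.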

The increase in computational time was typically between 50% and 100%. Given that the solutions were equivalent and the computational time for the solver with integer constraints was lower, the solver with integer constraints is used in all of the following analysis.

Conventional industrial agriculture in the US displays a number of recognizable characteristics. First, industrial agriculture is capitalist agriculture: commercially oriented and profit motivated. This is especially true for California agriculture, which never witnessed any significant history of family-scale or “yeoman” farming. Second, relations among farms are characterized by competition to increase market share and return on investment by lowering costs and increasing productivity. Thus, third, industrial agriculture displays a drive to achieve greater economies of scale through the development and application of technologies in every dimension of production. Prominent among these are: biotechnologies to improve seeds through selective breeding and genetic engineering; mechanization to reduce labor per unit output, in the field and in processing, storage, and distribution; chemical pesticides to manage insects, weeds, diseases, and other pests; application of organic and especially synthetic fertilizers to augment and replenish soil nutrients; irrigation in areas of limited or irregular rainfall, drawing on groundwater resources and/or large investments in dams, aqueducts, and canals; and scientific research and development, both publicly and privately funded, to accelerate innovation in all of the above arenas. California has long been at the forefront of all of these technological frontiers, reflecting its unusually diverse, productive, and profitable agricultural sector.
Acting together, these traits enable and perpetuate a set of complex dynamics that are widely, if unevenly, evident across the agricultural systems of the US and much of the developed world, and increasingly evident in the developing world as well.

Market competition in agriculture, as compared with other sectors, tends to be particularly severe due to the large number of farms, their geographical dispersion, limited options for product differentiation, and the perishability of many crops, which inhibits farmers’ ability to choose when to sell. Pronounced consolidation among downstream buyers and processors of farm outputs, especially in recent decades, exacerbates these forces further. And because they are a primary component of real wages, cheaper consumption goods are structurally advantageous to capital in general. On-farm, the pressure to reduce labor costs is intense, leading to chronic–and often violent–struggles with farm laborers in sectors where mechanization has not made inroads, including large portions of California’s major agricultural regions. Meanwhile, the imperative to invest in the latest technologies to keep up with ever-increasing productivity induces heavy reliance on credit, even as the aggregate result is chronic production surpluses, which depress prices and narrow profit margins further. In the US, average farm size increased across the 20th century, while midsized farms declined and reliance on off-farm income increased for most farm households. These dynamics give rise to two additional features of industrial agriculture that are particularly relevant to cannabis in California today, as legalization releases producers from the need to hide their operations in remote locations. The first is the tendency toward geographical differentiation, as farmers seek to match the crops they grow to the characteristics of specific places and locations: the soils, hydrology, climatic conditions, water supplies, and access to markets for labor, inputs, and outputs.

Differentiation can manifest at various scales depending on biophysical variation, from large regions such as the Corn and Wheat Belts of the Midwest to micro-regional specialization in California’s Central Coast. With micro-regional specialization in the cannabis industry comes the potential for product differentiation via branding to enhance demand for products from particular regions. The second relevant feature of industrial agriculture is a chronic tendency, under the pressure of market competition, to over-exploit the ecological resources available at any given site through extensification and intensification. State processes to govern cannabis production and trade may also create incentives for cannabis farmers to extensify and intensify production, transforming the scale, location, and environmental impacts of industry dynamics. Until 1996, cannabis production in California occurred outside of legal systems. In 1996, California voters approved the Compassionate Use Act (CUA), decriminalizing use and cultivation of cannabis for medical purposes. The State implemented the CUA in 2004 legislation but did not thoroughly address cultivation, leaving cities and counties to create and enforce guidelines individually and, thereby, unevenly. Illegal under federal law, production in California remained quasi-legal. Federal enforcement efforts to eradicate cannabis production continued throughout this period for California farms, medical and otherwise, though they increasingly prioritized larger grows, especially ecologically impactful ones on public lands. In 2015, California passed a package of bills comprehensively regulating medical cannabis production, distribution, and use, known as the Medical Cannabis Regulation and Safety Act. In 2016, California voters passed the Adult Use of Marijuana Act, which legalized adult recreational use and production of cannabis, and regulation of recreational production began in January 2018.
Though cannabis cultivation has a long history in the US, counter-cultural migrants initiated more widespread production in the late 1960s. In line with “back-to-the-land” ethics, cultivation in Northern California at that time was primarily small, off-grid, without chemical inputs, and for non-market consumption. After adopting new horticultural techniques, acquiring seeds suited to US latitudes, and building consumer circuits, cultivators began producing more regularly for markets. Prices steadily increased through the early 1980s but spiked with the intensification of governmental eradication programs. Under full-throttle prohibition, higher rewards for cultivation incentivized risk-taking and more intensive, profit-focused, environmentally impactful production practices, such as indoor growing, “trespass” and public-land cultivation, and increased violation of informal community norms around environmental care. Farm siting focused on secrecy to avoid enforcement action, directing cultivation to remote, rural watersheds on both public and private lands, where plants were generally grown outdoors and sometimes associated with unpermitted water diversion, native vegetation clearing, pollution from pesticides and herbicides, or trash dumping associated with informal residences for temporary cultivator-guards. These locations, many of which were concentrated on California’s North Coast, overlap with areas of high conservation value, such as forested watersheds that harbor rare or U.S. Endangered Species Act-listed species like coho salmon, steelhead trout, and Pacific fisher. Fear of detection placed informal limits on farm size, especially on private property, where plants could be tied to owners and property seizure and arrest were real threats.
Further, price inflation due to prohibition also allowed some farmers to experiment with cost- and labor-intensive, ecologically conscious growing methods, including permacultural, organic, and pesticide-free techniques. Informal self-regulation on private lands also took place during this time, as some farmers abided by local norms surrounding farm size, location, and cultivation techniques to reduce the incidence of detection by, or conflict with, neighbors.

The allowance of medical cannabis cultivation after 1996 affected production dynamics. Legal-medical protections, particularly physician recommendations, made it less risky to cultivate, thus fueling an expansion in the number of cultivators. Collective gardens enabled multiple patients to grow at one site rather than many, or growers to cultivate for multiple patients and distribute through medical dispensaries and collectives, thus increasing garden size. “Reasonable compensation” for medically designated cultivators made medical cultivation economically viable and, for some, made it possible to resist pressures to expand scale. Pressure to produce “medical-grade” cannabis, especially with the growing utilization of quality and safety testing, discouraged use of pesticides and encouraged organic inputs, though agricultural practices were far from uniform. All this said, there was significant mixing between medical and underground-market production during this period. Often, this meant cultivators used medical recommendations to produce large amounts of cannabis, only a portion of which went to verified patients, thus alleviating pressure to produce medical-grade product. Between 2008 and 2010, garden sizes increased, often to 99 plants, just below the 100-plant threshold at which federal mandatory minimum sentences are triggered–an increase at least partly attributable to the Kelly decision, which struck down California’s efforts to limit plant numbers. Expanded production placed downward pressure on prices, resulting in price instability that only increased between 2012 and 2016. Characterized as a “green rush,” farmers began producing significantly more cannabis in even larger gardens, possibly in anticipation of full legalization and relatively less enforcement.
For example, between 2012 and 2016 in Humboldt and Mendocino counties, the number of cannabis farms increased by 56%, the number of plants increased by 183%, and the total area under cultivation increased by 91%, with significant expansion in environmentally sensitive areas such as those on steep slopes or near creeks holding salmon and steelhead. These environmental concerns were a primary focus of state cultivation regulations and county ordinances passed since 2015. In 2015, the state first began targeted regulatory programs for cultivation in Northern California, but comprehensive statewide regulation of production was not implemented until 2018. Prior to 2018, the first farms seeking to enter a regulated cannabis industry in Northern California remained relatively small, largely due to county ordinances that restricted size. With the implementation of adult-use regulations in 2018, a statewide licensing program implemented by the California Department of Food and Agriculture opened opportunities for localities to develop cannabis ordinances and welcome–or ban–a new regulated cannabis industry. By early 2019, industrial-scale commercial cannabis appears to have arrived in California. A cursory review of cultivation license data from the California Department of Food and Agriculture shows that large-scale operations have already been proposed or initiated in counties representing new frontiers for the cannabis industry. The emergence of these farms may have environmental implications, both directly through their local impacts and indirectly as competitors for smaller operations on the North Coast. In some respects, cannabis cultivation regulations are substantially stricter than requirements for traditional agriculture. These regulations include mandatory summer water-diversion forbearance, extensive site maintenance standards, exclusive use of organic amendments, and mandatory product testing with certified laboratories.
Local regulations, such as zoning restrictions, vary widely in their requirements between jurisdictions and likely play a significant role in siting decisions for legal and illicit growers in the state. In traditional cannabis-producing communities, adherence to new production criteria can require overhauls of existing operations, limit new development, and also incentivize non-compliance.

FSU NORML had several campaigns and events during my two years at the school.

By having a stake in the struggle I am writing about, I am following in a long line of social analysts who present an engaged view of the social problems they study. While taking a normative position on drug policy precludes me from any pretense of “values-free” sociology, I do not recuse myself from the goal of presenting as objective a picture as possible of drug policy reform and medical marijuana. I first became conscious that people and organizations were seeking to reform cannabis laws in 1994. At that time, I had no idea that a wider drug policy reform movement existed. I attended a “hemp rally” in Lafayette Park in Washington, D.C. on the Fourth of July and was introduced to a loosely organized group of activists and speakers who had set up tables at the event. Activists were distributing literature, compiling mailing lists, and talking to attendees. I was shocked that attendees were openly smoking cannabis within view of the White House. I was also shocked that somewhat formal-looking organizations were in attendance. This small act of civil disobedience was remarkable to me for several reasons: it was collective, it was fun, and I felt like I was part of something bigger than myself. The police did not arrest anyone, despite the rampant law breaking that was going on. During the event, attendees transformed cannabis smoking from a private act of criminality into a public statement of defiance. This experience opened my eyes to the political dimensions of drug use and to the existence of a collective challenge to drug policy. While attending college at the University of Virginia, my consciousness of the political ramifications of drug use and drug policy expanded greatly. I went to National Organization for the Reform of Marijuana Laws (NORML) meetings and learned about the consequences of drug use and policy from the experiences of several friends.

During the year before I arrived at the school, the DEA had conducted a joint operation with local, state, and university police that targeted LSD users on campus. In a series of sting operations, undercover police purchased LSD from several college students. After meeting some of the young men who had been sent to prison, and another friend who was arrested for possession of ecstasy, I became increasingly enraged that my otherwise law-abiding friends had served prison time for using psychedelic drugs to explore their own consciousnesses. When I attended several Grateful Dead shows in the spring of 1995, I witnessed the DEA’s efforts to arrest people for LSD and read about sting operations in Rolling Stone and the local papers in the cities where the Grateful Dead was playing. The highly criminal status of cannabis, ecstasy, and LSD was a puzzle to me, one that continues to motivate my efforts to understand drug policy and how it changes. As a graduate student in Criminology and Criminal Justice, I joined the Florida State University chapter of NORML. As a member of this active chapter, I became familiar with the variety of tactics and approaches that drug policy reform organizations use. One area of concern for our chapter was a 1998 law that denied financial aid benefits to college students who had been convicted of drug offenses. One founding member of FSU NORML, Chris Mulligan, went on to found an organization, the Coalition for Higher Education Act Reform, that focused exclusively on changing this law. The chapter was very active and had success with outreach. After forming the first NORML chapter at a public university in the state, it helped to found NORML chapters at many other public colleges in the state, including the University of Central Florida, Florida Atlantic University, and the University of South Florida.

Additionally, the Florida State University chapter served as the launching platform for the non-college-affiliated chapter, Florida NORML. In 2002, our chapter attempted to pass a city-level initiative that would make enforcement of marijuana law the lowest law enforcement priority in Tallahassee. Similar initiatives have been passed in several other cities across the country. Most notably, Ann Arbor, Michigan was the first city to pass such a measure, in 1973. The city of Berkeley, California passed similar measures in 1972 and 1978. Although numerous states passed decriminalization bills in the 1970s, city-level initiatives were largely abandoned until the late 1990s and not used in earnest until the early 2000s. Trying to get such a measure on the ballot in Tallahassee, Florida, however, was an entirely different prospect. Unlike California and Michigan, Florida has been one of the least progressive states with regard to drug policy. Although our group gathered the requisite signatures to get the initiative on the ballot and worked with an attorney to ensure that the initiative would not violate the city’s constitution, the hostile city attorney single-handedly quashed the measure on the grounds that it violated the city constitution. This was my first experience of the state acting to shut down a legally available avenue to drug policy reform. Despite this setback, our chapter would persevere and have success on other fronts. We organized two campus “hemp rallies” that featured numerous speakers from the marijuana law reform movement, tables staffed by representatives of various organizations, and musicians.
One symbolically significant action occurred at a community, “town hall”-style meeting entitled “United We Stand Against Drugs.” The meeting’s organizers presented it as a panel discussion and community forum.

Additionally, it was a recruitment event for the Drug Enforcement Administration and local law enforcement agencies. While it was promoted as a community forum with a panel of experts, it was essentially a well-orchestrated public relations event for law enforcement and the continuation of a prohibitionist approach to drug policy. I became aware of the event after reading a placard touting it as a D.E.A. recruitment event in the lobby of the School of Criminology and Criminal Justice. I notified several NORML members, and about ten of us were able to attend. We dressed well for the event and planned to blend into the crowd, be dutifully polite, and then ask incisive questions that would undermine the positions put forth by the panel and its emcees. The event featured a structured panel discussion with an attendee question-and-answer session, tables staffed by D.E.A. recruiters, and refreshments. Two local T.V. personalities served as the event’s emcees. The panel was a veritable who’s who of Florida’s drug warriors, with two treatment workers thrown in to give the appearance that the fight against drugs wasn’t exclusively law enforcement’s battle. The panel consisted of then-D.E.A. head Asa Hutchinson, Florida’s state drug czar, the Tallahassee Chief of Police, the Leon County Sheriff, and the FSU Chief of Police. Outside the meeting room, several D.E.A. agents staffed tables featuring promotional displays for the agency and handed out D.E.A. memorabilia, including highlighters, flashlight key chains, and pens. One table, put together by the Tallahassee police, displayed a city map of Tallahassee featuring red dots marking each drug-related arrest in the city. Not surprisingly, the vast majority of the dots covered Tallahassee’s racially segregated “Frenchtown” neighborhood. I took some pictures of the display and pointed out the apparent racial disparity in arrest practices to some of my fellow NORML activists.
I also noted the apparent racial disparity to the police officer staffing the table. It soon became apparent that our group of well-dressed and well-scrubbed university students was not there to join the D.E.A. or the police but to challenge the official line they sought to present. After we left the T.P.D. table, we visited some of the D.E.A. tables and soon noticed that several suit-wearing individuals were watching and photographing us in a not-too-clandestine manner. We presumed that these people worked for the D.E.A. but were not dissuaded from going inside the event. After visiting some D.E.A. tables, I noticed that the police had removed the large folding map of the city. It was a made-for-T.V. event, but I doubt its promoters had any idea what kind of T.V. they were in for prior to our arrival. Inside the well-lit meeting room, the event’s organizers had set up a dais for the panel discussants. The room also featured a video screen and several staffed T.V. cameras. Our group of activists separated and sat scattered throughout the room. During the panel presentation, the movie screening, and the beginning of the question-and-answer session, we all remained dutifully silent and respectful. Separately, we raised our hands and got in line to ask questions of the panel. When I got my chance to speak, I took the microphone from the emcee and began to read several carefully selected points from a one-page fact sheet produced by the SMO The Sentencing Project. I highlighted the facts that we had the largest prison population of any nation, that our punitive drug policy had contributed to this huge prison population, and that ethnic minorities accounted for the vast majority of drug violation prisoners. While I was speaking, I became very animated and visibly angry.
It was very empowering to be able to look the men responsible for carrying out the drug war in the eye, and to decry the many hidden consequences of our drug policy in a public forum.

I was fairly articulate yet animated too. We had infiltrated a carefully orchestrated public relations event organized by various members of the drug control industry and done our best to expose the negative consequences of drug prohibition. This action made for great television, and the broadcast was played repeatedly on the local public access channel. By the time we left, we had been photographed numerous times by D.E.A. agents, which we took as indicative of our success. Little did I know at the time that my performance would make me somewhat of a local celebrity. In the months after the event, numerous strangers would stop me in the supermarket and say, with an approving smile, that they had seen me on T.V. This action solidified my resolve to challenge drug policy. The cavalier reaction of the panelists to our challenges and the attempts by D.E.A. agents to intimidate us served to strengthen my resolve to continue working for drug policy change.

I have organized the dissertation into six chapters and a brief conclusion. Although the six chapters fit together to detail the pre-history and history of medical marijuana in California, they are also intended to be independent analyses of different aspects of drug policy reform. Consequently, each chapter uses different theoretical lenses, samples of relevant literature, and combinations of research methods to seek answers to diverse research questions. The six chapters link together to first situate my narrative of medical marijuana within the historical contexts of drug prohibition and drug policy reform. In the first three chapters, I provide an analysis of drug prohibition, the history of the movement, and the spatial and organizational diffusion of drug policy reform. In the final three chapters, I analyze the medical marijuana movement in California as a case study of the wider movement’s biggest success.
A major goal of the dissertation is to provide a social history of both the wider drug policy reform movement and the more focused medical marijuana branch of the movement. To my knowledge, this social history has not been written before, and narrating it with fidelity was both challenging and rewarding. It is my hope that each chapter is able to stand independently from the larger work, but that they are integrated to compose a richly contextualized and detailed narrative. In addition to contributing to the sociology of social movements and the sociology of drugs, providing the social history of the drug policy reform movement is an important product of my research. In chapter one, I seek to provide historical context for my study.

A solution to these factors for potential Pacific coast shrimp farming is to culture a local species.

Watercress is traditionally grown in outdoor aquatic systems, but there is increasing interest in its suitability for indoor hydroponic systems, such as in vertical farms (VFs). VFs utilize hydroponic or aeroponic systems that allow plant stacking in multiple vertical or horizontal layers, increasing the effective use of space and other resources, particularly water. Indoor vertical agriculture is well suited to the production of leafy greens: their fast growth rate, high harvest index, low photosynthetic energy demand, and compact shape make them ideal for indoor farming technologies. VFs have multi-layered indoor crop production space with artificial lights and soilless cultivation systems. With the capacity to control lighting, ventilation, irrigation, nutrient levels, and abiotic stress, VFs offer the potential of high and predictable yields and uniform produce, alongside reduced water use and often no pesticide applications whatsoever. The future of indoor food production is likely to include other high-value horticultural crops such as leafy greens, culinary herbs, strawberries, and flowers. Breeding targets for these crops include short life cycles, low energy demands, improved yield, small root systems, and novel sensory and nutritional profiles. VF systems are gaining traction for commercial-scale cultivation, partly due to their ability to deliver locally grown food to urban areas with lower environmental costs and to deliver food in locations where fresh produce cannot easily be grown. These systems also offer a unique opportunity to tailor crop characteristics to changing consumer preferences by altering environmental conditions such as light quality; for example, blue light has been used to increase the glucosinolate content of several Brassica species, including pak choi and watercress.

Here we investigate differences in yield, morphology, and glucosinolate content of watercress grown under three different cultivation systems. This research provides foundational information suggesting that high-yield watercress production is possible in vertical farming systems and that watercress quality may be further enhanced for improved anti-cancer characteristics. We have shown that the quality and yield of the leafy green salad crop watercress can be significantly improved by growth in an indoor vertical hydroponic system enriched in blue light. The CDC ranked watercress as the most nutrient-dense crop based on the content of 17 nutrients that are associated with reducing chronic disease risk. Our results show the yield and nutrient content of watercress can be enhanced even further by utilizing a novel vertical indoor growing environment rather than the current commercial system used in the UK. Yield increases may be explained by the ability to tightly control environmental conditions in the VF, generating a consistently optimal nutrient and temperature environment. The increase in glucosinolate content from the UK to the CA site is probably explained by heat stress in CA, where the maximum recorded temperature was 43.8 °C, compared to 30.9 °C in the UK. Glucosinolate accumulation is associated with improved heat and drought stress tolerance in Arabidopsis, and increases in GLSs are observed in heat-stressed Brassica rapa. The increases observed in GLS content in the VF can be explained by prolonged blue light exposure and a longer growth period. The mechanism by which different LEDs regulate GLS biosynthesis remains unclear, but a short-duration blue light photoperiod increased total aliphatic GLSs in broccoli. A similar result from genome-wide association mapping in Arabidopsis also revealed that blue light controls GLS accumulation by altering the PHOT1/PHOT2 blue light receptors.

Increasing blue light in the VF increased total GLS content; although the increase was not statistically significant, it is consistent with the study by Chen et al., which showed increased GLS content with increased blue light. Rosa et al. showed that GLS concentrations are more sensitive to the effect of temperature than of photoperiod, and this is consistent with our results for total GLSs between the UK and CA sites. Our results support the idea that indoor farm cultivation is effective in promoting health-beneficial chemical properties. Watercress produced PBGLS in both VF treatments, but this compound was not detected in either the UK or CA trials. PBGLS strengthens the nutrient profile of watercress. PEITC, derived from PEGLS, has already been proven to be an extremely effective naturally occurring dietary isothiocyanate against cancer. Inhibitory potency increases several-fold as the glucosinolate alkyl chain gets longer, suggesting that PBITC, with its elongated alkyl chain compared to PEITC, may contribute an additional health benefit to this superfood, although this remains to be proven. It is evident that watercress is particularly well suited to indoor hydroponic growing systems, in which plants exhibited the highest-yielding leafy growth with improved nutritional profiles, ideal for consumer preferences. Altering the blue:red light ratio may further enhance the anti-cancer properties of this highly nutritious salad crop, but further studies are required to hone the light recipe for indoor cultivation.

The premise of this study is that an increasing number of the world’s fisheries are producing at or beyond their maximum yield while world demand for seafood increases. Global per capita seafood consumption has increased steadily from 9.9 kg in the 1960s to 19.2 kg in 2012. This skyrocketing demand, in conjunction with population growth and increased fishing efficiency, has led to overexploitation of many marine fish stocks.
Technological advancements have made accessible areas that were once too remote or too deep to be exploited. Commercial fishing involves deploying hundreds of miles of nets and dragging various apparatus along bottom habitats.

A side effect of this is environmental damage throughout ocean ecosystems, much of which is unobservable and immeasurable. Fishery management authorities have started adopting ecosystem-based management approaches, understanding that fish populations depend upon habitat integrity. Many fisheries stipulate gear restrictions and limited access, but enforcement, efficacy, and consideration of economic and social factors all vary on a case-by-case basis. Despite increased efficiency, fleet size, and access, wild-capture fisheries' annual production has stabilized at 1990 levels, varying up and down about three percent since 1998. The relative consistency of wild catch over the past two decades, accompanied by periodic dramatic stock collapses, such as the anchoveta crisis in 1998 and today's California sardine fishery closure, suggests wild-capture marine food fish production may be at capacity. Yet to date, seafood production has risen to meet demand, outpacing world population growth twofold in annual growth rates since the 2000s. This has been made possible by the aquaculture industry, which has grown rapidly in the past few decades: aquaculture contributed 5 percent of seafood production in 1962 and an impressive 49 percent in 2012. From some perspectives, aquaculture is a means to contribute to global food security while alleviating pressure on wild stocks and preventing environmental damage from impactful fishing gear. But to others, farmed seafood comes with its own variety of health and environmental risks, and is neither an adequate nor a sustainable substitute for its wild counterpart. With regard to U.S. seafood consumption, shrimp is the most consumed product, weighing in at 1.9 kg per year consumed by the average American. Despite this popularity, we remain dependent upon foreign production for upwards of 90% of shrimp products. In 2015 the U.S. imported almost 1.3 billion pounds of shrimp, valued at over $5.4B.
The aquaculture industry continues to expand, and import data show that shrimp is a top priority for the U.S. However, ecosystem-based assessments of commercial fisheries particularly malign shrimp fisheries. The primary gear type for shrimp fisheries is trawl gear. Certain types of trawls earn the highest rank among fishing gear in terms of physical and biological habitat damage. Also, trawling for small species leads to massive amounts of bycatch: roughly five pounds of non-target species per pound of shrimp in U.S. fisheries. Some bycatch is retained, but the global average discard rate for all shrimp trawl fisheries is more than 62 percent, over twice the rate of any other fishery. When shrimp farming first became profitable in the 1970s, it was lauded by some as a ‘Blue Revolution’, a way to avoid the environmental havoc described above. However, rapid, unregulated expansion of intensive fish farms earned farmed seafood a reputation for being unhygienic and environmentally destructive in its own ways. Low survival rates, disease outbreaks, concentrated waste effluent, and undesirable feed ingredients soon disillusioned environmentalist support. Over the decades, though, aquaculture technology has evolved considerably, enabling sustainable feed alternatives, reduced waste, and more efficient, cleaner products overall, at least in countries with effective regulation. The majority of our current imports come from penaeid shrimp farms in India, Indonesia, and Ecuador, countries with less stringent health and environmental standards than those of the U.S. One way to meet the growing domestic demand for shrimp, as well as ensure environmental integrity, is to produce our own. Marine shrimp aquaculture exists in the United States, but import statistics show that domestic products constitute a negligible amount of our annual consumption.
Researchers in the 1970s looked into various shrimp species for farming along the Pacific coast, but studies were abandoned as it proved far cheaper at the time to get products from abroad and shrimp farming became dominated by warm water species .

Currently, people are becoming more cognizant of the origins and environmental impacts of their food. A locally farmed shrimp could reduce the environmental footprint of long-distance imports, provide a fresher product to the consumer, and reduce ecosystem damage resulting from farming and fishing practices in unregulated regions. Major concerns and opposition regarding fish and shellfish farming include the risk of escape and subsequent introduction of an invasive species or pathogens. The spot prawn is native to the North Pacific and to this point has never been utilized as a commercial aquaculture species. There is an active wild-capture fishery for spot prawn in California, Washington, Oregon, Alaska, and Canada. The California fishery is most active between Santa Cruz and San Diego, averaging 250,000 pounds per year. Only pots are used, as trawling for spot prawn is prohibited in all state waters. The fishery is regarded as relatively sustainable due to its small size, limited access, closure during peak spawning months, and the ban on trawling. However, California spot prawn earns only a “good alternative” score from Monterey Bay Aquarium's Seafood Watch due to potential damage to seafloor habitats caused by the traps. Furthermore, no surveys are conducted to estimate or monitor population abundance, and the bycatch-to-target ratio was only monitored during the 2000-2001 season, when it was found to be 1:1 in the south and 2:1 in the north. Stable catch, limited access, and gear restrictions may indicate a well-managed fishery, but in reality, much of the spot prawn population's health is unknown. Live spot prawns can reach $24 per pound ex-vessel and $30 per pound at markets due to their large size – sometimes six shrimp to a pound. In Japanese restaurants the large, cold-water shrimp is known as amaebi, a high-end sushi item.
Stateside Asian marketplaces are the primary consumers of California spot prawn, while the bigger fisheries in Alaska and British Columbia export a significant percentage of their landings to Japan or global sushi markets. Farming P. platyceros is not a call to derail the wild-capture fishery, but a suggestion that supplementing this seasonal fishery with a farmed option may be a prudent way to support local industry and avoid increasing ecosystem stress or competition on the water. In 1970, Price and Chew of the University of Washington Fisheries Research Institute undertook the first laboratory rearing of P. platyceros. Until this study, the only descriptions of larval stages were drawn from plankton samples in the 1930s. The culmination of their study is the definitive morphological guide to spot prawn development through stage IX. Price and Chew caught ovigerous females in Washington and reared larvae from the females and from loose eggs that had detached during transport. Loose eggs were kept suspended on a screen in a unique recirculating system with 10 µm-filtered, aerated, UV-sterilized saltwater. In this setting, eggs could last up to sixty days with no fungus growth. There is no comment as to when the detached eggs hatched relative to the eggs carried by females, but both hatched successfully. It took females 7-10 days to release all of their progeny once hatching began.

The location of the pivot point at half of the bucket height also reduces the amount of weight the user must handle.

Therefore, when the term “children” is used in the context of agricultural labor, the implication is that the parents oversee the welfare and legal circumstances related to this working yet “dependent” population. “Youths” and “adolescents” are usually used interchangeably and, according to the World Health Organization, refer to the period of transition from childhood to adulthood, commonly between the ages of 10 and 19. Hence, in this paper, as in most of the related literature, the terms children, youths, and adolescents will be used interchangeably and will refer to the age range of 10–17, unless otherwise specified; whereas the term “adult” will refer to ages 18 and above. Regardless of the hazardous nature of farm-related activities, in agriculture, age does not limit participation, and children may do the work of an adult. Youths 13 to 15 years old are usually expected to do much of what adults can do. Additionally, the farm-related activities in which youths are involved correspond closely to geographical region and commodity type. The same study indicates that, in the U.S., Midwestern youths are primarily assigned to animal care and farm maintenance jobs, compared to Western youths, whose tasks are mainly crop management. Every year in the U.S., approximately 126,000 hired youth farm workers aged 14–17 are employed in crop agriculture. These hired youth farm workers, especially those who work on non-family-owned farms, are generally involved in harvesting/picking tasks. Studies have documented harvesting tasks as being associated with potential LBD risk factors and back pain reports. In addition, Allread and colleagues have investigated the magnitude of LBD risk to which youths are exposed while performing tasks routinely performed by Midwestern farm youths.

They quantitatively measured the trunk kinematics of these youth workers as well as workplace factors while the workers performed 41 manual materials handling tasks, and found that the LBD risks associated with some tasks were comparable to those of industrial jobs with high LBD risk. Of the 41 evaluated tasks, seven were placed in the high LBD risk category, corresponding to the LBD risks found in industrial jobs, and 24 were placed in the middle-risk category. Work-related injuries among youth workers in agricultural settings are a serious problem. Estimates of annual farm-related non-fatal injuries range from 1,700 to 1,800 per hundred thousand child farm residents. Youths holding farm jobs simultaneously with non-farm jobs have a significantly higher proportion of injuries, of which sprains and strains are some of the most common types. In addition, muscle aches and strains of the back, shoulder, and other joints are described as everyday occurrences among youths working on farms. A study focusing on youth workers in Wisconsin fresh markets also revealed that over half of the youth workers reported experiencing low back discomfort, while 25% reported disabling discomfort. Intervention studies with the specific aim of reducing LBD risk factors associated with tasks performed by working farm youths have been somewhat limited in the literature, with few notable exceptions. Hence, the purpose of this study is to introduce and evaluate two interventions for bucket handling on farms. The two interventions and their development are presented first, followed by two evaluation phases: “Phase 1” is an intervention evaluation with adult volunteers, and “Phase 2” is a confirmatory evaluation with youth volunteers from a local high school. The evaluation approach focused on the effectiveness of these two interventions in reducing LBD risk during the lifting, carrying, and dumping of water buckets.

Subjective responses were also obtained during the two testing phases. The job of handling water/feed buckets entails three main tasks: 1) lifting the bucket, 2) carrying the bucket to the destination, and 3) dumping the content of the bucket at the destination. This job is commonly performed by youths on farms, where they transport water or feed from a source, such as a water pump or a barn, to animal feeding containers. The objective of this phase of the study was to develop tools expected to reduce LBD-related risk factors for youths performing manual handling of water/feed buckets. The design approach was to develop two tools: one to simultaneously address the carrying and dumping tasks, and another to address the lifting task. In setting the tools' design criteria, the research team relied on existing agricultural, biomechanical, and ergonomic literature and guidelines, and consulted with colleagues and designers who are themselves farmers, grew up and worked on farms during their youth, and were familiar with the requirements and conditions surrounding bucket handling on farms. Kepner-Tregoe (KT) decision analysis was performed to select the design of the major components for each task: the lifting aid, the carrying device, and the dumping mechanism. Constraints and criteria used in the analysis are described. Based on the brainstormed design ideas, prototypes were built for testing purposes. Prototype testing results were then applied in the KT analysis for design comparisons and selection. The analysis results are presented in the following section for each of the three tasks. Carrying—A wheeled design was chosen for the carrying method. The main decision was the number of wheels employed in the design. The type and size of the wheels were determined based on other criteria and are discussed below. Three options – two-wheel, three-wheel, and four-wheel – were evaluated and compared using the KT analysis.
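The KT comparison described above can be sketched as a pass/fail constraint check followed by a weighted score. The constraint, criteria, weights, and scores below are illustrative stand-ins, not the study's actual values:

```python
# Illustrative Kepner-Tregoe (KT) decision analysis for the carrying designs.
# MUST constraints are pass/fail; the weighted criteria scores below are
# hypothetical examples, not the values used in the study.

def kt_analysis(options, constraints, criteria):
    """Return weighted scores for options that satisfy all MUST constraints."""
    results = {}
    for name, attrs in options.items():
        if not all(attrs[c] for c in constraints):
            results[name] = None  # fails a MUST constraint, drop from ranking
            continue
        results[name] = sum(w * attrs[c] for c, w in criteria.items())
    return results

options = {
    "two-wheel":   {"supports_full_weight": False, "stability": 5, "maneuverability": 9, "cost": 9},
    "three-wheel": {"supports_full_weight": True,  "stability": 4, "maneuverability": 8, "cost": 8},
    "four-wheel":  {"supports_full_weight": True,  "stability": 9, "maneuverability": 7, "cost": 7},
}
constraints = ["supports_full_weight"]                          # MUST criteria
criteria = {"stability": 10, "maneuverability": 6, "cost": 4}   # WANT weights

scores = kt_analysis(options, constraints, criteria)
best = max((k for k, v in scores.items() if v is not None), key=lambda k: scores[k])
print(scores, "->", best)
```

With these example weights, the four-wheel design wins on stability once the two-wheel option is excluded for not supporting the full load, mirroring the selection reported in Table 1.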

Two-wheel design- In order to use a two-wheel design, the user has to bear part of the weight of the handled object. This design approach is used extensively on farms. Three-wheel design- A three-wheel design can support the full weight of the object handled. It also allows the user to control the cart by maneuvering the pushing handles. However, during field testing, this prototype design commonly tipped over on dirt roads. Four-wheel design- This design can also support the full weight of the handled object. Additional handles are required at the rear side of the cart, since the rear wheels prevent the user from reaching the handles in the front. In addition, the design exhibited greater stability on dirt roads than the three-wheel design. The results of the KT analysis are shown in Table 1. The four-wheel design met all the design constraints and was pursued in the final design. Dumping—Three dumping mechanisms were analyzed and compared using KT analysis. Table 2 summarizes the analysis results. Type A- While the bucket hangs in a frame that has a pivot point fixed to the carrier, users can dump by tilting the frame. The activation force is greatly reduced because of the difference in the acting moment arms. The moment arm for the activation force is adjusted to be three times the moment arm associated with the bucket weight, with MA1 equal to approximately the radius of a five-gallon bucket. Type B- An air-cylinder-aided tilting mechanism that utilizes an air actuator to reduce the force required for the dumping process. An electric-powered compressor is required as the source of compressed air. Type C- Rails on both sides of the bucket, with wheels mounted on its sides, were considered. Dumping is completed by sliding the bucket down along the rails. Comparing the three mechanism types, Type A was selected as the option fulfilling all the desired design constraints and most of the design criteria.
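The force reduction behind the Type A mechanism follows from a static moment balance about the pivot. Only the 3:1 arm ratio comes from the text; the bucket weight and radius below are illustrative assumptions:

```python
# Static-moment sketch of the Type A tilting mechanism: with the activation
# moment arm (MA2) three times the bucket-weight moment arm (MA1), the
# required activation force drops to one third of the bucket weight.
# Bucket weight and radius are assumed illustrative values.

bucket_weight_lb = 40.0   # ~5 gallons of water plus the bucket (assumed)
ma1_in = 6.0              # bucket-weight moment arm ≈ bucket radius (assumed)
ma2_in = 3.0 * ma1_in     # activation moment arm, per the 3:1 design ratio

# Moment balance about the pivot: F_activation * MA2 = W * MA1
f_activation_lb = bucket_weight_lb * ma1_in / ma2_in
print(f_activation_lb)    # ≈ 13.3 lb, one third of the bucket weight
```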
Lifting—The basic design for the lifting aid, a rod with a handle on one side and a hook at the other, allows the users to reach the handle of the bucket without bending down. However, the selection decision depended on the lifting mechanism, for which three types of mechanisms were compared . Table 3 shows the KT analysis results for this comparison.

Based on these results, the two-handed operation was selected over the other two mechanisms. Design Specifications and Modifications—Based on all the KT analyses presented above for each task, a carrier with four wheels was deemed best for the carrying task; a tilting mechanism that users operate from the same position in which they push the cart was best for the dumping task; and a two-handed tool was best for the lifting task. For the carrying and dumping tasks, an intervention, namely the Ergonomic Bucket Carrier (EBC), was developed. For the lifting task, another intervention, called the Easy Lift (EL), was constructed. The dimensions of the prototypes of the EBC and EL were based on the anthropometric data of youths between ages 12.5 and 17.5 and environmental factors associated with the dumping/carrying/lifting of water/feed buckets. The dimensions of several parts of the EBC, including the height of the pushing handle and the height of the bucket stand, were made adjustable to match the users' anthropometrics and needs. Ergonomic Bucket Carrier—The final design of the EBC is shown in Figures 6 and 7. To use the device, the user first loads the bucket onto the EBC, pushes the device to the destination, and then activates the dumping device by pushing a handle. The wheels were selected based on commercial availability, outdoor road conditions, and price. Pneumatic wheels less than 15 inches in diameter were selected to meet outdoor road conditions and to provide close contact with the destination container. The wheels selected for the front end were fixed pneumatic wheels with a 14-inch diameter, whereas those for the rear end were swivel pneumatic wheels with a 10-inch diameter. Minor changes were made to the original designs to improve usability and performance.
For example, the position of the handle for activating the dumping mechanism was changed from vertical to horizontal so that users could activate the dumping mechanism from the same position in which they push the EBC, improving the efficiency of the process. The length of the handle remained unchanged, as did the moment arms and related forces. In addition, the positions of the cart handles were angled outward so that users' wrists could remain in a neutral position. Easy Lift—The final design of the EL is shown in Figure 8. A power-grip design is used on the grip handle. The power grip is an angled handle that keeps the user's wrist in a neutral position so that the user can utilize her/his maximum power-grip strength. The spinal loads associated with the use of the EL are also expected to be lower than those of manual lifting of the bucket, due to the anticipated reduction in forward flexion and spinal moment arms. During lifting and carrying, the user hooks the EL's U-shaped hook to the bucket's handle to lift and carry the bucket. For dumping, the user sets the bucket on the floor, rotates the EL to hook a long screw to the bottom of the bucket, and uses the bucket's handle and the EL to lift and dump the bucket into the container.

The results of this study showed that the developed interventions could be effective in reducing the overall LBD risk for the bucket handling job; however, the tools differed in their effectiveness in risk reduction among the three tasks. The overall LBD risk for the manual job was reduced from 58% to around 52% for the EBC and 50% for the EL. This seemingly modest reduction is due to the fact that the overall LBD risk for the “job” is based on the maximum risk factor values observed among all three subtasks. An approach that incorporates the advantages of each of the two introduced interventions would yield a more substantial reduction in the overall LBD risk.
Therefore, it is recommended that the EL be used for lifting the bucket, whereas the EBC should be used during the carrying and dumping tasks. This combined approach is expected to provide maximum reduction in the overall LBD risk, since it capitalizes on the strength of each tool in reducing the risk factors within the subtasks. This approach is expected to be especially effective if the job requires the bucket to be carried over long distances.

Explicit formulations use the data defined in the point cloud to define linear approximations to the SDF

The explicit formulation by Hicken and Kaur uses all points in the point cloud to define the implicit function and shows favorable decay in surface reconstruction error as the number of points in the point cloud NΓ increases. This structure has been used in combination with RBFs for hole-filling in [37] and anisotropic basis functions for representing sharp corners in [40]. Another approach is to construct a uniform grid of points to control the implicit function. Unlike the aforementioned approaches, the distribution of points is decoupled from the resolution of the point cloud. As a result, deformations to the geometric shape can be represented without loss of accuracy near the surface, as shown by Zhao et al. This makes it a popular structure in partial differential equation based reconstruction methods that evolve the surface during reconstruction, such as in [47, 48]. In general, more points representing the implicit function are required to achieve the same level of accuracy as other approaches. As a result, implicit functions defined by a uniform grid are more computationally expensive to solve for, in both time and memory usage, than the aforementioned approaches, as experienced by Sibley and Taubin, although this cost can be reduced by a GPU-based multigrid approach, as implemented by Jakobsen et al. The signed distance function presents an ideal candidate for implicit surface reconstruction and geometric non-interference constraints. It is known that the zero level set of the SDF is a smooth representation of the points in a point cloud, and its gradient field is a smooth representation of the normal vector field from the normal vectors in the point cloud. As a result, many formulations to approximate the SDF have been researched for implicit surface reconstruction.
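As a minimal illustration of how an SDF approximation can be built from an oriented point cloud, the sketch below uses only the nearest cloud point and its outward normal; the smooth aggregation used in formulations such as Hicken and Kaur's is deliberately omitted, so this version is non-differentiable where the nearest point switches:

```python
import math

# Minimal signed-distance sketch from an oriented point cloud: the nearest
# cloud point gives the (unsigned) distance, and the sign comes from which
# side of that point's outward normal the query lies on.

def approx_sdf(query, points, normals):
    best_d2, best_i = float("inf"), -1
    for i, p in enumerate(points):
        d2 = sum((q - c) ** 2 for q, c in zip(query, p))
        if d2 < best_d2:
            best_d2, best_i = d2, i
    p, n = points[best_i], normals[best_i]
    # Projection onto the outward normal sets the sign: positive outside.
    side = sum((q - c) * nc for q, c, nc in zip(query, p, n))
    dist = math.sqrt(best_d2)
    return dist if side >= 0 else -dist

# Unit circle sampled with outward normals: the true SDF is |x| - 1.
pts = [(math.cos(t), math.sin(t)) for t in (0.0, 1.0, 2.0, 3.0, 4.0, 5.0)]
nrm = pts  # on the unit circle the outward normal equals the position
print(approx_sdf((2.0, 0.0), pts, nrm))   # 1.0 (outside)
print(approx_sdf((0.0, 0.0), pts, nrm))   # -1.0 (inside)
```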

We note that other methodologies exist, such as wavelets and a Fast Fourier Transform based method, that fit a smooth indicator function instead, but these are less applicable for non-interference constraints, where a measurement of distance is desired. We identify four categories of methods that approximate the SDF in some way: explicit formulations, interpolation formulations with RBFs, PDE-based formulations, and energy minimization formulations. Explicit formulations apply smoothing to linear approximations defined from the point cloud data in order to define the level set function. Risco et al. present the simplest approach, which uses the nearest edge and normal vector to define the function explicitly. The resultant constraint function is piecewise continuous but non-differentiable at points where the nearest edge switches. Belyaev et al. derive a special smoothing method for defining signed Lp-distance functions, which is a continuous and smooth transition between piecewise functions. Hicken and Kaur use modified constraint aggregation methods to define the function in a smooth and differentiable way. In the investigation of Hicken and Kaur, the signed Lp-distance functions give poor approximations of the surface. Additionally, Hicken and Kaur's formulation is shown to increase in accuracy as the data in the point cloud, the number of points NΓ, increases. We identify Hicken and Kaur's explicit formulation as a good candidate for enforcing non-interference constraints, as it is continuous and differentiable with good accuracy. Another method to construct the level set function is to solve an interpolation problem given an oriented point cloud P. Because the data points of P always lie on the zero contour, nonzero interpolation points for the implicit function can be defined on the interior and exterior, as originally done by Turk and O'Brien. Radial basis functions are then formulated to interpolate the data.
To avoid overfitting, thin-plate splines can be used to formulate the smoothest interpolator for the data, as noted in [37, 45].
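A sketch of this interpolation formulation, assuming the biharmonic kernel φ(r) = r and omitting the low-degree polynomial term usually added in practice; on-surface points interpolate 0 and points offset along the normals interpolate ±d:

```python
# RBF interpolation sketch: surface points take value 0, off-surface points
# offset along the normals take signed values +/-d. Kernel phi(r) = r, with
# a plain Gaussian-elimination solve for the (small, dense) linear system.

def solve(A, b):
    """Gaussian elimination with partial pivoting (dense, small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def fit_rbf(points, normals, d=0.1):
    # Interpolation centers: surface points (value 0) plus offsets (value +/-d).
    centers, values = [], []
    for p, n in zip(points, normals):
        centers.append(p); values.append(0.0)
        centers.append(tuple(pi + d * ni for pi, ni in zip(p, n))); values.append(d)
        centers.append(tuple(pi - d * ni for pi, ni in zip(p, n))); values.append(-d)
    A = [[dist(ci, cj) for cj in centers] for ci in centers]
    w = solve(A, values)
    return lambda x: sum(wi * dist(x, ci) for wi, ci in zip(w, centers))

# Four points on the unit circle with outward normals (normal == position).
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
f = fit_rbf(pts, pts)
print(round(f((1.0, 0.0)), 6))   # ≈ 0 on the surface
print(round(f((1.1, 0.0)), 6))   # ≈ 0.1 at the outward offset point
```

The interpolation conditions are met exactly at the centers; the global support of the kernel is what makes the system dense and, at scale, expensive, as discussed next.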

Solving for the weights of an RBF involves solving a linear system, which is often dense and very computationally expensive due to the RBFs' global support. Turk and O'Brien solve for up to 3,000 RBF centers, and improvements by Carr et al. allow up to 594,000 RBF centers to be constructed in reasonable time. On top of the significant computational expense, interpolating RBFs have been criticized for producing blobby reconstructions that poorly represent sharp features in geometric shapes. A third class of methods first constructs a vector field approximating the surface normals; the vector field is then integrated and fit, usually by a least-squares fitting, to make the zero level set fit the point cloud. We classify the methods that solve for the vector field as the solution to a partial differential equation as PDE-based methods. Poisson's method applies variational techniques to Poisson's equation to construct the vector field. Improvements to this method add penalization weights to better fit the zero contour to the point cloud in [54]. Tasdizen et al. prioritize minimal curvature and minimal error in the vector field by solving a set of coupled second-order PDEs to derive their level set function. Zhao et al. use the level set method, originally introduced by Osher and Sethian, for surface reconstruction, with the advantage of modeling deformable shapes. In the aforementioned PDE-based methods, the setup for the implicit function reduces to solving a PDE by time-stepping, or a sparse linear system in the case of Poisson's equation. In the analysis done by Calakli and Taubin, they found that Poisson's method often over-smooths some surfaces. We also note that solutions to PDEs are more difficult to implement than other methods in practice.

Aquaculture is an important contributor to the Irish economy, producing products to the value of €167 million in 2016, including €105 million from farmed Atlantic salmon. The industry is particularly important along the western seaboard of Ireland.

Most Irish salmon farming is certified organic. Salmon farming in Ireland is associated with an intricate network of fish movements within and between the different types of salmon farms. There are three farm types: broodstock, freshwater, and seawater farms. In earlier work, social network analysis was used in combination with spatial epidemiological methods to characterize the network structure of live farmed salmonid movements in Ireland. It was demonstrated that characteristics of the network of live salmonid fish movements in Ireland would facilitate infection spread processes. These included a power-law degree distribution [that is, “scale-free”], short average path length, and high clustering coefficients [that is, “small-world”], together with the presence of farms that could potentially act as super-spreaders or super-receivers of infection, and few intermediaries of fish movement between farms, through which infectious agents could easily spread, provided no effective barriers are placed within these farms. A small proportion of sites play a central role in the trade of live fish in the country. Similarly, we demonstrated that highly central farms are more likely to have a number of different diseases affecting the farm during a year, diminishing the effectiveness of on-farm biosecurity measures, and that this effect might be explained by an increased chance of new pathogens entering the farm environment. This is a very important area of research in aquaculture, especially considering that the spread of infection via fish movement is considered one of the main routes of transmission. Mathematical models and computer simulations offer the potential to study the spread of infectious diseases and to critically evaluate different intervention strategies. Through access to real fish movement data, these models can be programmed to incorporate both the time-varying contact network and data-driven population demographics.
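The network characteristics mentioned above (degree, clustering coefficient, average shortest path length) can be computed directly on a movement network; the farms and edges below form a small hypothetical, undirected example:

```python
from collections import deque

# Toy undirected farm-to-farm movement network (hypothetical farm labels)
# illustrating the metrics used to characterize the salmonid movement network.

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"), ("C", "E")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def clustering(node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def avg_path_length():
    """Mean BFS distance over all ordered pairs of reachable nodes."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

degrees = {n: len(adj[n]) for n in sorted(adj)}
print(degrees)                       # farm "C" is the hub (highest degree)
print(round(clustering("C"), 3))     # clustering around the hub
print(round(avg_path_length(), 3))   # short average path ("small-world")
```

A real analysis would compute the same quantities on the directed, time-varying movement records rather than this static toy graph.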
However, there are considerable challenges when stochastic simulations are conducted using livestock data, both computational, including the need for efficient algorithms, and statistical, concerning model selection and parameter inference. An efficient modeling framework for event-based epidemiological simulations of infectious diseases has recently been developed, including the use of a framework that integrates within-farm infection dynamics as continuous-time Markov chains and livestock data as scheduled events. This approach was recently used to model the spread of Verotoxigenic Escherichia coli O157:H7 in Swedish cattle.

Cardiomyopathy syndrome (CMS) is a severe cardiac disease of Atlantic salmon. It was first reported in the mid-1980s in farmed salmon in Norway and later detected in several other European countries, including the Faroe Islands, Scotland and, in 2012, Ireland. CMS generally presents as a chronic disease, leading to long-lasting, low-level mortality, although some individuals experience sudden death. At times, however, CMS can present as an acute, dramatic increase in mortality associated with stress. A recent Norwegian study has identified risk factors for developing clinical CMS, including stocking time, time at sea, a previous outbreak of pancreatic disease or Heart and Skeletal Muscle Inflammation, and hatchery of origin. The economic impact of CMS is particularly serious as it occurs late in the life cycle, primarily during the second year at sea, by which time the incurred expenditure is high. No effective preventive measures are known, and there is no treatment available. In 2009, CMS was identified as a transmissible disease, and has been linked, in 2010 and 2011, to a virus resembling viruses of the Totiviridae family.
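The within-farm continuous-time Markov chain dynamics mentioned above can be sketched with a minimal Gillespie-style SIS simulation; the rates, population size, and SIS structure below are illustrative assumptions, not parameters fitted to PMCV:

```python
import random

# Minimal Gillespie-style simulation of within-farm SIS dynamics as a
# continuous-time Markov chain, in the spirit of the event-based framework
# described above. Rates and population size are illustrative only.

def sis_ctmc(n=1000, i0=5, beta=0.4, gamma=0.1, t_end=100.0, seed=1):
    random.seed(seed)
    s, i, t = n - i0, i0, 0.0
    while t < t_end and i > 0:
        rate_inf = beta * s * i / n     # S -> I (frequency-dependent mixing)
        rate_rec = gamma * i            # I -> S
        total = rate_inf + rate_rec
        t += random.expovariate(total)  # exponential waiting time to next event
        if random.random() < rate_inf / total:
            s, i = s - 1, i + 1
        else:
            s, i = s + 1, i - 1
    return t, s, i

t, s, i = sis_ctmc()
print(round(t, 1), s, i)  # final time and compartment sizes (s + i == n)
```

In the full framework, many such within-farm chains would be coupled by scheduled fish-movement events that transfer individuals between farms.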

The discovery of this virus, piscine myocarditis virus (PMCV), has contributed to increased knowledge about the disease, including the development of new diagnostic, research, and monitoring tools. The agent spreads horizontally between farms at sea, although there is some indication of a possible vertical transmission pathway. Recent Norwegian research has shown that PMCV is relatively widespread, including in geographic regions and fish groups without any evidence of CMS. The mechanisms leading to progression from PMCV infection to CMS are currently unclear. CMS is present in Ireland. The first recorded outbreak of CMS occurred in 2012, associated with low-level mortalities over a period of 4–5 weeks followed by increased mortalities during bath treatment for sea lice. CMS is not a notifiable disease in Ireland, and there are no systematic records of its occurrence. Nonetheless, anecdotal information from field veterinarians and farmers suggests that CMS occurrence has steadily increased over the years. A retrospective study was recently conducted, using real-time RT-PCR with archived broodstock samples dating back to 2006, which suggests that PMCV may have been introduced into Ireland in two different waves, both from the southern part of the range for PMCV in Norway. PMCV was found to be largely homogeneous in Irish samples, with limited genetic diversity. Further, the majority of PMCV strains had been sequenced from fish that were not exhibiting any clinical signs of CMS, which suggests possible changes in agent virulence and/or the development of immunity in Irish farmed Atlantic salmon. This paper describes the use of data-driven network modeling as a framework to evaluate the transmission of PMCV in the Irish farmed Atlantic salmon population and the impact of targeted intervention strategies. This approach can be used to inform control policies for PMCV in Ireland, as well as for other infectious diseases in the future.
Model parameters were estimated from a previous study, conducted in 2016 and 2017, that determined the prevalence of PMCV infection in Irish salmon farms by real-time RT-PCR. The sampling strategy was replicated to ascertain the status that would have been found if the simulated farms had been sampled. In this study, sample collection was conducted on 22 farms from 30 May 2016 to 19 December 2017. A ranching farm is a freshwater broodstock farm that releases juvenile fish to the environment for conservation purposes. Some farms were sampled more than once over the course of the study, with a median of 3.5 samplings per farm in this group. A total of 1,201 fish were sampled during the study. Samples consisted of heart tissue across all fish age classes and ova. In this study, PMCV was detected at a low level at most sites, with only one clinical case of CMS occurring during the study period. We simulated sampling at each time point by randomly sampling fish within each farm and age category, as in the observed data set, from the number of susceptible and infected individuals at the sampling time point in the simulated farms. The aforementioned observational study also looked for PMCV in archived samples of Atlantic salmon broodstock from 2006 to 2016, seeking to determine whether the agent had been present in the country prior to the first case report in 2012. For this, archived samples of broodstock Atlantic salmon were tested for each year from 2006 through 2016, using 60 archived pools per year.
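The simulated sampling step can be sketched as drawing fish without replacement from a farm's susceptible and infected counts and recording whether any infected fish appears in the draw; the farm and sample sizes below are illustrative, not those of the 2016-2017 survey:

```python
import random

# Sketch of the simulated sampling step: draw a fixed number of fish without
# replacement from a farm with S susceptible and I infected individuals, and
# record whether any infected fish turns up (i.e. PMCV would be detected).
# Farm size, prevalence, and sample size are illustrative assumptions.

def sample_farm(n_susceptible, n_infected, sample_size, rng):
    pool = [0] * n_susceptible + [1] * n_infected
    sample = rng.sample(pool, sample_size)   # without replacement
    return sum(sample) > 0                   # True if detection occurs

def detection_probability(n_susceptible, n_infected, sample_size, reps=10000):
    rng = random.Random(42)
    hits = sum(sample_farm(n_susceptible, n_infected, sample_size, rng)
               for _ in range(reps))
    return hits / reps

# Low within-farm prevalence (3%) and a 30-fish sample:
p = detection_probability(970, 30, 30)
print(round(p, 3))  # Monte Carlo estimate of 1 - C(970,30)/C(1000,30)
```

The same logic, applied at each simulated sampling time with the farm's current susceptible and infected counts, reproduces the detection process of the observational study within the simulation.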