Monthly Archives: December 2024

The average of dissimilarities among permutations represents null expectations of community dissimilarities

At that point, the start inoculum was divided into 6 aliquots and stored in glycerol freezing buffer. For each inoculation in the first passage, an aliquot was thawed and cells pelleted for 10 minutes at 4000 × g. Cells were re-suspended in 200mL of 10mM MgCl2 buffer. Of this, 40mL were heat-killed in an autoclave for 30 minutes at 121°C. Heat-killed inoculum was plated, and an absence of growth confirmed that the heat-kill was effective. To determine the initial concentration of the inoculum, dilution plating was performed on King's Broth agar plates (a worked example of the calculation is given below). Soil from each site, which had been stored at -20°C, was combined in a sterile Nalgene bucket and thoroughly mixed before inoculation.

Soil inoculation: The top layer of every pot was supplemented with 40 grams of UC Davis Farm Soil. Soil inoculation was performed only once and only for the first passage of plants. Spray inoculation: Each plant was sprayed, using misting spray tops placed in 15mL conicals, with approximately 4.5mL of inocula. Control plants from passage 1 were inoculated with the heat-killed inocula. Control plants from subsequent experiments were inoculated with sterile 10mM MgCl2. Immediately after inoculation, plants were placed in a random order in a high-humidity misting chamber for 24 hours. After 24 hours, the plants were moved to a greenhouse bench. Plants were inoculated once per week in the same manner and were placed in the misting chamber for 24 hours after every inoculation. Passage one plants received 5 weeks of inoculation; P2-P4, four weeks; and the fifth cohort, five weeks. Ten days after the final spray inoculation, plants were sampled.
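The dilution-plating arithmetic referenced above is standard; the following minimal R sketch shows how an inoculum concentration would be back-calculated from a countable plate. All of the numbers are hypothetical placeholders, since the actual plate counts are not reported in this excerpt.

```r
# Estimating inoculum concentration (CFU/mL) from dilution plating.
# All values are hypothetical examples, not the study's measurements.
colonies  <- 47     # colonies counted on the countable plate
dilution  <- 1e-5   # dilution factor of the suspension that was plated
plated_mL <- 0.1    # volume spread on the plate, in mL

cfu_per_mL <- colonies / (dilution * plated_mL)
cfu_per_mL          # 4.7e+07 CFU/mL for these hypothetical numbers
```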

With the exception of plant cohort 5, all plants were cut off at the base and immediately placed individually into sterile 1L bottles. By the end of cohort 5, the plants had grown too large to sample in their entirety; instead, roughly 2/3 of the plant material was sampled from each plant, with care taken to sample the same age of branches from every plant. After collection, plant material was weighed, and 200mL of sterile 10mM MgCl2 were added to each bottle containing the plant material. The bottles were submerged in a sonicating water bath, sonicated for 5 minutes, vortexed, and sonicated for another 5 minutes. Half of the volume from each plant was pelleted for 10 minutes at 4200 × g, re-suspended in ~1mL of 1:1 KB Broth:glycerol, divided into aliquots, and stored at -80°C for inoculation of the subsequent passage. The other half of the volume was pelleted in the same manner and then stored as a pellet at -20°C for DNA extractions. To prepare inoculation of the next passage, microbiome glycerol stocks were thawed, briefly pelleted to remove glycerol, and re-suspended in sterile 10mM MgCl2. The volume of re-suspension depended slightly on the size of the plants, but in general ranged from 5-10mL. Microbiomes were never pooled.

Genotypes 2706, 3472, and 2934 were used for this experiment, and four plants of each genotype received each treatment. One control plant of each genotype was spray-inoculated with MgCl2 as a control. To prepare the inoculum, microbiomes from the end of passage one and the end of passage four were used. All aliquots were thawed and combined. The same was done for all of the individual microbiomes that came off of passage 4 plants.

To remove the glycerol, the samples were spun down and re-suspended in 10mM MgCl2. In order to generate the 50/50 mix of P1 and P4 microbiomes, live/dead PCR with PMA treatment was used, adapted from the following method. Briefly, serial dilutions of P1 and P4 were performed in MgCl2. Each sample then received PMA at a final concentration of 100 µM and was vortexed. Samples were incubated in the dark at room temperature for 5 minutes. They were then placed on ice on a tray exactly 10cm away from a 700 watt halogen lamp. The light was turned on for 30 seconds and off for 30 seconds; during the 30 seconds without light, the samples were all vortexed. This was repeated three more times. Samples were then pelleted for 10 minutes at 5000 × g. The supernatant, including the excess PMA, was removed, and cells were re-suspended in sterile 10mM MgCl2. Droplet Digital PCR was then used to quantify bacteria from each sample, and the concentration was matched to 7.7 × 10⁶ cells/mL. P1 and P4 were aliquoted separately and then recombined for the mixed inoculum so that each plant received ~9 × 10⁴ bacteria each week that it was inoculated. Plants were inoculated for three weeks and harvested 10 days after the final inoculation as described previously.

The neutral model was proposed by Sloan et al. to describe both the microbial diversity and the taxa abundance distribution of a community. Burns et al. have developed an R package based on Sloan's neutral model to determine the potential importance of neutral processes to community assembly. In brief, the neutral model creates a potential neutral community using a single free parameter describing the migration rate, m, based on two sets of abundance profiles: a local community and a metacommunity. The local community describes the observed relative abundance of OTUs, while the metacommunity is estimated by the mean relative abundance across all local communities. The estimated migration rate is the probability that an OTU disperses from the metacommunity to replace a randomly lost individual in the local community.
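In outline, the fit compares each OTU's observed occurrence frequency with the frequency predicted from its mean relative abundance under neutrality. The R sketch below illustrates the idea only; it is not the published Burns et al. package. The `otu` input (a samples × OTUs count matrix) and the least-squares estimation via `optimize` are our assumptions.

```r
# Minimal sketch of fitting Sloan's neutral model (in the spirit of Burns et al.).
# Assumes `otu` is a samples x OTUs matrix of read counts.
fit_sloan <- function(otu) {
  N    <- mean(rowSums(otu))               # mean reads per local community
  rel  <- sweep(otu, 1, rowSums(otu), "/") # relative abundances per sample
  p    <- colMeans(rel)                    # metacommunity relative abundances
  freq <- colMeans(otu > 0)                # observed occurrence frequencies
  d    <- 1 / N                            # detection limit (one read)
  # Predicted occurrence frequency under neutrality for a given Nm:
  pred <- function(Nm) pbeta(d, Nm * p, Nm * (1 - p), lower.tail = FALSE)
  # Least-squares estimate of Nm; the migration rate is m = Nm / N.
  sse  <- function(Nm) sum((freq - pred(Nm))^2)
  Nm   <- optimize(sse, interval = c(1, 10 * N))$minimum
  list(m = Nm / N, Nm = Nm)
}
```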

The migration rate can be interpreted as dispersal limitation. In each microbiome passage, half of the samples were randomly selected and the relative abundance profile at the OTU level was used. The neutral model fit and migration rate were estimated from 200 iterations for P1, P2, P3, P4, and P4 Combined.

We applied a null model approach to the serial passaging data P1-P4 to characterize changes in the stochastic processes driving the assembly of the plant microbiome over time. Lines that had high-quality sequencing data at every time point were used for this analysis. The null scenario for each line at each passage was generated using the data for that same line at the previous passage. The null scenario of P1 was generated using the original field inoculum sample. The null model approach was based on the community pairwise dissimilarity proposed by Chase and Myers and extended by Stegen et al. to incorporate species abundance. Chase and Myers proposed estimating the degree of species turnover by a randomization procedure in which species probabilistically occur at each local community until the observed local richness is reached. However, the estimated degree of turnover does not include species abundance. To take full advantage of our dataset, we also incorporated species relative abundance into the procedure, as proposed by Stegen et al. Zinger et al. have developed R code for the null model and applied the null model approach to the soil microbiome. This approach does not require a priori knowledge of the local community condition and determines whether each plant microbiome at the current passage deviates from a null scenario generated by that same microbiome at the previous passage. In brief, the null scenario of each sample was generated by random resampling of OTUs, keeping the same richness and number of reads as the original sample. The total OTUs observed in the sample and their relative abundances were used as the probabilities of selecting an OTU and its associated number of reads, respectively. Bray–Curtis distance was used to calculate dissimilarities across null communities with 1,000 permutations. The null deviation is the difference between the average null expectation and the observed microbiome of the same line.

Bacteriophages, viruses that infect bacteria, are both ubiquitous and abundant, so much so that they are estimated to largely outnumber bacterial cells in the environment. Their role in controlling bacterial populations has been studied since their discovery as "bacteria-killers" in the early 1900s, and there are countless studies investigating bacteria–phage pairwise dynamics. Even the fundamental Luria–Delbrück fluctuation experiment that demonstrated mutations arise in the absence of selection was conducted using phage as the selective pressure. In addition to fundamental research, lytic phages have been widely studied for their use as biological control and therapeutic agents, and have been successfully used to control plant pathogens, including in tobacco, tomato, and detached flowers of Rosaceae trees, and even in some ready-to-eat foods such as hot dogs and lettuce leaves. Beyond just controlling bacterial abundance, phage predation in the ocean is thought to impact biogeochemical cycling and food web processes through bacterial lysis, converting biomass into dissolved organic matter and contributing to the dissolved organic carbon pool.
Recently, increased interest in the human microbiome has begun to acknowledge a key role of lytic phages in these communities, but in contrast to free-living microbiota, little empirical work has been done to uncover the role of phages in these systems. Specifically lacking is an understanding of when and how phages shape the abundance and composition of host-associated bacteria. Phages can have important impacts on the competitive dynamics and structure of bacterial communities, and they are predicted to maintain bacterial diversity through a variety of mechanisms. As bacteria evolve to escape phage predation and phages counter-adapt to these resistances, evolutionary and co-evolutionary dynamics can drive important phenotypic and genotypic variation within the bacterial community. One idea frequently put forth in marine systems, known as "kill-the-winner" dynamics, suggests the most abundant bacteria should also be the most susceptible to phage predation. In this model, an increase in bacterial abundance is followed by an increase in the associated phage population and a subsequent decrease in bacterial abundance, effectively preventing one type of bacteria from ever dominating the community. Antagonistic co-evolution between bacteria and phage has recently been put forth as a driver of bacterial diversification and variation within the human gut microbiome, with potential impacts on microbiome function and human health through phage-mediated homeostasis or dysbiosis.

The impact of lytic phages on bacterial cell density and community diversity may in part be the result of cell lysis, which not only has a direct effect on the population of cells, but also has an indirect effect on competition among bacterial strains and species within a community. Phages may also increase bacterial density and diversity by releasing nutrients into the environment via lysis, or by conferring metabolic or morphological traits to bacteria upon integration into the genome. Furthermore, culturing phages can be difficult, as it relies on having an isolate of a suitable bacterial host. Despite these difficulties, studying phages in host-associated microbiomes may reveal interactions between bacteria and phages that differ from those occurring in free-living microbial populations. For example, recent work has suggested that microbiomes may be dominated by temperate phages that integrate into the host genome, as opposed to lytic phages that lyse their host cells. In this case, changes in bacterial density or composition due to lysis would likely represent only one small way in which phages impact their microbiome. Furthermore, host factors such as age, immunity, and health are likely to change the dynamics between bacteria and phages within the host environment. Another key difference between host-associated microbiomes and free-living bacterial communities is the process of colonization of a new host. Phages may play a key role in early microbiome establishment, the importance of which will be specific to the mode of microbiome transmission and the diversity and composition of colonizing bacteria. In this work, we sought to investigate the role of lytic phages in shaping bacterial abundance and community composition during colonization of the phyllosphere. We used a filtration method to deplete phages from the microbial community associated with tomato leaves, modified from an approach that has previously proven effective for separating phage and bacteria in seawater. Through inoculation of a size-fractionated, field-grown tomato microbiome onto juvenile, growth-chamber-grown plants, we were able to test whether the lytic phage fraction of the phyllosphere has an impact on bacterial abundance, composition, and diversity during microbiome establishment. We find an impact of phages on overall bacterial abundance and on the relative abundance of specific taxa that is measurable after 24 hr but not at 7 days post-inoculation. We also find evidence for slightly higher alpha diversity after 7 days in those communities in which phages were initially present in the inoculum relative to those in which they were depleted.

PMA treatment blocks the amplification of DNA from dead cells and allows the measurement of only live cells

We found one particularly protective seed-associated microbial community that was able to significantly decrease the density of P. syringae pv. tomato DC3000 on seedlings and reduce disease symptoms across multiple tomato types. Community profiling uncovered that Pantoea spp. dominated this seed microbiome, regardless of which seedling type it was applied to, and we were able to culture specific Pantoea strains directly from the surface of these seeds. When we applied these culturable isolates to seeds, we found that individual strains were as protective when applied in isolation as when combined. In order to understand how application density impacts protection, we varied the dose of isolates ZM1, ZM2, and ZM3, and we found a non-linear correlation between inoculation density and pathogen density. The seed surface is the primary site of contact between seed and fruit, and it is known to harbor a diversity of microbes across plant species. Despite this, few studies have included seed epiphytes when investigating seed-associated microbes, focusing primarily on seed endophytes. Endogenous seed-microbiota were found to suppress disease symptoms in juvenile seedlings of their natural hosts when challenged with a common tomato pathogen, Pst. When the TT4 microbiota was inoculated onto two other field tomato types, it was able to significantly reduce disease symptoms and decrease the density of Pst by 10- to 100-fold. Although the tomato types themselves differed in their overall susceptibility to disease, we did not observe that any single tomato type was more protected by the TT4 microbiome than another. This may suggest that the pathogen-suppressive effects of TT4, whether attributable to microbiome members with antagonistic activities against Pst or to immune system priming, are capable of acting independently of their host genotypic context.

Due to the way in which tomatoes were collected, we were not able to point to the specific differences amongst host genotypes, but this would be useful in future studies. Furthermore, as we did not sequence the microbiome of adult plants from which seeds were collected, future work should explore whether differences amongst seed microbiomes are driven by differences in the microbiome composition of the adult plants themselves. These microbiome differences may be a result of field location, host genotype, or other unknown factors. Our data suggest that it may be possible to breed plants to specifically recruit or harbor beneficial seed microbiomes that may ensure a more disease-resistant crop in subsequent generations. To better understand the protective effects observed, we sequenced the bacterial communities associated with seedlings inoculated with the TT4 microbiome and found the communities to be dominated by Pantoea spp. This is in line with community profiling results from the seed surface of Triticum and Brassica. We then isolated culturable bacteria from seeds, and again found primarily Pantoea spp. Inoculation of seeds with our Pantoea isolates showed that they are highly protective against Pst, both in terms of colonization and disease. Pantoea spp. are known antagonists of many bacterial as well as fungal pathogens, and they are common biocontrol strains. Pantoea dispersa strain ZM1 appears to be novel and not previously described as a biocontrol species, but it provides protection that is on par with, if not better than, currently commercially available strains. Genome sequencing will reveal whether P. agglomerans strains ZM2 and ZM3 are novel biocontrol strains. Our work also helps to disentangle the link between diversity and disease protection. Although there exists a speculative relationship between taxonomic diversity and the strength of a microbiome's disease-resistance effect, little empirical evidence exists to support or disprove this.

A recent study on the protective effect of a constructed community against Pst shows that variation in inoculum diversity affects disease-resistance effects in a significantly non-linear manner, demonstrating that increasing taxonomic diversity can have no impact on, or even decrease, the protective effect of the community. In this study, seedling bacterial communities are low in richness and diversity, dominated primarily by one genus, Pantoea, although there are multiple species and strains of Pantoea. It is possible that a greater diversity of bacteria existed on the seeds, but we still find that inoculation of individual strains of Pantoea is sufficient to protect seedlings against the pathogen used in this study. We are also aware that the fermentation step used to collect seeds might have enriched certain members of the seed microbiome that are able to survive acidic conditions. However, this may be a biologically relevant filtering step for epiphytic seed microbes, as seeds are likely to experience acidic conditions both during fermentation of fruit in the field and during passage through the digestive tract of animals. Although we only test the protective ability of isolates against a bacterial pathogen, the Pantoea isolates and the Bacillus isolate may have other growth-promoting capabilities as well, as a recent paper describes various growth promotion traits of tomato seed endophytes. They may also have protective effects against fungal pathogens, as has been previously demonstrated in Pantoea species. Practically, seed-associated bacteria are an excellent target for probiotic/biocontrol application, and it may even be possible to apply the protective strain to the flowers of the previous generation, as was demonstrated with a plant growth-promoting endophyte. Taken together, these studies and our results suggest that the common agricultural practice of seed sterilization may be disrupting persistent mutualisms between plants and microbes across generations.

While seed sterilization is an agriculturally important procedure to purge seed-transmitted pathogens, we and other groups have shown that it may also be removing beneficial symbionts. How the simultaneous disruption of pathogenic and mutualistic symbioses would impact host health over ecological time scales, and how agricultural practices should preserve the beneficial traits conferred by the vertically transmitted microbiome while still preventing the spread of pathogens, are outstanding questions in need of future research. The focus of this work was to examine the potential protective effects of seed epiphytic communities rather than to describe the mechanisms underlying protection. Previous work has demonstrated that some Pantoea spp. are protective through antibiosis activity; protection may also be mediated through competition for resources. In addition to direct interactions between microbes, application of Pantoea spp. to seeds ensures that germinating seedlings are in immediate contact with microbes, and this may prime the plant's immune system so that it is better able to mount a response against Pst, thus indirectly protecting against disease. In our experiments, the data suggest that both direct and indirect mechanisms mediate protection. We find that all strains of live bacteria, including a non-plant-associated strain of E. coli, are capable of decreasing disease severity symptoms when compared to non-treated controls. When seeds are treated with UV-killed bacteria, we find that none of the strains are capable of decreasing disease severity. Our results therefore suggest that the bacteria included in our experiment protect seedlings against Pst through direct interactions, as UV-killed bacteria were unable to decrease disease severity. Additionally, we found that all live isolates except for E. coli lowered Pst densities in seedlings, suggesting that this characteristic may be unique to our Pantoea isolates. When seedlings were treated with UV-killed bacteria, we again found that ZM1, ZM2, and ZM3 were capable of lowering Pst densities, but E. coli was not. This suggests that some of the protective capability we are observing is conferred through indirect mechanisms, perhaps through immune activation by UV-resistant membrane-bound antigens. The inability of UV-killed E. coli to decrease Pst density suggests that these Pantoea strains may have host- or pathogen-specific protective traits. Future work will explore the protective ability of these isolates in adult plants and will further dissect direct versus indirect mechanisms of protection.

By varying the concentration of Pst inoculated onto seedlings, we observed that increasing Pst increases AUDPC, as expected. Interestingly, we also observed a decoupling of plant disease symptoms and pathogen density. This was similarly observed when we tested for the protective effects of each isolate. Here, we saw a linear increase in disease severity as Pst inoculation density was increased, but a much weaker linear correlation between dose and Pst densities. We posit that this non-linear increase of Pst density might be due either to 1) a carrying capacity of Pst density that is reached on the seedling leaves, or 2) the possibility that the detection of dead or inactive Pst cells disguises a linear pattern. Furthermore, disease severity was calculated based on foliar symptoms, but it is very likely that Pst also asymptomatically colonizes the seedling root tissue. The entire seedlings, including roots, were homogenized prior to Pst quantification; this may have obscured differences in foliar Pst densities. By varying the dose of the protective strain, we found that an increased dose does not necessarily correlate with decreased pathogen density, as was recently uncovered in a study investigating the protective effects of the phyllosphere community in adult tomato plants. We varied the dosage of protective Pantoea strains from less than one CFU/seed to 10⁸ CFU/seed, the highest of which is five orders of magnitude higher than the concentration at which we originally recovered bacteria on the seeds. When analyzing Pst density seven days after inoculation, all culturable isolates' suppression of Pst growth resulted in a non-linear pattern of pathogen density, whereby increasing Pantoea dose did not linearly correlate with decreasing Pst density. The same is true for the two commercially available biocontrol strains. Most notably, all three TT4 isolates exhibit optimal suppression of Pst at densities close to those found on naturally occurring seeds. At isolate densities above 10⁴ CFU/seed, Pst density as detected by ddPCR reached a similarly high level for all strains, suggesting a maximum density beyond which additional cells of the protective strains do not result in further protection. In light of our results that UV-killed Pantoea are capable of decreasing Pst density through presumed plant-immune activation, we posit that this activation, or priming, may be dependent on bacterial density on the seeds. In such a model, induction of resistance responses in the plant would be fully activated only when a threshold of signal is achieved. This is further supported by the result that UV-killed isolate C9-1 was the only Pantoea isolate unable to decrease Pst density, and its dose-response curve was also the only one that did not follow the cubic pattern observed for the other Pantoea isolates (a sketch of this kind of fit follows below). To rule out the possibility that higher densities of Pantoea resulted in the killing of Pst, a scenario that would be indistinguishable because of the use of ddPCR to quantify the pathogen, we treated samples with PMAxx™ and repeated the ddPCR. The data are quantitatively similar, indicating that even when only live cells are quantified, Pst densities reach an asymptote. Our results are suggestive of the possibility that plants may preferentially passage beneficial symbionts.
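To make the dose-response analysis concrete, the R sketch below fits linear and cubic models of pathogen density on log10 protective-strain dose and asks whether the cubic term is warranted. The data are simulated placeholders, not the study's measurements, and the model form is only an illustration of the "cubic pattern" named above.

```r
# Illustrative dose-response fit: pathogen density vs. log10 dose of the
# protective strain. Simulated data, not the study's measurements.
set.seed(1)
log_dose <- rep(0:8, each = 3)   # doses of 10^0 .. 10^8 CFU/seed, in triplicate
pst      <- 6 - 1.5 * log_dose + 0.35 * log_dose^2 -
            0.02 * log_dose^3 + rnorm(length(log_dose), sd = 0.3)

fit_linear <- lm(pst ~ log_dose)          # linear model
fit_cubic  <- lm(pst ~ poly(log_dose, 3)) # cubic model
anova(fit_linear, fit_cubic)  # does the cubic explain significantly more variance?
```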
Tomato Types 1-3 were collected from non-neighboring lanes of one field, and the heirloom variety was collected from a neighboring field. Fruits were transported to UC Berkeley on ice and immediately stored at 4°C until processing. Intact tomato fruits of the same type were pooled in a sterile 1L beaker until they reached roughly the 500 mL line. To ensure that no microbes other than those found naturally were introduced to the seed surface, we surface-sterilized the tomato fruits themselves before processing the seeds. Tomatoes were submerged in 75% ethanol for 20 minutes. They were then washed with sterile double-distilled H2O three times. The last wash was plated onto King's Broth agar, and no colony-forming units were detected. Sterilized tomatoes were then pooled into another sterile one-liter bottle, crushed with sterile forceps and spatula into a thick fruit mixture, and allowed to ferment at room temperature for seven days. We employed this as a common seed collection method for removing seeds from the fruit endocarp. After fermentation, seeds were strained out of the fermented liquid with a sterilized metal strainer, minimally washed with sterile ddH2O to remove any excess fruit, and dried on filter paper within sterile petri dishes. All procedures were carried out sterilely in a Biological Safety Cabinet. Harvested seeds were stored in sterile petri dishes in darkness at 21°C, and these same seed stocks were used for all experiments.

The overconsumption of nutritive sugars continues to be a major dietary problem in different parts of the world

A recent report indicates that the average American consumes about 17 teaspoons of added sugar daily, which is nearly twice the 6 and 9 teaspoons recommended for women and men, respectively. This dietary behavior is linked to various adverse health effects such as increased risk of diabetes, obesity, high blood pressure, and cardiovascular disease. Hence, there are worldwide efforts to reduce sugar consumption. For instance, the World Health Organization made a conditional recommendation to reduce sugar consumption to less than 5% of total caloric intake, along with a strong recommendation to keep sugar consumption to less than 10% of total caloric intake for both adults and children. Currently, added sugar consumption accounts for approximately 11–13% of the total energy intake of Canadian adults, is greater than 13% in the US population, and is as high as 17% in US children and adolescents, the latter principally from sugar-sweetened beverages (SSBs). Consequently, taxes on SSBs have been proposed as an incentive to change individuals' behavior in order to reduce obesity and improve health. Notably, the city of Berkeley, CA, USA successfully accomplished a 21% decrease in SSB consumption within a year of implementation. Therefore, it is expected that more states and cities will adopt this policy.

On the regulatory level, the U.S. Food and Drug Administration updated the Nutrition Facts label requirements for packaged foods and beverages, starting 1 January 2020, to declare the amount of added sugars in grams and show a percent daily value for added sugar per serving. The expansion of these efforts to spread awareness of sugar consumption habits and the resulting health issues has generated demand for safe, nonnutritive sugar substitutes. There are many sweeteners on the market to help consumers satisfy their desire for sweetness; however, each of the sweeteners available to consumers has specific applications and certain limitations. Artificial sweeteners (ATS) have been used as sugar substitutes in numerous applications; however, their long-term effects on human health and safety remain controversial. For example, ATS appear to change the host microbiome, lead to decreased satiety, and alter glucose homeostasis, and they are associated with increased caloric consumption and weight gain. Moreover, some health effects such as dizziness, headaches, gastrointestinal issues, and mood changes are associated with the consumption of a commonly used ATS, aspartame. Additionally, Kokotou et al. have demonstrated the impact of ATS as environmental pollutants, concluding that when artificial sweeteners are applied in food products or eventually enter the environment, their transformation and/or degradation may lead to the formation of toxic substances. Consequently, there is currently an increase in the production of natural sugar alternatives, driven by the shift in consumer preferences toward more natural products that meet their dietary needs and restrictions. Stevia, the common name for glycoside extracts from the leaves of Stevia rebaudiana, is a natural, sweet-tasting, calorie-free botanical that is currently gaining popularity as a sugar substitute or as an alternative to artificial sweeteners. Recent reports project the annual growth rate of stevia compounds to be 6.1% and 8.2% during 2015–2024 and 2017–2024, respectively.

Stevia has gained industry acceptance in recent years due to its ease of cultivation in several countries across the globe and its high sweetness index. This shows that the growth of stevia's use as a sugar substitute, despite the taste limitations of the marketed glycosides, was contingent on the feasibility of its large-scale manufacturing. Thaumatin, monellin, mabinlin, pentadin, brazzein, curculin, and miraculin are sweet-tasting proteins that are naturally expressed in tropical plants. Studies have found that human T1R2-T1R3 receptors, expressed in taste buds in the mouth, recognize natural and synthetic sweetness, while T1R1-T1R3 receptors recognize the umami taste. These receptors, which have several binding sites, are activated when the compounds that elicit sweet taste bind to them. However, these proteins have unique binding properties and do not all bind at the same sites, which leads to varying perceptions of sweetness. This work focuses on thaumatins, a class of intensely sweet proteins isolated from the arils of the fruits of the West African plant Thaumatococcus daniellii. The distinctiveness of thaumatin lies in its sweetness index: it is up to 3500 times sweeter than sugar. According to the 2008 Guinness World Records, it is the sweetest natural substance known to mankind. Thaumatin I and II, the two main variants of the protein, are comparable in their biological properties, structure, and amino acid composition. The structure consists of a single polypeptide chain of 207 amino acids linked together by 8 disulfide bonds. The two variants differ by only five amino acid residues. Through chemical modification and site-directed mutagenesis, it has been determined that the residues on the cleft-containing side of the protein have the strongest effect in eliciting sweetness at taste receptors on the tongue. The specificity of these residues demonstrates the importance of the protein structure in inducing thaumatin's sweetness.

In the USA, extracted thaumatin and thaumatin B-recombinant were initially affirmed as Generally Recognized as Safe (GRAS) flavor enhancers/modifiers, but not as sweeteners. Plant-made thaumatin I and/or thaumatin II were granted GRAS status by the FDA in 2018 for use as a sweetener. In 2020, the FDA granted GRAS status to recombinant thaumatin II produced in Nicotiana plants for use as a sweetener and as a flavor enhancer/modifier. In the EU, thaumatins are allowed as both sweeteners and flavor enhancers. Thaumatin's safety has been extensively documented. The Joint FAO/WHO Expert Committee on Food Additives reports that the protein is free from any toxic, genotoxic, or teratogenic effects. Thaumatin is currently used as a flavor modifier in food applications such as ice creams, chewing gum, dairy, pet foods, and soft drinks, and to mask undesirable flavor notes in food and pharmaceuticals. The current top global thaumatin manufacturers are Naturex, France; Beneo Palatinit, Germany; Natex, UK; and KF Specialty Ingredients, Australia. Global production of thaumatin increased to 169.07 metric tons (MT) in 2016 from 138.47 MT in 2012 (the implied annual growth rate is worked out below). However, the current production method, aqueous extraction from the fruits of the tropical plant T. daniellii, limits its availability while demand is increasing. T. daniellii is not cultivated, and harvesting of the arils takes place from plants growing wild in rainforests of West Africa, ranging from Sierra Leone to the Democratic Republic of Congo. The current production process is substantially dependent on the availability and quality of the native plant from year to year, which limits thaumatin's use as a commodity product. The emergence of recombinant DNA technology and the use of cultured cells have allowed the production of proteins in large quantities. Enzymes and structural proteins are used in many industrial applications, including the production of food and beverages, biodiesel, cosmetics, biopolymers, cleaning materials, and waste management. Most importantly, recombinant production allows for the expression of a protein outside its native source. Therefore, there exists a viable alternative to secure the desired quantities of thaumatin reliably and sustainably, without impacting rainforest ecosystems. Notably, there have been many attempts to produce thaumatin by means of genetically engineered microorganisms and plants. Despite successful expression of thaumatin in yeast, bacteria, fungi, and transgenic and transfected plants, biotechnological large-scale production facilities have yet to be established. Molecular farming, the production of recombinant proteins in plants, offers several advantages over bioreactor-based systems. In this application, plants can be thought of as nature's single-use bioreactors, offering benefits such as reduced upstream production complexity and costs, linear scalability, and an inability to replicate human viruses. Specifically, open-field growth of plants has the potential to meet the market's need for a large-scale, continuous supply of a commodity product at a competitive upstream cost. It is considered suitable for this operation because plants can easily be adapted on an agricultural scale to yield several metric tons of purified protein per year. Here, we present a feasibility study for a protein production level of tens of metric tons per year.
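For context, the production figures quoted above imply a modest compound annual growth rate; the quick R check below uses only the numbers already cited.

```r
# Implied average annual growth of global thaumatin production, 2012-2016,
# from the figures above: 138.47 MT -> 169.07 MT over 4 years.
cagr <- (169.07 / 138.47)^(1 / 4) - 1
round(100 * cagr, 1)   # ~5.1% per year
```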
The success of a new product in the biotechnology process industry depends on well-integrated planning that involves market analysis, product development, process development, and addressing regulatory issues simultaneously, which requires some decisions to be made with limited information. This generates demand for a platform to help fill in those gaps and facilitate more informed process and technology decisions.

Process simulation models (PSMs) can be used in several stages of the product life cycle, including idea generation, process development, facility design, and manufacturing. For instance, based on preliminary economic evaluations of new projects, they are used to eliminate unfeasible ideas early on. During the development phase of the product, as the process undergoes frequent changes, such models can easily evaluate the impact of these changes and identify cost-sensitive areas. PSMs are also useful for directing lab- and pilot-scale studies into areas that require further optimization. Additionally, PSMs are widely used in designing new manufacturing facilities, mainly as a tool for sizing process equipment and supporting utilities, as well as for estimating the required capital investment and cost of goods. This ultimately helps companies decide between building a new facility and outsourcing to contract manufacturers. There are currently few published data-driven simulations of techno-economic models for plant-based manufacturing of proteins for pharmaceutical, biofuel, commercial enzyme, and food safety applications. However, to the best of our knowledge, no studies have proposed or assessed the feasibility of plant-based protein bio-production platforms at the commodity scale of tens of metric tons per year. The feasibility of production at this scale is critical for the emergence of thaumatin as a sugar substitute. Here, we present a preliminary process design, process simulation, and economic analysis for the large-scale manufacturing of the thaumatin II variant by several different molecular farming production platforms.

The base case scenario assumes an annual production capacity of 50 MT thaumatin. To achieve this level of production in a consistent manner, manufacturing is divided into 157 annual batches (a rough per-batch mass balance is sketched below). Upstream production is attainable through open-field, staggered plantation of Nicotiana tabacum plants. Each batch has a duration of 45 days and a recipe cycle time of 2 days. A full list of process assumptions can be found in Table S1. The proposed design achieves the expression of thaumatin in N. tabacum leaves using magnICON® v.3. This technology, developed by Icon Genetics GmbH, allows for the separation of the "growth" and "expression" phases in a manufacturing process. Moreover, this process obviates the need for agroinfiltration, which entails higher capital and operational costs for inoculum preparation and implementation of expensive units for the infiltration process, containment of the genetically engineered agrobacteria, and elimination of bacteria-derived endotoxins. In this design, transgenic N. tabacum or N. benthamiana plants carry a double-inducible viral vector that has been deconstructed into its two components, the replicon and the cell-to-cell movement protein. Background expression of recombinant proteins prior to induction remains minimal; inducible release of viral RNA replicons from stably integrated DNA proreplicons is triggered by spraying the leaves and/or drenching the roots with a 4% ethanol solution, resulting in expression levels as high as 4.3 g/kg fresh weight (FW) in Nicotiana benthamiana. Nonetheless, Nicotiana tabacum has several advantages that make it more suitable for large-scale open-field production, such as field hardiness, high biomass yields, well-established infrastructure for large-scale processing, and plentiful seed production, while attaining expression levels up to 2 g/kg FW.
Furthermore, it is unlikely that transgenic tobacco material would mix with material destined for the human food or animal feed chain unless it were grown in rotation with a food crop, and further development of strict Good Agricultural Practice for transgenic plants should overcome these issues.

An alternative upstream facility design scenario was developed to evaluate the process economics of a more controlled supply of thaumatin by growing the plant host in a 10-layer vertical farming indoor environment. Nicotiana benthamiana was chosen as the host because it is a known model for protein expression in both Agrobacterium- and virus-based systems, but its low biomass yield and difficulty adapting to field conditions hinder its application for open outdoor growth. However, this species grows very well in indoor, controlled environments and attains high recombinant protein production.
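To put the base case in perspective, the R sketch below works through a rough per-batch mass balance for the 50 MT/yr open-field scenario. The annual target, batch count (157), and expression level (2 g/kg FW) come from the text above; the overall downstream recovery yield is an assumed placeholder, not a value from the study.

```r
# Back-of-the-envelope biomass requirement for the base case:
# 50 MT/yr thaumatin in 157 batches from N. tabacum at 2 g/kg fresh weight.
annual_target_kg <- 50 * 1000   # 50 MT of thaumatin per year
batches          <- 157         # annual batches (from the text)
expression_g_kg  <- 2           # g thaumatin per kg fresh leaf biomass
recovery         <- 0.6         # ASSUMED overall downstream yield (placeholder)

per_batch_kg <- annual_target_kg / batches                     # ~318 kg/batch
biomass_kg   <- per_batch_kg / (expression_g_kg / 1000 * recovery)
round(biomass_kg / 1000)        # ~265 MT of leaf biomass per batch, under these assumptions
```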

Negotiations occurred in secret and the agreement was signed before it became public

Working "against the logic of bulk [high volume, low cost] commodity production," alternative agri-food networks (AAFNs) "redistribute value through the food chain, reconvene 'trust' between producers and consumers, and articulate new forms of political association and market governance". They are often, but not always, rooted in agroecological farming practices. AAFNs regularly use the trust and engagement generated through alternative forms of distribution to increase access to healthy, fresh, and diverse foods among consumers while providing farmers with diverse revenue streams, risk sharing, and direct marketing strategies that cut the costs of distribution and decrease reliance on industrialized agri-food systems. AAFNs generally emerge as partnerships connecting diversified farming systems (DFS) farmers with citizens, consumers, governments, food and agricultural enterprises, and environmental and social justice organizations through the development of various institutions ranging from farmers' markets, urban gardens, and community-supported agriculture at local and regional scales, to fair trade producer cooperatives, slow food movements, and peasant organizations at the global scale. These partnerships represent a new wave of social activism as Northern and Southern communities and NGOs increasingly focus on the politics and cultures of food and identify economic incentives to transform industrialized agri-food into alternative systems that seek to produce and distribute healthy, environmentally sustainable, and socially just food.

The equitable treatment of producers is central to achieving broader adoption of DFS. If farmers are impoverished or are forced to compete with subsidized producers or importers from the industrialized food system, they are less likely to sustain diversified farming practices. Farmers markets are one example of efforts that more equitably support small-scale producers, as well as urban consumers. The estimated 7525 farmers markets in the U.S. offer local civic outlets that may generate social, economic, and cultural incentives for DFS among local farmers while encouraging a more diverse diet of fresh foods among eaters. Farmers markets can provide a mechanism for farmers to reach consumers directly, educate them about DFS practices, and bypass the processing and distribution infrastructure of the industrialized agri-food systems. Yet, while farmers markets and other AAFNs may help develop and maintain DFS and vice versa, they do not yet adequately recognize ecological diversification and sustainability as core values. Farmers markets often provide a venue for organic agriculture, but they rarely use ecological sustainability as a criterion for allowing producer participation, and such markets may also include organic foods harvested from industrial monocultures. In addition, while farmers markets may improve equity for smaller-scale growers, they may not provide equity for consumers. Although recent policies have sought to address these challenges, less than 20% of farmers markets accepted food assistance vouchers in 2009. Farmers markets may not reach poorer socioeconomic groups, due to both price and location.

Efforts are underway to increase the number of farmers markets accepting government food assistance vouchers. In Northern countries, environmental justice advocates have recently started to promote sustainable agriculture and/or agroecology as part of a multi-pronged, holistic strategy for pursuing food and environmental justice across the entire production chain to remedy the environmental inequalities associated with industrialized agricultural systems. These inequalities can be traced back to how, under what conditions, and by whom food is produced, processed, distributed, and consumed, and to the role of corporations and governments in shaping these conditions. Food justice issues include the unfair treatment of workers in housing, health, and labor conditions; agrochemical exposure health risks to workers, communities, and consumers; loss of ecosystem services such as water and soil; creation of pollution and wastes that affect surrounding communities; lack of farm and food worker access to healthy foods; and loss of access to land. By addressing these issues, food justice activism is evolving toward a strategy that encompasses both social justice and ecological sustainability. These local and national efforts are complemented by several international projects to create AAFNs and connect them to sustainable agriculture. One example is the global fair trade movement, which aims to enable consumers, often in developed countries, to pay more equitable prices that cover the full costs of production and ensure sustainable farmer livelihoods. Fair trade is not synonymous with DFS or sustainable agriculture, because its criteria focus primarily on the social and economic aspects of trade and production.

However, the Mesoamerican smallholders who cofounded this movement with political and religious activists manage agricultural systems that are far closer to DFS than to industrial monocultures. Their shade coffee systems now often resemble native forests and help conserve biodiversity, reduce soil erosion, conserve water, improve microclimates, and resist hurricane damage. Farmers' connections to smallholder cooperatives and global fair trade networks also partially mitigated their vulnerability to crashing coffee commodity prices. New social movements also increasingly promote agroecology as central to their agenda for transforming the industrialized agri-food system at local, national, and global scales. In particular, a food sovereignty agenda has emerged from the aspirations and survival needs of smallholders and indigenous social movement leaders in the Global South. Food sovereignty refers to the right of local peoples to control their own agricultural and food systems, including markets, resources, food cultures, and modes of production, in the face of an increasingly globalized economic system. This approach contrasts with charity-based food security models that have occasionally buffered human populations from famines, yet do not address the root causes of hunger and care little for how, where, and by whom food is produced. It also contrasts with dominant neoclassical trade liberalization policies that open up domestic markets worldwide to competition from multinational corporations, which has often resulted in import dumping, the erosion of smallholder livelihoods, and greater industrialization of agriculture. Food sovereignty movements promote agrarian reforms, resist state and corporate land grabs, and critique proposals that contribute to farmer debt and dependence. In recent decades, the food sovereignty movement has endorsed the agroecological approaches and the social process methodologies promoted through the Campesino-a-Campesino movement. Despite the potential of AAFNs such as farmers markets and fair trade networks to sustain and promote DFS, many alternative agri-food activities have come to resemble the industrialized agri-food systems they set out to transform. For example, the dramatic growth in organic sales over the past two decades, facilitated by product certification, has promoted the expansion of large-scale industrialized organic monocultures to supply this new demand, even though the founding principles of organic agriculture included DFS practices. Alternative producers sometimes justify this by arguing that large-scale, industrialized methods are the fastest way to "scale up" alternative farming practices so that they can compete in supply chains with conventionally managed systems. In search of new markets, many dominant food corporations have purchased and integrated successful organic producers and alternative food companies into their product portfolios. This trend of purchasing "sustainable" product businesses is also observed in other sectors, such as personal care, paper, and cleaning chemicals. A growing body of literature on green consumerism raises the issue of corporate "green washing". Researchers suggest that expanding corporate control over alternative products can generate some benefits. Yet these changes may accelerate efforts to industrialize production rather than expand alternative systems.
These developments call for careful scrutiny of the changing standards, price premiums, ingredients, farm-level practices, and benefits to producers and consumers. In parallel, fair trade labeling organizations initially certified exports from smallholder organizations only, thus frequently supporting DFS. However, recent changes to standards now allow transnational agricultural trade companies to export certified Fair Trade products in direct and potentially unfair competition with the smallholder organizations that this system was intended to empower.

The dominant U.S. Fair Trade certification agency has ignored strong protests from smallholder farmer organizations in recently allowing large coffee plantations to sell certified Fair Trade coffee. For instance, a growing portion of Fair Trade certified coffee sold in the U.S. now originates in Brazil and Colombia, in production systems supporting fewer and less diverse shade trees than those of Mesoamerican smallholders. In this light, many enterprises and organizations within the rapidly mainstreaming AAFNs are now trying to re-strengthen their connections to sustainable agriculture and their original social goals through innovative organizational reforms. They are de-emphasizing the certification systems that they once pioneered and moving toward food sovereignty and food justice approaches that promote the power of participants to control or coordinate their parts of the larger food system. These trends could enable the spread of DFS while simultaneously promoting the often overlooked social equity and participatory process dimensions of sustainable agriculture. However, until recently, these movements have represented relatively small counter-trends compared to the dominant certified and organic components of the industrialized agri-food system. Certifications and market-based incentives could be an important component of many DFS-oriented transition processes. However, broader institutional support is certainly needed. Furthermore, the leading sustainability certifications increasingly do not appear to reward the diverse forms of ownership, management, and local collaboration that would be needed to ensure the landscape-scale nature of DFS, and their standards have become increasingly flexible as they increasingly include industrial production systems.

The expansion of large-scale industrialized monoculture systems of agriculture often occurs at the expense of more diversified farming systems. The widespread transformation of agriculture to large-scale monoculture systems began with the European colonial plantations of the 1500-1800s, and expanded with the mechanization of agriculture in the late 1800s and the introduction of synthetic fertilizers and pesticides by the mid-20th century. By the 1960s, a wave of agricultural science and technological innovations had created the "Green Revolution," an integrated system of pesticides, chemical fertilizers, and genetically uniform, high-yielding crop varieties that governments, companies, and foundations vigorously promoted around the world. In the subsequent fifty years, the expansion of industrialized agriculture increased global nitrogen use eightfold, phosphorus use threefold, and global pesticide production elevenfold. By 2000, Green Revolution crop varieties were broadly adopted throughout the developing world, e.g., on circa 90% of the area under wheat in Latin America and circa 80% of the area under rice in Asia, and the world's irrigated cropland doubled in area. Encouraged by a range of economic factors, including the incentives of U.S. federal commodity programs, the pressures of global market competition, neoliberal economic reforms, historically inexpensive synthetic inputs, and the advantages of economies of scale, field and farm sizes increased in some areas, while non-crop areas in and around farms decreased, leading to higher levels of homogeneity at both the field and landscape scales.
Several recent signs of the continued expansion of industrial agriculture can be seen in the rapid growth of land grabs, biofuel production, and plantations across the Global South. Land grabbing refers to the practice of agri-food companies, commodity traders, pension funds, and nationally owned investment banks buying land in other countries for eventual large-scale food and resource production in response to food security concerns and food speculation. For example, the provincial government of Rio Negro in Argentina recently agreed to lease up to 320,000 ha of land to Beidahuang, a Chinese government-owned agri-food company, to produce soybeans, wheat, and oilseed rape, primarily for animal feed. Local farming communities are now organizing against the deal, contending that they will be displaced by the industrialized irrigation methods being planned. Estimates of the global scale of land grabbing are scarce and largely based on media reports. Whereas the International Food Policy Research Institute estimates that 20 million ha of land were sold in land grabs between 2005 and 2009, the World Bank calculates that around 57 million ha have attracted foreign interest. The expansion of large-scale commercial agriculture has also caused deforestation of some of the most biodiverse forests in the world, such as the Amazon, for soybean production, and Southeast Asian rain forests, for oil palm. Since the 1990s, particularly in Brazil and Indonesia where the greatest amount of deforestation occurred, the agents of deforestation have shifted from primarily smallholder-driven to enterprise-driven agriculture for global markets. Much recent forest loss, along with agricultural land conversion, can be attributed to the rapid growth in biofuel production, centered in Southeast Asia and Latin America but expanding to Africa.

Cable tool drilling generally is less labor-intensive but takes more time than rotary drilling

The uppermost section of the annulus is normally sealed with a bentonite clay and cement grout to ensure that no water or contamination can enter the annulus from the surface. The depth to which grout must be placed varies by county. Minimum requirements are defined in the California Well Standards: 50 feet for community water supply wells and industrial wells, and 20 feet for all other wells. Local county ordinances may have more stringent requirements depending on local groundwater conditions. At the surface of the well, a surface casing is commonly installed to facilitate the installation of the well seal. The surface casing and well seal protect the well against contamination of the gravel pack and keep shallow materials from caving into the well. Surface casing and well seals are particularly important in hardrock wells to protect the otherwise open, uncased borehole serving as a well.

Wells can be constructed in a number of ways. The most common drilling techniques in California are rotary, reverse rotary, air rotary, and cable tool. Auger drilling is often employed for shallow wells that are not used as supply wells. In unconsolidated and semi-consolidated materials, rotary and cable tool methods are most commonly employed. Hardrock wells generally are drilled with air rotary drilling equipment. Properly implemented, all of these drilling methods will produce equally efficient and productive wells where ground water is available.

Reverse rotary and rotary drilling require large amounts of circulation water and the construction of a mud pit, something to consider if the well is to be drilled in a remote location with no access to water. During drilling, drillers must keep a detailed log of the drill cuttings obtained from the advancing borehole. In addition, after the drilling has been completed but before the well is installed, it is often desirable to obtain more detailed data on the subsurface geology by taking geophysical measurements in the borehole. Specialized equipment is used to measure the electrical resistance and the self-potential or spontaneous potential of the geological material along the open borehole wall. The two most important factors that influence these specialized logs are the texture of the formation and the salinity of the ground water. Sand has a higher resistance than clay, while high salinity reduces the electrical resistance of the geological formation. Careful, professional interpretation of the resistance and spontaneous potential logs and the drill cuttings' description provides important information about water salinity and the location and thickness of the aquifer layers. The information obtained is extremely useful when finalizing the well design, which includes a determination of the depth of the well screens, the size of the screen openings, and the size of the gravel pack material. Because of timing issues, it is better, especially in remote areas, to drill a pilot hole a good deal ahead of the well construction date and obtain all pertinent log information early on from the pilot hole. The well design can then be completed, and the proper screen, casing, and gravel materials can be ordered for timely delivery prior to the drilling of the well. Note that a copy of all well log information should be given to the person who pays for the drilling job.

The Department of Water Resources keeps copies of all well logs and has a large collection of past well logs. These can be requested by a well owner if the original records are unavailable. The well log contains important information about construction details and aquifer characteristics that can be used later for troubleshooting well problems.

After the well screen, well casing, and gravel pack have been installed, the well is developed to clean the borehole and casing of drilling fluid and to properly settle the gravel pack around the well screen. A typical method for well development is to surge or jet water or air in and out of the well screen openings. This procedure may take several days or perhaps longer, depending on the size and depth of the well. A properly developed gravel pack keeps fine sediments out of the well and provides a clean and unrestricted flow path for ground water. Proper well design and good well development will result in lower pumping costs, a longer pump life, and fewer biological problems such as iron bacteria and slime build-up. Poorly designed and underdeveloped wells are subject to more frequent pump failures because sand and fines enter the well and cause significantly more wear and tear on pump turbines. Poorly designed and underdeveloped wells also exhibit greater water level drawdown than do properly constructed wells, an effect referred to as poor well efficiency. Poor well efficiency occurs when ground water cannot easily enter the well screen because of a lack of open area in the screen, a clogged gravel pack, bacterial slime build-up, or a borehole wall that is clogged from incomplete removal of drilling mud deposits. The result is a significant increase in pumping costs. Note that well efficiency should not be confused with pump efficiency.

The latter is related to selection of a properly sized pump, given the site-specific pump lift requirements and the desired pumping rate. Once the well is completed and developed, it is a good practice to conduct an aquifer test (a drawdown sketch follows this passage). For an aquifer test, the well is pumped at a constant rate or with stepwise increased rates, typically for 12 hours to 7 days, while the water levels in the well are checked and recorded frequently as they decline from their standing water level to their pumping water level. Aquifer tests are used to determine the efficiency and capacity of the well and to provide information about the permeability of the aquifer. The information about the pumping rate and resulting pumping water levels is also critical if you are to order a properly sized pump. Once the well development and aquifer test pumping equipment is removed, it may be useful to use a specialized video camera to check the inside of the well for damage, to verify construction details, and to make sure that all the screen perforations are open.

The construction of the final well seal is intended to provide protection from leakage and to keep runoff from entering the wellhead. Minimum standards for surface seals have been set by the California Department of Water Resources. It is also important to install backflow prevention devices, especially if the well water is mixed with chemicals such as fertilizer and pesticides near the well. A backflow prevention device is intended to keep contaminated water from flowing back from the distribution system into the well when the pump is shut off.

The development of multi-benefit land use practices that reconcile the needs of human societies with ecosystem function is critically important to biodiversity conservation given human population growth and the concurrent expansion of terrestrial land surface dedicated to agriculture. Accordingly, reconciliation ecology, the practice of encouraging biodiversity in the midst of human-dominated ecosystems by specifically managing the landscape for the benefit of fish and wildlife, has become an increasingly important component of global conservation efforts. This is especially true in freshwater habitats, which constitute less than 1% of Earth’s land surface yet support freshwater fish species that make up approximately one third of all known vertebrates, and where loss of biodiversity appears to be more rapid than in any other habitat type. Even among imperiled freshwater habitats, rivers and their associated floodplains stand out as among the most altered ecosystems in the world. They are also among the most desirable and agriculturally productive landscapes globally and therefore ideal locations for case studies on innovative reconciliation ecology-inspired, multi-benefit land use innovations. Furthermore, these lands are managed to perform the economically valuable functions of human food production and flood risk mitigation while simultaneously providing critical ecosystem benefits such as nutrient cycling, aquifer recharge, habitat creation, and conservation of biodiversity in heavily altered landscapes. Managing agricultural floodplain habitats in ways that approximate natural riverine processes re-exposes native species to physical habitat conditions similar to those to which they are adapted and may therefore enhance fitness and survival. To date, most North American work to reconcile working agricultural floodplain farmlands with the needs of wildlife has focused on waterfowl conservation.
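As a companion to the aquifer test described above, the following sketch uses the classical Theis solution to relate a constant pumping rate to drawdown over time. This is one standard analysis, not necessarily the one intended by the text, and every parameter value below is assumed for illustration.

# Sketch of the classical Theis solution for drawdown during a constant-rate
# aquifer test. All parameter values are assumed for illustration; the text
# above does not prescribe this analysis method.
import math
from scipy.special import exp1  # exponential integral E1(u) = Theis well function W(u)

Q = 1000.0    # pumping rate, m^3/day (assumed)
T = 500.0     # transmissivity, m^2/day (assumed)
S = 1e-4      # storativity, dimensionless (assumed)
r = 30.0      # distance from the pumped well, m (assumed)

for t_days in (0.5, 1, 3, 7):
    u = r**2 * S / (4.0 * T * t_days)
    s = Q / (4.0 * math.pi * T) * exp1(u)   # drawdown, m
    print(f"t = {t_days:>4} d: u = {u:.2e}, drawdown = {s:.3f} m")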
However, in Asia fish have been reared in rice fields for thousands of years, providing a valuable protein resource, natural fertilizer for agricultural fields, and refugia/food for native fishes. This paper explores means by which fish conservation can be integrated into the management of actively farmed rice fields on the agricultural floodplains of the Sacramento Valley, California. Chinook Salmon are in steep decline throughout California. A conservative estimate of the pre-European-settlement population in the Central Valley was 2 million adults returning to spawn annually, which sustained a sizable commercial ocean fishery.

Prior to the mid-1800s, California’s Central Valley was estimated to contain more than 4 million acres of seasonal floodplain and tidal wetlands which provided abundant food resources for rearing juvenile Chinook Salmon. Of the historic wetland habitats in California, approximately 95% of floodplain habitat has been disconnected from rivers by levees and channelization, drastically reducing quality rearing conditions for out-migrating salmon. Though most of the historical alluvial floodplain in California is now inaccessible to salmon, some productive seasonal wetlands persist, presenting opportunities for conservation. In particular, winter-flooded rice fields within the Sacramento Valley flood protection bypasses–floodways which route floods away from cities and which are designed to drain floodwaters rapidly in order to accommodate agricultural production–hydrologically connect to the river and can be managed to promote environmental conditions that resemble natural off-channel habitat. Use of the existing berms and water control structures employed in rice propagation to prolong the duration of floodplain inundation on these managed floodplain wetlands during the winter and early spring approximates the long-duration inundation that typically occurred on Central Valley floodplains prior to the widespread wetland reclamation and levee construction of the 19th and 20th centuries. Inundation duration of several weeks facilitates the development of highly productive invertebrate food webs and improved foraging opportunities for fish. Chinook Salmon reared in floodplain and off-channel habitats experience more rapid growth rates than those rearing in adjacent leveed river channels due to more abundant invertebrate prey. For anadromous salmonid species such as Chinook Salmon, improved growth during the freshwater juvenile stage is correlated with larger size at ocean entry and increased survivorship to adulthood. While the potential benefits to juvenile Chinook Salmon rearing on flooded bypasses are well established, there is little published research testing methodologies for establishing the optimal physical and biological conditions to achieve maximal benefit on these managed floodplains. Such is the primary goal of this study: to compare potential management practices intended to enhance the habitat benefits to juvenile Chinook Salmon of winter-inundated, post-harvest rice fields on the Yolo Bypass floodplain of the Sacramento Valley of California. This paper reports results from work conducted on a 7.3-hectare agricultural floodplain laboratory over four consecutive years beginning in 2013 and ending in 2016. Studies were built on an adaptive framework in which each year’s results were used to refine experimental approaches in subsequent field seasons. Listed sequentially, annual investigations studied the effects of 1) post-harvest field substrate; 2) depth refugia; 3) duration of field drainage; and 4) duration of rearing occupancy on in-situ diet, growth, and survival of juvenile salmon.
It is our hope that the data produced by these controlled, field-scale experiments will inform farm, water, and flood resource managers as they continue to develop multi-benefit land use practices designed to improve habitat quality for salmon and other native fishes of conservation concern provided by California’s system of water supply and flood protection infrastructure.

Experiments took place in the Yolo Bypass, a 24,000-ha flood bypass along the Sacramento River in California, USA. Nine 0.81-ha replicated fields were constructed on Knaggs Ranch—a farm predominantly producing rice. An inlet canal routing water from the Knights Landing Ridge Cut canal independently fed each of the nine fields, and all fields drained into an outlet canal. The outlet canal ultimately emptied into the Tule Canal, which runs north to south along the east side of the bypass. Each field had rice boxes on its inlet and outlet.

There are three steps to fitting the parametric density function to the farm size variables

As outlined in Sumner and Leiby and Sumner, the human capital element remains prevalent throughout economic explanations of farm exit. Of course, the age of the farm operator plays a key role. Macdonald et al. discuss the role of the advanced age of many dairy farmers and the fact that many dairy farms are family-run, suggesting that there will be an increase in exits as more farmers choose to retire. Furthermore, the study relates the probability of exit to farm size, finding that both an older operator and a smaller farm size increase the probability of exit.

This section discusses the sample used in this analysis and details changes in the COA questions that are relevant to this analysis. The research utilizes COA data from 2002, 2007, 2012, and 2017 for six select states: California, Idaho, New Mexico, New York, Texas, and Wisconsin. The results presented have gone through a disclosure review process; no data are specific to individual farms, and the results instead characterize farms more generally. Although the COA is federally mandated, it does not collect data on every U.S. farm and as such weights responses so that the sample reflects the true U.S. farm population. As discussed in Chapter 2, I use a specific definition of a commercial dairy in order to capture dairies with significant engagement with the dairy industry. A commercial dairy for the purposes of this analysis is defined as a farm with at least 20 milk cows on the farm as of December 31 of the Census year and with dairy or milk sales revenue above the milk sales revenue that would have been generated by 30 milk cows.

The survey questions asked of farmers and ranchers by the COA change slightly every Census round, although most remain the same across time. Below are descriptions of question changes for variables relevant to the analysis. First, in 2002 and 2007, farms were asked for the total amount of dairy sales in that year, but in 2012 and 2017, this question was dropped and replaced with the total amount of milk sales. Furthermore, whether the dairy farm had any level of organic production was only asked in 2007, 2012, and 2017. Second, operator characteristic questions have become more detailed over the years and allowed more information about operators to be collected. In 2002, 2007, and 2012, the COA asked detailed operator characteristic questions about up to three operators, but only one operator was identified as the principal operator. In 2017, the COA expanded its detailed operator questions to include up to four operators and allowed for up to four operators to be identified as principal operators. In this Chapter, the operators for which the number per farm is limited and detailed information is provided will be referred to as the “core operators.” There is no limit to the number of other operators listed per farm, and only the gender of each such operator and the total number per farm are provided in the Census. The COA has three potentially relevant farm size variables for dairy farms: the number of milk cows, the value of farm production, and the value of milk or dairy sales. I utilize all three in this chapter. However, I focus particular attention on the number of milk cows for the kernel density graphs. I characterize the distributions of the number of milk cows per commercial dairy farm using two approaches. One approach is to fit a nonparametric distribution by year, and by state for each year, to the data on milk cow herd size per farm. The other approach is to fit two commonly used parametric distributions to characterize dairy farm size distributions for the nation and individual states over census years.

One aim of my thesis is to characterize the farm size distribution of dairy farms, and fitting parametric density functions serves as a starting point for characterizing and analyzing the dairy size distribution. As explained above, previous literature utilizes parametric distributions to characterize farm size, and this research provides evidence that commonly used distributions do not fit the U.S. commercial dairy industry well. It is common in farm size analysis to fit parametric density functions to characterize the farm size distribution. I create kernel density plots for the herd size distribution by state across the years and then fit two common parametric density functions to the distribution. This section begins with a brief overview of the mathematics used in fitting parametric density functions. First, I hypothesize, based on the kernel density plots, which distributions seem reasonable. For this analysis I use the lognormal and the exponential function, as those are two common distributions used in the farm size literature and are likely shapes for most farm size distributions. The lognormal is the typical selection, as it is referenced in Gibrat’s Law. The exponential distribution was selected because it can account for the same skewed shape but offers more flexibility. Second, I estimate the parameters needed to form each distribution in order to generate an estimated distribution of random numbers that follow it. For this analysis, the measures of farm size, the number of milk cows for each farm, are random variables x1, x2, x3, …, xn, where n is the sample size of farms, whose joint distribution depends on the distribution parameters. For example, for the lognormal the parameters are the mean and variance of log herd size, while the exponential distribution is characterized by its rate (equivalently, scale) parameter.
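A minimal sketch of the fitting procedure just described, assuming synthetic herd-size data in place of the confidential Census records: estimate lognormal and exponential parameters by maximum likelihood, then compare the fitted densities against a kernel density estimate.

# Sketch of the fitting approach described above: estimate lognormal and
# exponential parameters by maximum likelihood from herd-size data, then
# compare the fitted densities with a kernel density estimate. The herd
# sizes below are synthetic stand-ins, not Census of Agriculture data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
herd_sizes = rng.lognormal(mean=5.0, sigma=0.9, size=500)  # synthetic farms

# Lognormal MLE: fixing the location at zero leaves the two parameters
# mentioned in the text (the mean and variance of log herd size, expressed
# here as scale = exp(mean) and shape = std of log x).
shape, loc, scale = stats.lognorm.fit(herd_sizes, floc=0)

# Exponential MLE: fixing the location at zero leaves the single rate
# (1 / scale) parameter.
eloc, escale = stats.expon.fit(herd_sizes, floc=0)

kde = stats.gaussian_kde(herd_sizes)  # nonparametric benchmark

print("lognormal: sigma =", round(shape, 3), " median =", round(scale, 1))
print("exponential rate =", round(1.0 / escale, 5))
# At any herd size x0, the three density estimates can be compared directly:
x0 = 200.0
print("density at 200 cows:",
      "kde", kde(x0)[0],
      "lognorm", stats.lognorm.pdf(x0, shape, loc, scale),
      "expon", stats.expon.pdf(x0, eloc, escale))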

The estimates of the parameters are functions of the milk cow herd size variable in question. From there, we can use the parameter estimates to generate a distribution with those same parameters and compare it to the actual distribution of the number of milk cows. Some estimated parametric distributions appear to have slight irregularities; this is due to the number of observations and the imposed parameters.

This section summarizes the resulting farm size graphs and details the trends across time and states. Overall, when looking at the six select states together, commercial dairy farm distributions have shifted towards larger dairies. In 2002, there was a clear peak in the number of farms with fewer than 200 milk cows, but the peak falls significantly from 2002 to 2017, while the distribution shows a clear increase in farms with larger herd sizes in 2017. Although this graph gives interesting detail about the trends in herd size for the U.S. overall, it is mostly characterized by Wisconsin and New York, which have a significantly larger share of the number of commercial dairies and tend to have smaller herd sizes relative to other states. The graph clearly shows that a large share of dairies still have a herd size of fewer than 200 milk cows, despite the relative shift in herd size.

Moving to state-specific trends, California dairies have had larger herd sizes than those in other states, such as New York or Wisconsin, across all years. California had a peak in the share of dairies with fewer than 1,000 milk cows from 2002 to 2017, but the peak fell significantly between 2007 and 2012. There was a clear shift in 2012, with an increase in the 1,000 to 2,000 milk cow herd size range, and then another shift in 2017 in the 2,000 to 3,000 milk cow range. This documents a clear movement of California dairies towards larger herd sizes and away from smaller herd sizes. Idaho had a large peak in commercial dairies with fewer than 500 milk cows in 2002, then a significant drop in that peak in 2007, with smaller subsequent decreases in 2012 and 2017. Interestingly, in 2007 there was an increase in the number of dairies with a milk cow herd size between 500 and 1,000, but then a subsequent decrease in the following census. In 2017 there was a clear increase in the number of commercial dairies with a milk cow herd size between 1,500 and 2,000. New Mexico had one of the more unique herd size distributions, with no clear peak in the smaller herd size ranges. From 2002 to 2007, there was a clear drop in the density of commercial dairies with fewer than 1,000 milk cows and a relative increase in the density of commercial dairies with around 1,000 milk cows. Then in 2012, there was a shift towards commercial dairies with more than 2,000 milk cows and a downward shift in commercial dairies in the 500 to 1,000 milk cow range. This trend continued in 2017, with even further shifts in each direction. From 2002 to 2017, New York saw a slight decrease in the smaller herd sizes and a small increase in the larger herd sizes. In Texas, the most distinct trend was a significant drop in the density of commercial dairies with herd sizes of fewer than 500 milk cows between 2012 and 2017.

There had previously been a trend of decreases in this herd size range, following a pattern similar to most other states; however, in other states there was not such a significant drop. In 2017, there was an increase in commercial dairies with more than 1,000 milk cows. There was a significant decrease in the number of commercial dairies in Wisconsin with fewer than 100 milk cows from 2007 to 2012 and then again from 2012 to 2017. In 2017, there was an increase in commercial dairies with a herd size between 150 and 200 milk cows. Wisconsin’s dairy industry is characterized by a significant number of smaller dairies and few dairies with large milk cow herds. Across the states, there is a trend of consolidation, with fewer commercial dairies and an increase in the number of dairies with larger herd sizes. Despite the decrease in the number of farms in each state, the number of milk cows increased in some states and broadly remained relatively stable. California had a 6.7% increase in the number of milk cows from 2002 to 2017, while Idaho had a 55% increase. The number of milk cows in New Mexico and Wisconsin remained roughly the same. There was a 6% decrease in the number of milk cows in New York, and the number in Texas grew by more than 70%. Neither of the two parametric distributions fit the national data well. In particular, both the lognormal and exponential distributions failed to capture the very high mode at low herd sizes in 2002. The herd sizes in California did not fit either distribution well in 2002 or 2017. Idaho has a large peak in the smaller ranges that is well above either the lognormal or the exponential distribution in 2002. That peak falls significantly in the 2017 Idaho herd size distribution, which somewhat follows a lognormal pattern, though not very well. New York follows a similar pattern, with the smaller herd size peak being significantly higher than either the lognormal or the exponential peaks in 2002 and 2017. As we saw across years in Texas, the herd size shifted dramatically: in 2002, the herd size distribution slightly resembled a lognormal trend but with definite deviations, and in 2017 it did not follow either distribution well. Wisconsin follows a similar pattern to New York, with no clear distribution trend in 2002 or 2017 but with significantly high peaks in the lower herd size range that deviate from the fitted distributions.

As explained above, there are several possible influences, but given the Census data, I have chosen the following variables: characteristics of the operators, farm sales diversification across commodities, and the share of farm operators who have off-farm employment. I also account for state fixed effects and Census year fixed effects. Clearly, sales diversification and off-farm work are jointly determined with dairy farm size, so I do not claim to be measuring a causal impact in the regressions discussed in this section. The aim here is to discuss statistical relationships between these characteristics and the farm size measures because, although they cannot be thought of as directly influencing farm size, the relationship between such measures is of interest and allows for discussion of the characteristics of the U.S. commercial dairy industry.
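The following sketch illustrates the descriptive regression described above: a log farm-size measure on operator and farm characteristics with state and Census-year fixed effects. The column names and data are hypothetical stand-ins for the restricted COA variables, and, as emphasized in the text, the estimates are associations rather than causal effects.

# Minimal sketch of the descriptive regression described above. Column names
# and the synthetic data are hypothetical; the relationships are
# associations, not causal effects, as the text stresses.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "milk_cows": rng.lognormal(5.5, 1.0, n),
    "operator_age": rng.normal(55, 10, n),
    "share_offfarm": rng.uniform(0, 1, n),          # share of operators working off-farm
    "sales_diversification": rng.uniform(0, 1, n),  # 0 = dairy only, 1 = fully diversified
    "state": rng.choice(["CA", "ID", "NM", "NY", "TX", "WI"], n),
    "year": rng.choice([2002, 2007, 2012, 2017], n),
})

# C(state) and C(year) expand into dummy variables, absorbing state and
# Census-year fixed effects.
model = smf.ols(
    "np.log(milk_cows) ~ operator_age + share_offfarm"
    " + sales_diversification + C(state) + C(year)",
    data=df,
).fit()
print(model.summary().tables[1])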

The liquid manure samples collected from flushed pits were termed Flush Manure

However, the impact of animal waste-borne microbiomes on the environment, including soil, water, and plants, is not well understood. The United States Department of Agriculture estimates that there are approximately 450,000 animal feeding operations in the U.S., including the beef cattle, dairy, poultry, and swine production industries. Annually, over 2 billion tons of animal manure are generated in the U.S. In California alone, 60 million tons of manure are produced annually by 5.2 million cattle and calves, and a considerable portion of the manure is applied to cropland as fertilizer. While the use of manure as fertilizer in cropland has numerous benefits, such as reducing chemical fertilizer application, additional understanding of how animal waste-borne microbiomes could impact cropland and public health is needed to utilize the full potential of manure and to understand any consequential negative impacts of manure on cropland and the environment. Elevated pathogen/pathogen indicator levels in surface and ground water and their potential linkages with animal waste have received considerable public attention because of associated public and animal health risks and produce contamination. In general, the use of fresh and untreated manure as fertilizer has a greater potential to increase pathogen loads in cropland, and subsequently these bacterial populations can be transported to rivers and streams during rainfall/runoff events. Furthermore, the use of untreated manure as fertilizer can facilitate the transfer of harmful bacterial populations to ready-to-eat crops. To control the bacterial loads in manure used as fertilizer, several manure treatment practices such as composting, anaerobic lagoon systems, and anaerobic digestion are used.

Previous studies showed that pathogens such as E. coli, Salmonella, and Listeria in dairy manure are reduced through the application of these waste treatment processes, though the complete elimination of these pathogens by these processes is uncertain. Further, existing knowledge is weak in terms of how the microbial community in manure changes after various manure handling processes, such as solid-liquid separation, manure piling, and lagoon storage. In a typical, large California dairy, both liquid and solid manure are produced by flush manure management systems, which are common in California’s Central Valley. In such systems, a dairy barn is flushed with water, and the flushed manure is passed through solid-manure separation systems, where liquid manure is separated from solid. Solid streams are stored in the form of piles, and liquid manure streams are stored in lagoon systems prior to the application of the manure to cropland as fertilizer. Although both lagoon systems and compost piles are used extensively to manage dairy manure in California, the efficacies and effectiveness of these manure handling processes for regulating microbial populations are not well understood. For these practical reasons, which have considerable impact on agriculture, manure management in dairy farms, and manure application in cropland, we hypothesized that the microbial quality of dairy manure should change with on-farm manure handling/treatment processes. Moreover, this change in microbial population should be consistent from one farm to another. Such changes or shifts in the microbial population of manure—and the continuous use of that manure as fertilizer in a cropland over a long period—have the potential to impact the microbiome of the cropland receiving the manure. Therefore, understanding how dominant bacterial community levels change under typical dairy manure management practices in a farm environment is essential. Although numerous previous studies investigated the inactivation of selective bacterial pathogens such as E. coli, Salmonella, and Listeria under specific conditions, these studies mainly focused on understanding selective human pathogen inactivation in various treatment processes.

Insights into how various microbial populations change at the genus level in particular processes are crucial but not well reported. Having such information can help improve currently available manure management techniques and support decision-making in terms of using manure as fertilizer in a specific cropland. Previous studies have used high-throughput microbial community profiling methods to gain insights into microbial community distribution in different environments. Amplicon-based community analysis has been used to determine the microbial communities in various samples, including food samples, anaerobic sludge, biosolids, natural environments, and agricultural grasslands. These methods have also been applied to raw dairy manure. However, the application of these methods to understand the microbial communities of manure fertilizers processed at various levels of treatment has not yet been explored. As a test of our hypothesis, to determine the differences in the microbiome of manure fertilizers, the goal of this study was to quantify the microbial population levels of various forms of dairy manure, such as the liquid and solid forms produced in a typical dairy farm environment in the California Central Valley. The objectives of the study are to: 1) determine the dominant microbial communities in solid and liquid forms of dairy manure fertilizer; and 2) understand the changes in the microbiome of manure after solid-liquid separation, lagoon storage, and manure piling. We anticipate that the outcomes of this study will improve understanding of the microbial quality of dairy manure fertilizers and will help in making informed decisions. Further, improved insights will help in advancing dairy manure management and manure application, and in understanding the environmental and public health risks associated with animal waste-borne microbial pathogens.

The solid and liquid samples were collected from dairy farms in the California Central Valley, which has the densest concentration of dairy farms in California. Fig 2 shows county maps of Tulare, Glenn, and Merced, including the herd size in Tulare and Merced Counties.

For the current study, we collected 33 manure samples, including solid and liquid manure. Solid manure samples were collected from manure piles located in dairy facilities, while liquid samples were collected from liquid manure storage ponds as well as from flushed manure pits; the latter were termed Flush Manure (FM). The dairy facilities used for sample collection are located in three counties. From each dairy facility, we collected one liter of liquid manure in sterile bottles from each pond and 600 g of solid manure in sterile bottles from each pile. Immediately after collection, samples were transported in a cooler and subsequently stored at -20˚C prior to analysis. For analysis, samples were thawed at room temperature. Solid samples collected from piles that were less than 2 weeks old were termed Fresh Pile (FP), while older piles were termed Compost Pile (CP). It is important to note that the studied CP does not necessarily mean the sample was subjected to standard composting processes, where maintaining thermophilic temperatures and mixing is necessary. The liquid manure samples collected from Primary Lagoons and Secondary Lagoons were termed PL and SL, respectively.

The microbial diversity assessment of solid and liquid wastes using phylotype taxonomy resulted in a total of 1,818 taxa. Approximately 85% of the 1,818 taxa were classified at the genus level and 10% at the family level. In FP solid samples, sequence reads varied from 13,950 to 453,625, with an average of 153,316. In solid samples from the CP pile, sequence reads varied from 15,798 to 1,092,032, with an average of 296,153. The average reads for FM were 333,450, with a range from 5,242 to 989,040. In PL and SL liquid samples, the average sequence reads were 186,341 and 130,888, respectively. FP samples, which were not dried and composted, showed an abundance of bacteria of the genera Acinetobacter and Enterococcus; these were the most common and accounted for 3.5%–39.53% and 4.8%–11.86%, respectively. In CP samples, which were either dried or composted, the proportion of bacteria of the genus Acinetobacter ranged from 18.3% to 19.2%. Other abundant taxa in CP samples were Flavobacteriaceae, Bacillaceae, Pseudoxanthomonas, Clostridia, and Sphingobacterium, accounting for 3.9–24.9%, 7.2–7.3%, 2.8–5.5%, 4.5–6%, and 2.1–3.1%, respectively. Other abundant taxa in FP samples were Bacteroidetes, Trichococcus, Clostridiales, Flavobacterium, and Psychrobacter. A heat map of the top 50 of the 1,818 taxa is shown in Fig 3. In FM samples, the most common taxon was Ruminococcaceae, varying from 7.2 to 13.1%. Taxa such as Bacteroidetes and Clostridium varied from 4.2–11.5% and 3.5–9.7%, respectively. In lagoon samples, however, the most common taxa were Bacteroidetes, Flavobacteriaceae, and Psychrobacter, accounting for 11.1–15.9%, 3.3–13.1%, and 19.3–29.6%, respectively. Dendrograms and PCA plots are shown in the supplementary figures. The application of an algorithm based on the abundance criteria resulted in 128 taxa, and the analysis of the top 50 communities among the 128 taxa is shown in Fig 4. The dendrogram and PCA of CP, FM, FP, PL, and SL are shown in Fig 4A and 4B, respectively. The heat map shows the distribution of microbial communities in CP, FM, FP, PL, and SL. In the dendrograms, the horizontal axis represents the distance of dissimilarity between clusters, and the vertical axis represents objects and clusters. Results showed that FM is more similar to PL than to SL. Furthermore, the similarity between FM and PL was greater than that between SL and CP.
The frequency and abundance of the 128 taxa in solid manure samples are shown in the supplementary file, and these characteristics for liquid samples are shown in the supplementary table. Sample grouping tendency among the 128 taxa was evaluated using PCA. The PCA score plot shows a two-dimensional plot of the 33 samples. The first two principal components explained 56.7% of the total variance in the microbial community composition. FP and CP samples were clustered together mostly in the upper and lower left corner of the plot, while FM, PL, and SL were clustered together in the lower left of the plot. As shown in the figure, the CP and FP groups were similar to each other and distinct from PL, SL, and FM. Further, PL and SL were grouped together. The clear separation of CP and FP from PL, SL, and FM indicates that manure handling processes such as solid-liquid separation adopted in dairy farms have the potential to alter the microbial communities in the manure. Together, these results demonstrate that the form of manure fertilizer affects its microbial quality, which supports the central idea of our hypothesis.
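A minimal sketch of the PCA step described above, assuming a random filler matrix in place of the actual 33-sample by 128-taxon table: convert counts to relative abundances, project onto the first two principal components, and report the variance they explain.

# Sketch of the PCA step described above: project samples (rows) described
# by taxon relative abundances (columns) onto the first two principal
# components. The matrix here is random filler standing in for the real data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
counts = rng.poisson(20, size=(33, 128)).astype(float)   # samples x taxa
rel_abund = counts / counts.sum(axis=1, keepdims=True)   # rows sum to 1

pca = PCA(n_components=2)
scores = pca.fit_transform(rel_abund)                    # 33 x 2 score matrix

print("variance explained by PC1+PC2:",
      round(pca.explained_variance_ratio_.sum() * 100, 1), "%")
# scores[:, 0] and scores[:, 1] would be the coordinates plotted in the
# two-dimensional PCA score plot discussed in the text.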

The abundance of genera in each sample is shown in a heat map, in which light blue indicates low abundance and dark red indicates high abundance. The results for the top 50 genera present in the samples indicate that species abundance differs among samples. In FP, the most abundant species, such as Acinetobacter, Psychrobacter, and Enterococcus, accounted for 9.9%, 2.5%, and 2.5%, respectively. The other, unclassified bacteria in FP accounted for 31%. In CP, the most abundant species, including Planifilum, Acinetobacter, and Flavobacteriaceae, accounted for 6.4%, 6.0%, and 4.4%, respectively. The unclassified bacteria in CP were 25.6%.

To understand the distinction among microbial communities in solid and liquid samples, the data for liquid and solid samples were grouped separately. The dendrogram plot showed clustering among solid and liquid samples. Results indicated that all the liquid samples were more similar to each other than to solid samples. The PCA plot displayed the solid samples clustered mostly to the left side and the liquid samples clustered to the right side, indicating a clear separation between liquid and solid samples. The top 50 genera among the 128 taxa are presented in a heat map. Results showed that the top 22 species listed at the top of the heat map were more abundant in solid samples than in liquid samples. These species include Bacteroidetes, Ruminococcaceae, Flavobacteriaceae, Clostridium, Cloacibacillus, Petrimonas, Psychrobacter, and Proteiniphilum. The bottom 28 species were more abundant in liquid samples than in solid samples, and these microbial communities include Smithella, Pseudomonas, Sporobacter, Treponema, and Aminivibrio. Based on the canonical analysis, the genera Gp4, Nocardioides, and Caryophanon were highly correlated with solid manure, while the genera Succiniclasticum, Porphyromonas, Methanospirillum, Anaeroplasma, Armatimonadetes, Eubacterium, Vampirovibrio, Anaerovorax, and Lactonifactor, and the family Porphyromonadaceae, were highly correlated with liquid manure. Canonical values from the discriminant analysis were also used to identify bacteria that were highly correlated with, and led to differentiation of, FP vs CP and FM vs LM. The genus Coraliomargarita was highly correlated with FP, and the genus Ruania and family Peptococcaceae were highly correlated with CP. From this analysis, we observed that the genera Bifidobacterium, Murdochiella, Nitrosomonas, Arcanobacterium, Gallicola, and Kurthia were highly correlated with FM. Overall, FM and LM had more similar microbial composition and diversity than FP and CP.
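The dendrograms discussed above can be produced along the following lines. The Bray-Curtis distance and average linkage are common ecological choices assumed for this sketch; the study's actual distance and linkage choices are not specified in the text, and the profiles below are synthetic.

# Sketch of the dendrogram construction described above: hierarchically
# cluster samples by dissimilarity of their taxon profiles. Labels and the
# filler data are illustrative only.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
profiles = rng.dirichlet(np.ones(50), size=10)       # 10 samples x 50 taxa
labels = [f"solid_{i}" for i in range(5)] + [f"liquid_{i}" for i in range(5)]

dist = pdist(profiles, metric="braycurtis")          # common ecological choice
tree = linkage(dist, method="average")               # UPGMA-style clustering

# dendrogram() returns plot data; with no_plot=True it can be inspected
# without matplotlib. The 'ivl' field gives the leaf order.
dd = dendrogram(tree, labels=labels, no_plot=True)
print("leaf order:", dd["ivl"])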

Categorization of land types such as farms and forests was done manually using satellite imagery

Calibrated on the space utilization data of this study, such models could become more realistic in terms of transmission dynamics. They could provide more accurate and precise estimations to tackle infectious diseases cost-effectively. The study has several limitations. It was a pilot study and had a limited sample size. Most participants were adult males. Potential female participants said they rarely go beyond village boundaries and thus were not eligible to be included in the study. This may have introduced a selection bias, but it points to the fact that the mobility preferences of the two genders were so different that they were essentially two different populations requiring separate analyses. The most commonly reported occupation was farming, and most people in this study area do, indeed, farm for at least part of the year. However, people in the study area usually perform different types of work according to the season, and assigning a single occupation to a person may not be appropriate. Employment in this region is almost entirely informal, and most working-age men will work in agriculture for part of the year and in other types of labor during other parts of the year. Responses to surveys about employment will therefore vary by the time of year, even within a single research participant. While we believe that this cohort is representative of adult males in this setting, more studies that are demographically representative of rural villages in this setting could be useful for understanding differences in travel patterns by age and gender. Mobile GPS devices have their own limitations. As explored in Extended data: Figure S2, their readings can be inaccurate. Because of their small size, their battery capacity was limited. During the study period, participants may have failed to carry the GPS device. Mechanical failures may also cause problems in data collection.

Even though the utmost care was taken to preserve data integrity, there could be errors and bias from data collection or data manipulation. While the categories do match the authors’ understanding of the area, no validation was done on the ground after categorization for this analysis. Our estimation of the home location as the median center of all the GPS points where the participant spent the night, each of which is in turn derived from the last GPS point of the day between 6 pm and midnight, may not be robust enough to capture the actual home location (a sketch of this heuristic follows this passage). This could be overcome by having the field supervisors record each participant’s home location with a GPS device in future studies. The categorization of the home area may be too wide to discern land use that is very close to home. Finally, the estimation of land utilization, regardless of the method used, is imperfect. Requiring two consecutive GPS points to constitute usage of a land area provides too crude a result. While the BRB method provides more accurate and precise estimates, it is not without its caveats. The BRB approach assumes that consecutive points that were more than three hours apart were uncorrelated. Since the GPS logger went into sleep mode while stationary, the current land utilization estimation under-estimates the time spent motionless, resulting in lower usage of home in Extended data: Figure S6 compared to that in Figure 2.

Having started as a technology of rural electrification programs, electricity-generating wind turbines have developed into one of the largest renewable energy power production sectors on the planet. The latest estimates from the International Renewable Energy Agency show that onshore wind is already at grid parity with fossil fuel electricity.
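Here is a minimal sketch of the home-location heuristic referenced above: keep the last GPS fix between 6 pm and midnight for each day, then take the median center of those nightly points. The data-frame layout and coordinates are hypothetical.

# Sketch of the home-location heuristic: last evening fix per day, then the
# median center across nights. Timestamps and coordinates are made up.
import pandas as pd

fixes = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2020-01-01 19:05", "2020-01-01 23:40",
        "2020-01-02 18:15", "2020-01-02 22:55",
        "2020-01-03 20:30",
    ]),
    "lat": [19.801, 19.802, 19.805, 19.801, 19.803],
    "lon": [96.101, 96.102, 96.100, 96.101, 96.102],
})

# Keep fixes between 6 pm and midnight, then take the last fix of each day.
evening = fixes[(fixes.timestamp.dt.hour >= 18) & (fixes.timestamp.dt.hour < 24)]
evening = evening.assign(date=evening.timestamp.dt.date)
nightly_last = evening.sort_values("timestamp").groupby("date").last()

home_lat = nightly_last["lat"].median()   # median center across nights
home_lon = nightly_last["lon"].median()
print(f"estimated home: ({home_lat:.4f}, {home_lon:.4f})")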

According to the latest reports of the Energy Information Administration of the U.S. Department of Energy and the annual energy report of the European Commission, the levelized cost of energy for onshore wind falls within a range of $0.04–$0.10 per kWh (a sketch of a levelized-cost calculation follows this passage), making it extremely cost-competitive with conventional power sources such as coal, integrated gasification combined cycle, and nuclear energy. Moreover, wind energy is the fastest-growing power production sector. To provide a comparison: during the period of 2000 to 2012, the installed capacity from nuclear power plants increased by only 9 GW, while the increase for wind power was 266 GW and around 100 GW for solar power plants. Further, wind turbine technologies have the largest remaining cost reduction potential, which can be achieved through advanced research and development. During the last several decades, engineers and scientists have put significant effort into developing reliable and efficient wind turbines. Since the 1970s most of the work has focused on the development of horizontal-axis wind turbines (HAWTs). Vertical-axis wind turbines (VAWTs) were generally considered a promising alternative to HAWTs. Before the mid-90s, VAWTs were economically competitive with HAWTs for the same rated power. However, as market demand for electric power grew, VAWTs were found to be less efficient than HAWTs for large-scale power production. In recent years, offshore wind energy has been getting increased attention. The total global installed capacity of offshore wind reached 4.1 GW at the end of 2011. Far from shore, energy can be harvested from stronger and more sustained winds. Also, noise generation and visual impact are no longer limitations in turbine design. In offshore environments, large-size HAWTs are at the leading edge. They are equipped with complicated pitch and yaw control mechanisms to keep the turbine in operation for wind velocities of variable magnitude and direction, such as wind gusts.
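For readers unfamiliar with the levelized cost quoted above, the sketch below shows the standard discounted-costs-over-discounted-energy calculation. All inputs are assumed round numbers chosen only so the result lands near the quoted range; they are not figures from the cited reports.

# Sketch of a levelized cost of energy (LCOE) calculation: discounted
# lifetime costs divided by discounted lifetime generation. All inputs are
# assumptions for illustration.
capex = 1500.0          # $ per kW installed (assumed)
opex = 40.0             # $ per kW per year (assumed)
capacity_factor = 0.35  # fraction of the year at rated power (assumed)
lifetime = 20           # years (assumed)
discount_rate = 0.07    # assumed

energy_per_kw = 8760 * capacity_factor   # kWh per installed kW per year

costs = capex + sum(opex / (1 + discount_rate) ** t for t in range(1, lifetime + 1))
energy = sum(energy_per_kw / (1 + discount_rate) ** t for t in range(1, lifetime + 1))

print(f"LCOE = ${costs / energy:.3f} per kWh")   # lands near the quoted range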

One of the most challenging offshore wind turbine designs is the floating wind turbine. Starting in 2009, the practical feasibility and per-unit economics of deep-water, floating-turbine offshore wind became apparent. The world’s first floating full-scale offshore wind turbine was launched in the North Sea off the coast of Norway by the Norwegian energy giant StatoilHydro in 2009. The turbine, known as Hywind, rests upon a floating stand that is anchored to the seabed by three cables. Water and rocks are placed inside the stand to provide ballast. The world’s second full-scale floating wind turbine, named WindFloat, was designed by Principle Power and launched in 2011 off the coast of Portugal. In 2013, as a part of the US Department of Energy’s Wind Program, the VolturnUS, the first offshore wind turbine in the Americas, was powered up to provide electricity. Later the same year, Japan switched on the first floating turbine at a wind farm 20 kilometers off the coast of Fukushima. To date, there are many projects on building floating wind turbine farms in Asia, Europe, and the Americas. Moreover, wind-energy technologies are maturing, and several studies were recently initiated that involve placing VAWTs offshore, such as the DeepWind project by Risø DTU National Laboratory for Sustainable Energy, among others. As grid connection and energy storage remain problems for large-scale wind turbines, especially offshore, urban areas, closer to the direct consumer, become very attractive. Recently, VAWTs resurfaced as a good source of small-scale electric power for urban areas. There are two main configurations of VAWTs, employing the Savonius or Darrieus rotor types. The Darrieus configuration is a lift-driven turbine: the power is produced from the aerodynamic torque acting on the rotor. It is more efficient than the Savonius configuration, a drag-type design where the power is generated using momentum transfer. The main advantage of VAWTs over HAWTs is their compact design. The generator and drive train components are located close to the ground, which allows for easier installation, maintenance, and repair. Another advantage of VAWTs is that they are omnidirectional, which obviates the need to include expensive yaw control mechanisms in their design. However, this brings up issues related to self-starting. The ability of VAWTs to self-start depends on the wind conditions as well as on the airfoil designs employed. Studies have reported that a three-bladed H-type Darrieus rotor using a symmetric airfoil is able to self-start. Other work showed that significant atmospheric wind transients are required to complete the self-starting process for a fixed-blade Darrieus turbine when it is initially positioned in a dead-band region, defined as the region with tip-speed-ratio values that result in negative net energy produced per cycle. Self-starting remains an open issue for VAWTs, and an additional starting system is often required for successful operation. As wind power production demands grow, wind energy research and development need to be enhanced with high-precision methods and tools. These include time-dependent, full-scale, complex-geometry advanced computational simulations at large scale. Thus, computational analysis of wind turbines, including fluid–structure interaction (FSI) simulations at full scale, is important for accurate and reliable modeling, as well as for blade failure prediction and design optimization.
Due to the recent increased emphasis on renewable energy, and in particular wind energy, aerodynamics modeling and simulation of HAWTs in 3D has become a popular research activity. FSI modeling of HAWTs is less developed. Accurate and robust full-machine wind-turbine FSI simulations engender several significant challenges when it comes to modeling the aerodynamics. In the near-tip region of offshore wind turbine blades the flow Reynolds number is O(·), which results in fully turbulent, wall-bounded flow. In order to accurately predict the blade aerodynamic loads in this regime, the numerical formulation must be stable and sufficiently accurate in the presence of thin, transitional turbulent boundary layers.

Recently, several studies were reported showing validation at full scale against field-test data for medium-size turbines, and demonstrating feasibility for application to larger-size offshore wind-turbine designs. However, 3D aerodynamics and FSI modeling of VAWTs is lagging behind. The majority of the computations for VAWTs are reported in 2D, while a recent 3D simulation employed a quasi-static representation of the air flow instead of solving the time-dependent problem. The aerodynamics and FSI computational challenges in VAWTs differ from those in HAWTs due to the differences in their aerodynamic and structural design. Because the rotation axis is orthogonal to the wind direction, the wind-turbine blades experience rapid and large variations in the angle of attack, resulting in an air flow that is constantly switching from being fully attached to being fully separated, even under steady wind and rotor speeds (a kinematic sketch of this effect follows this passage). This, in turn, leads to high-frequency and high-amplitude variations in the aerodynamic torque acting on the rotor, requiring finer mesh resolution and smaller time-step size for accurate simulation. VAWT blades are typically long and slender by design. The ratio of chord length to blade height is very low, requiring finer mesh resolution also in the blade-height direction in order to avoid using high-aspect-ratio surface elements and to better capture turbulent fluctuations in the boundary layer. High-fidelity modeling of the underlying aerodynamics requires a numerical formulation that properly accounts for this flow unsteadiness and is valid for all flow regimes present. It is precisely this unsteady nature of the flow that creates significant challenges for the application of low-fidelity methods and tools to VAWTs. Another challenge is to represent how the turbulent flow features generated by the upstream blades affect the aerodynamics of the downstream blades. The VAWT simulation complexity is further increased when several VAWTs are operating in close proximity to one another. Due to their compact design, VAWTs are often placed in arrays with spacing that is a little over one diameter between the turbine towers. In [1], this type of placement was found beneficial for increased energy production. When FSI analysis of VAWTs is performed, the simulation complexity is further increased. The flexibility in VAWTs does not come from the blades, which are practically rigid, but rather from the tower itself and its connection to the rotor and ground. As a result, the main FSI challenge is to be able to simulate a spinning rotor that is mounted on a flexible tower.

To address these challenges, the FSI formulation should be robust, accurate, and efficient for the targeted class of problems. The FSI framework used in the current work was originally developed in [37, 38]. The aerodynamics formulation makes use of an FEM-based moving-mesh ALE-VMS technique combined with weakly enforced essential boundary conditions. The former acts as a turbulence model, while the latter relaxes the mesh size requirements in the boundary layer without sacrificing solution accuracy.
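The kinematic sketch referenced above shows why Darrieus blades see large angle-of-attack swings. For azimuth theta and tip-speed ratio lambda = omega*R/U_inf, the geometric angle of attack is alpha = atan2(sin(theta), lambda + cos(theta)); this idealization neglects induced flow and is not the document's simulation method.

# Sketch: geometric angle of attack of a Darrieus blade over one revolution,
# neglecting induced velocities. Lower tip-speed ratios produce larger
# angle-of-attack swings, driving the attach/separate cycling described above.
import math

for lam in (1.5, 3.0, 5.0):
    alphas = [
        math.degrees(math.atan2(math.sin(math.radians(th)),
                                lam + math.cos(math.radians(th))))
        for th in range(0, 360, 5)
    ]
    print(f"tip-speed ratio {lam}: alpha ranges over "
          f"[{min(alphas):6.1f}, {max(alphas):6.1f}] degrees")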

The software runs on Linux operating systems and has several functionalities that are useful to the user

As discussed in Tennekes and Lumley, planar mixing layers are self-preserving, so that mean velocity and turbulent stress profiles become invariant with respect to horizontal distance. Hence, such profiles can be expressed in terms of local length and velocity scales. The self-preservation hypothesis is consistent with numerous observations that turbulent mixing close to the canopy–atmosphere interface exhibits a number of universal characteristics. Among these universal characteristics are the emergence of friction velocity and canopy height as representative velocity and length scales. In addition to exerting drag on the airflow aloft, canopy foliage acts as a source for many passive, active, and reactive scalar entities such as heat, water vapor, ozone, and isoprene, resulting in a “scalar mixing layer.” A scalar mixing layer is a generalization of a thermal mixing layer, which is defined as a plane layer formed between two coflowing streams of the same longitudinal velocity but different temperatures. Unfortunately, theoretical and laboratory investigations demonstrate that pure thermal mixing layers are not self-preserving. In fact, laboratory measurements of grid-generated turbulence in a thermal mixing layer by LaRue and Libby and Ma and Warhaft illustrate that computations utilizing self-preservation assumptions overestimate the maximum measured heat flux by more than 25%. Hence, instabilities responsible for vertical momentum transport may not transport scalars in an analogous manner. The complexities of mean scalar concentration profiles within forested systems, the dissimilarity in scalar and momentum sources and sinks, the large atmospheric stability variation close to the canopy–atmosphere interface, and the complex foliage distribution with height all suggest a need to further investigate the applicability of Raupach et al.’s ML analogy to scalar mass transport.

The objective of this study is to investigate whether the characteristics of active turbulence, typically identified from vertical velocity statistics, can be extended to mass transport at the canopy–atmosphere interface. In Raupach et al., particular attention was devoted to eddy sizes responsible for the generation of coherency and spectral peaks in the vertical velocity. Here, eddy sizes responsible for cospectral peaks of scalar fluxes and their relationship to active turbulence are considered. Active turbulence is identified from an orthogonal wavelet decomposition that concentrates much of the vertical velocity energy in few wavelet coefficients. The remaining wavelet coefficients, associated with “inactive,” “wake,” and “fine-scale” turbulence, are thresholded using a Lorentz wavelet approach advanced by Vidakovic and Katul and Vidakovic (a sketch of this thresholding idea follows this passage). Wavelet spectra and cospectra are also used to investigate the characteristics of active turbulence for the two stands and for a wide range of atmospheric stability conditions. Much of the flow statistics derived by Raupach et al. in the time domain are also extended to the wavelet domain. Since canopy sublayer turbulence is intermittent in the time domain with defined spectral properties in the Fourier domain, orthonormal wavelet decompositions permit a simultaneous time–frequency investigation of both flow characteristics.

The use of utility All-Terrain Vehicles (ATVs) as working machines adds a heavy burden to the American public health system. According to data from the 2019 National Electronic Injury Surveillance System, over 95,000 emergency department (ED) visits were due to an ATV-related incident. Around 36.8% of those ED visits involved youth younger than 18 years old, and 15.3% of the incidents happened on farms or ranches. Indeed, using utility ATVs in the farm setting is extremely dangerous for youth; ATVs are one of the most frequently cited causes of incidents among farm youth.
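The wavelet-thresholding idea referenced above can be sketched as follows: decompose a velocity record into orthogonal wavelet coefficients, keep only the small fraction carrying most of the energy (an energy-based stand-in for the Lorentz thresholding of Vidakovic and Katul, whose exact criterion is not reproduced here), and reconstruct the "active" part. The synthetic signal and the 5% retention level are assumptions.

# Sketch: orthogonal wavelet decomposition of a velocity proxy, retention of
# the highest-energy coefficients, and reconstruction of the "active" part.
import numpy as np
import pywt

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 1024)
w = np.sin(2 * np.pi * 0.5 * t) + 0.4 * rng.standard_normal(t.size)  # velocity proxy

coeffs = pywt.wavedec(w, "db4", level=5)             # orthogonal decomposition
flat, slices = pywt.coeffs_to_array(coeffs)

# Retain the few coefficients carrying most of the energy; zero the rest
# (the "inactive," "wake," and fine-scale contributions in the text).
energy = flat**2
keep = energy >= np.quantile(energy, 0.95)           # assumed 5% retention
flat_thresholded = np.where(keep, flat, 0.0)

active = pywt.waverec(
    pywt.array_to_coeffs(flat_thresholded, slices, output_format="wavedec"),
    "db4")
print("retained coefficients:", int(keep.sum()), "of", flat.size)
print("energy captured: %.1f%%" % (100 * energy[keep].sum() / energy.sum()))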

ATVs have three or four low-pressure tires, a narrow wheelbase, and a high center of gravity. Due to safety concerns, the production of three-wheelers ceased in the United States in 1987. Three-wheelers were known to be even more prone to rollovers than four-wheeled ATVs. Utility ATVs and sport models have several design differences. Utility models have higher ground clearance, stronger torque for hauling and towing, rear and front racks for carrying loads or mounting equipment, a hitch to pull implements, and heavier weights. Accordingly, utility ATVs are more suitable and more commonly used for tasks in agricultural settings. Therefore, in this study, agricultural ATVs are defined as utility ATVs used on farms and ranches. Agricultural ATVs are heavy and fast, requiring complex maneuvering. Youth’s physical capabilities may not be sufficient to perform those complex maneuvers correctly. In fact, many studies have shown that youth are more vulnerable to injuries than adults because of their less developed physical capabilities and their psychological and behavioral characteristics, which likely affect their ability to safely operate agricultural vehicles. Furthermore, previous studies have shown that ATV-rider misfit is another important risk factor. Despite compelling evidence showing that utility ATVs are unsuitable for youth, the most popular guidelines for ATV-youth fit disregard the rider’s physical capabilities. Instead, those recommendations are based on the rider’s age, the vehicle’s maximum speed, the vehicle’s engine size, or a farm machinery training certificate. For instance, youth as young as 14 can operate utility ATVs while employed on non-family-owned farms if they receive training through an accredited farm machinery safety program, such as the National Safe Tractor and Machinery Operation Program (NSTMOP). The NSTMOP training includes tractor and ATV education, where students must pass a written knowledge exam and a functional skills test to receive a certificate.

Nevertheless, programs such as the NSTMOP lack appropriate coverage of specific ATV-related subjects, such as active riding and the physical match between ATVs and youth. If the ATV is not fitted to the rider, they will likely be unable to properly operate the ATV’s controls, which increases their chance of incidents and consequently may lead to injuries and fatalities. In addition, the traditional guidelines adopted to fit ATVs for youth are inconsistent in evaluating their preparedness to ride. The suggested fitting criteria are subject to variances in state law and lack scientifically based evidence. While some recommendations based upon the riders’ physical capabilities exist, the adoption of these recommendations has not gained attention because they are not comprehensive and lack quantitative and systematic data. Recommendations based on riders’ physical capabilities appear to provide a better foundation for determining whether the machine is suitable for the rider. Therefore, there is a need to evaluate youth-ATV fit based on the riders’ physical capabilities. Since 95% of all ATV-related fatalities involving youth between 1985 and 2009 included agricultural ATVs, the purpose of this study is to evaluate the mismatches between the operational requirements of utility ATVs and the anthropometric characteristics of youth. It has been hypothesized that youth are mainly involved in ATV incidents because they ride vehicles unfit for them. This study evaluated ergonomic inconsistencies between youth’s anthropometric measures and utility ATVs’ operational requirements. The ability of youth to safely operate ATVs was evaluated through computer simulations that comprised 11 fit criteria and male and female youth of varying ages and height percentiles operating 17 utility ATV models.

Youth-ATV fit was analyzed through virtual simulations and was carried out in five steps. First, 11 guidelines were identified for the fit of youth and ATVs. The second step consisted of identifying a database containing anthropometric measures of youth of various ages, genders, and height percentiles. The third step consisted of collecting the dimensions of 17 ATV models to create a three-dimensional representation of them. The fourth step consisted of using SAMMIE CAD and Matlab to evaluate whether the youth’s anthropometric measures conform to the guidelines identified in step one. Lastly, the results of the virtual simulations were validated in field tests with actual riders and ATVs.

The fit criteria provide movement-restraint thresholds that check whether the rider can safely reach all controls and perform active riding, which requires the operator to shift their center of gravity to maintain the vehicle’s stability, especially when turning or traveling on slopes. Maintaining a correct posture is essential because, otherwise, the rider’s ability to control the vehicle is compromised, which puts them and potential bystanders at risk. The reach criteria considered in this study were selected based on the recommendations of the following institutions: National 4-H Council, U.S. Consumer Product Safety Commission, Intermountain Primary Children’s Hospital, and the Farm and Ranch eXtension in Safety and Health Community of Practice. Disregarding overlaps, these guidelines consisted of 11 anthropometric measures of fit, which are presented in Table 1. Human mockups were developed in SAMMIE CAD.
This computer program allows users to create customized virtual humans based on eight anthropometric dimensions, as shown in Fig. 1a. In total, 54 youth mockups were created: a combination of two genders, nine ages, and three body-size percentiles in height. The age range was selected because most youth start operating farm machinery at 8 years old, and most ATV-related crashes occur with riders younger than 16 years old. Two adult mockups were also created to establish a baseline for comparisons.
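The mockup matrix itself is easy to reproduce. As an illustration only (the study built its mockups in SAMMIE CAD, and the percentile labels below are assumptions, since the text specifies only that three height percentiles were used), a short R sketch of the 54-combination design grid:

    # Enumerate the 54 youth mockups: 2 genders x 9 ages x 3 height percentiles
    mockups <- expand.grid(
      gender     = c("male", "female"),
      age        = 8:16,            # ages 8-16, per the selection rationale above
      percentile = c(5, 50, 95)     # assumed labels for the three height percentiles
    )
    nrow(mockups)                   # 54
    # Two adult mockups provide the baseline for comparison
    adults <- expand.grid(gender = c("male", "female"), age = "adult")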

The anthropometric measures used as input to SAMMIE CAD were retrieved from the database of Snyder et al., which includes measurements from 3,900 subjects from 2 to 18 years of age of both genders. The adopted anthropometric measures were based on the mean values of groups of subjects with the same age, gender, and height. One of the required inputs was not available in the database used for this study, so the missing input was computed from the available data: the seated shoulder height was calculated by subtracting the head and neck length from the seated height.

In total, 17 utility ATV models were evaluated. The selected models consisted of vehicles of varying engine sizes from the most common ATV manufacturers on U.S. farms. General descriptive variables such as manufacturer, model, series, engine capacity, drivetrain, transmission, and suspension type were recorded. ATV mockups were developed based on the spatial coordinates of selected ATV features. An initial attempt to record the spatial coordinates of ATV features used photogrammetry, a technique in which several pictures of an object are taken from various angles and then processed to create a 3-D model. Nevertheless, this technique proved inefficient, as initial trials were time-consuming and the results had unsatisfactory accuracy. A second attempt consisted of using a virtual reality (VR) tracking system. This alternative proved fast to implement with excellent accuracy; hence, this technique was selected and is presented in the following section.

The VR tracking system utilized in this experiment consisted of two controllers and two infrared laser emitter units (lighthouses). The system allows the user to move in 3-D space and use motion-tracked handheld controllers to interact with the environment. The system uses the lighthouses to shoot horizontal and vertical infrared laser sweeps that are detected by photodiodes distributed over the controller’s surface. The position and orientation of the controllers are calculated from the differences in time at which each photodiode is hit by the laser. By placing the controller over selected vertices of ATV features, it was possible to record their spatial coordinates, which allowed the development of the 3-D ATV mockups. A custom program was developed to calibrate the system and to log and manipulate data. This program was initially retrieved from Kreylos and then modified to meet the specific needs of the present study; examples of the added functionalities are a 3-D grid, which allows for real-time visualization of labeled points, and a measuring tool. A probe was custom-manufactured and attached to the controllers to ease the calibration process and data collection. The probe was made of metal and had a rounded tip, which made it wear-resistant and prevented it from damaging the ATVs. The measurements were collected inside a tent covered by a white rooftop, which reduced the interference of sunlight in the communication between the lighthouses and the photodiodes in the controllers. In total, 38 points were collected per ATV. The points were selected to efficiently represent all selected ATV controls, plus additional features used to assist the virtual simulations, such as the seat and the footrests. After filtering, the data were processed in SAMMIE CAD into a 3-D representation of the evaluated vehicle, as shown in Fig. 2.
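The sweep-timing principle lends itself to a compact illustration. The sketch below is a simplification with assumed parameters (a 60 Hz rotation and a synchronization pulse at angle zero are typical of such systems but are not specified in the text), not the tracking code used in the study: the hit time of a laser sweep maps linearly to a bearing angle, and bearings from two lighthouses at known positions triangulate a position.

    # Convert a sweep hit time to a bearing angle (assumed 60 Hz rotation)
    sweep_period <- 1 / 60                        # seconds per rotation
    angle_from_hit <- function(t_hit, t_sync) {
      2 * pi * ((t_hit - t_sync) %% sweep_period) / sweep_period
    }

    # Intersect the rays p1 + s*d1 and p2 + u*d2 cast from two lighthouses
    triangulate <- function(p1, theta1, p2, theta2) {
      d1 <- c(cos(theta1), sin(theta1))
      d2 <- c(cos(theta2), sin(theta2))
      s  <- solve(cbind(d1, -d2), p2 - p1)[1]
      p1 + s * d1
    }

    # Example: lighthouses 3 m apart, bearings of 45 and 135 degrees
    triangulate(c(0, 0), pi / 4, c(3, 0), 3 * pi / 4)   # -> (1.5, 1.5)

A full 3-D pose solver also uses the vertical sweeps and the known geometry of the photodiode array, but the timing-to-angle mapping above is the core of the method.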
ATV-rider fit was evaluated through SAMMIE CAD and Matlab. Fit criteria 4, 5, 6, 7, 8, 9, and 10 were evaluated in SAMMIE CAD because their assessment involved complex interactions between riders and ATVs, such as measuring the angle of the rider’s knee while riding.
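As an example of such an interaction, the knee-angle check reduces to the angle at the knee joint between the thigh and shank segments. The R sketch below illustrates the geometry only; it is not the SAMMIE CAD or Matlab routine used in the study, and the coordinates in the example are made up:

    # Knee flexion angle from 3-D joint coordinates (any consistent units)
    knee_angle <- function(hip, knee, ankle) {
      thigh <- hip - knee                         # vector from knee to hip
      shank <- ankle - knee                       # vector from knee to ankle
      cos_a <- sum(thigh * shank) /
               (sqrt(sum(thigh^2)) * sqrt(sum(shank^2)))
      acos(min(max(cos_a, -1), 1)) * 180 / pi     # clamp, then convert to degrees
    }

    # Example: a seated rider with a slightly open knee (~104 degrees)
    knee_angle(hip = c(0, 0, 50), knee = c(40, 0, 45), ankle = c(45, 0, 5))

A fit criterion of this kind would then be scored by checking whether the computed angle falls inside the recommended range for the given rider-ATV pairing.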

Cannabis testing regulation is strict compared to tobacco, another inhalable crop

Using the R software and the zip codes of active licensed testing labs and distributors, we generate a map that shows the geographical locations of licensed labs and distributors in mid-2019. We have no information about which labs served which distributors. However, we expect that labs are better able to compete for nearby distributors because they would have lower transportation time and cost and may be more likely to have closer business relationships. In order to estimate average transport costs from distributors to labs, we randomly assigned distributors located within a 160-mile radius to each lab. Based on 2019 data, this was the longest travel distance from a distributor to the nearest lab. This travel-distance radius ensures that each distributor in the sample is covered by at least one laboratory. Based on the annual number of samples that we estimate each lab is able to test, we estimate the share of total testing done by small, medium, and large labs. We then estimate the number of distributors per lab. In each of our 1,000 simulations, 70% of the 49 licensees with specific locations were randomly chosen to represent small-scale labs, 20% were randomly chosen to represent medium-scale labs, and 10% were randomly chosen to represent large-scale labs.

The minimum capital investment in testing equipment needed to satisfy regulations is substantial. We estimate that in small labs, capital investment in equipment is about $1.1 million; in medium-sized labs, about $1.8 million; and in large-scale labs, about $2.8 million. These capital costs, amortized over a 10-year time span with a 7.5% rate of depreciation and interest, represent less than 15% of total annual expenses. Annual operating costs range from $1.4 to $2.2 million for small labs, $2.7 to $3.7 million for medium-sized labs, and $6.2 to $8.1 million for large labs. Consumables are the largest share of total annual costs in large-scale labs, whereas labor is the largest share of costs in small-scale labs. In medium-scale labs, consumables and labor have about equal shares of annual costs. Different-sized labs differ in their capacity and efficiency.
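Before turning to capacity differences, note that the annualized equipment cost implied by the capital figures above follows from a standard capital recovery calculation. A minimal sketch in R, assuming the stated 7.5% rate and 10-year span are applied as an ordinary annuity (the paper’s exact amortization formula is not shown):

    # Annualized capital cost via the capital recovery factor
    annualized <- function(investment, rate = 0.075, years = 10) {
      investment * rate / (1 - (1 + rate)^(-years))
    }
    annualized(1.1e6)   # small lab:  ~$160,000 per year
    annualized(1.8e6)   # medium lab: ~$262,000 per year
    annualized(2.8e6)   # large lab:  ~$408,000 per year

Set against annual operating costs of $1.4 million and up, these annualized figures are consistent with the under-15% share of total expenses noted above.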

Large-scale labs test about four times as much cannabis per hour as medium labs, and more than 10 times as much as small labs. The cost advantage of large testing labs comes from more efficient use of inputs such as lab space, equipment, and labor. Table 6 summarizes the average estimated testing capacities, annual costs, and testing cost per sample for each of the three lab size categories. Costs of collection, handling, and transport also vary by lab size. As of April 2018, the longest distance between a lab and a distributor in California was about 156 miles. Fig 3 shows the cost of collection, handling, and transportation per sample for lab-distributor distances of less than 156 miles. As expected, the longer the distance, the higher the sampling cost. Large labs have relatively low sampling costs even at long distances. The highest sampling cost we assume for small labs is about $35 per sample, if the distributor is located 156 miles away. On average, the costs of collection, handling, and transportation represent a small share of total lab costs per sample. Fig 4 shows the distribution of full testing cost per sample from 1,000 Monte Carlo simulations assuming 49 labs. Variability of the cost per sample within small labs is high, with the highest and lowest costs within that group differing by $463. The difference between the highest and lowest costs in large labs is $88, with a lowest cost per sample of about $273. The average full cost per sample tested is about $313 for large labs, $537 for medium labs, and about $778 for small labs. Large cost differences per test and per batch document the large-scale economies and the differences in operational efficiency across labs of different sizes. The aggregate amount of cannabis flowing through licensed labs in 2019 remains small relative to the amounts anticipated in the future, which means that labs anticipating growth currently operate well below capacity.
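The structure of the Monte Carlo simulations behind Fig 4 can be sketched compactly. The draw below is illustrative only: it reuses the 70/20/10 size split over the 49 labs and the average per-sample costs reported above, and it adds nominal within-group noise as a stand-in for the cost variability that the paper models in detail:

    # One distribution of full cost per sample from 1,000 simulated markets
    set.seed(1)
    sims <- replicate(1000, {
      # ~70/20/10 split of the 49 licensed labs (34 small, 10 medium, 5 large)
      size <- sample(rep(c("small", "medium", "large"), times = c(34, 10, 5)))
      mean_cost <- c(small = 778, medium = 537, large = 313)[size]
      rnorm(49, mean = mean_cost, sd = 0.10 * mean_cost)   # assumed 10% noise
    })
    quantile(sims, c(0.05, 0.50, 0.95))   # spread of simulated cost per sample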

Substantial scale economies suggest that, as the market settles, the smallest labs must either expand to use their capital investment more fully, leave the industry, or provide some specialized services to distributors that are not accounted for in the analysis presented here. Simply put, the average cost differences shown in Table 6 and the simulated ranges displayed in Fig 4 should not be understood as a long-run equilibrium in the cannabis testing laboratory industry.

In 2018, the first year of mandatory testing enforcement, failure rates in California averaged about 5.6%, according to official data published by the California Bureau of Cannabis Control and posted publicly on its website. Failure rates for the first seven months of 2019, the second year of the testing regime, averaged 4.1%. We assume a 4% failure rate for the current market in California. By comparison, in Washington State in 2017, the second year after testing began, 8% of total samples failed one or more tests. The Colorado Marijuana Enforcement Division reported that during the first six months of 2018, 8.9% of batches of adult-use cannabis failed testing, with infused edibles and microbial tests for flower accounting for the most failures. Batch size significantly affects the per-pound testing cost of cannabis marketed, especially when the batch is smaller than 10 pounds. Fig 5 shows the testing cost per pound of cannabis marketed for flower batches of different sizes, using 0%, 4%, and 8% rejection rates. As rejection rates increase, the differences between the per-pound testing costs of different batch sizes decrease. For example, given a 0% rejection rate, the cost of testing per pound of cannabis marketed from a one-pound batch is about 27 times higher than that from a 48-pound batch; on the other hand, given an 8% rejection rate, the cost from a one-pound batch is only seven times higher than that from a 48-pound batch.
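The interaction between batch size and rejection rate in Fig 5 can be illustrated with a simplified formula. In the R sketch below, the per-batch testing fee is held fixed and failed batches are destroyed, so the value of rejected product is spread over the pounds that reach market. The fee and wholesale value are placeholder numbers, not the paper’s estimates, and in practice the testing cost itself likely rises somewhat with batch size, which is why the observed 0%-rejection ratio is 27x rather than the naive 48x this sketch produces:

    # Testing-regime cost per pound marketed for a given batch size and rejection rate
    cost_per_lb_marketed <- function(batch_lb, reject,
                                     test_cost = 500,    # assumed fee per batch ($)
                                     value_lb  = 800) {  # assumed wholesale value ($/lb)
      destroyed <- batch_lb * reject * value_lb   # value lost to failed batches
      marketed  <- batch_lb * (1 - reject)        # pounds reaching the market
      (test_cost + destroyed) / marketed
    }
    batches <- c(1, 8, 16, 48)
    sapply(c(0, 0.04, 0.08), function(r) cost_per_lb_marketed(batches, r))

Even in this toy version, the ratio between the one-pound and 48-pound batches shrinks sharply as the rejection rate rises, because the destroyed-product term grows in proportion to batch size while the testing fee does not.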

In this paper, we use a simulation model to estimate the costs per pound of mandatory cannabis testing in California. To do this, we make assumptions about the cost structure and estimate the testing capabilities of labs in three size categories, based on information collected from market participants across the supply chain. For each lab, we estimate the testing cost per sample and the lab’s share, based on testing capacity, of California’s overall testing supply. We then estimate a weighted average of the cost per sample and translate that value into the cost per pound of cannabis that reaches the market. We use data-based assumptions about expected rejection rates in the first and second rounds of testing, pre-testing, and the remediation or processing of samples that fail testing. Our simulations rely on information collected from several sources, including direct information from testing labs in California, price quotes from companies that supply testing equipment, interviews with cannabis testing experts, data on testing outcomes for cannabis and other agricultural products from California and other states, data on pesticide detection in California crops, and data on average wholesale cannabis batch sizes.

Costs needed to start a testing lab that meets California regulations depend on the scale of the lab. As lab scale rises, testing capacity rises faster than input costs, so average costs fall with scale. We find that a large lab has four times the total costs of a small lab but 10 times the testing capacity, in part because large labs are able to use their resources more efficiently. Testing cost per pound of cannabis marketed is particularly sensitive to batch size, especially for batch sizes under 10 pounds. Testing labs report that batch size varies widely. The maximum batch size allowed in California is 50 pounds, but many batches are smaller than 15 pounds. We assume an eight-pound average batch size in the 2019 California market, but we expect the average batch size to increase in the future as cultivators become larger and more efficient and take advantage of the opportunity to save on testing costs.

Testing itself is costly, but the losses inflicted by destroying cannabis that fails testing are a major component of overall costs. Low or zero tolerance levels for pesticide residues are the most demanding requirement and result in the greatest share of safety compliance testing failures. Cannabis standards are very tight compared to those for food products in California. A significant share of tested samples from California crops have pesticide residues that would exceed the tolerance levels established for California cannabis. Some foods that meet the pesticide tolerances established by the California EPA may be combined with dried cannabis flowers to generate processed cannabis products. Pesticide residues coming from the food inputs may generate pesticide detection levels above the tolerances set by cannabis law and regulation, even if the inputs are otherwise compliant as food products. Tobacco has no pesticide tolerance limits because it is considered an inedible crop used for recreational purposes. Cannabis, by contrast, has multiple pathways of intake, such as edibles, inhalables, and patches, and may also be used by people with health conditions who are seeking alternatives to traditional medicine. Some labs report that when a sample barely fails one test, they have a policy of re-testing that sample to reduce the probability of false positives.

Some labs have reported up to 10% variation in test results from the same sample, and some indicate that about 25% of samples need to be re-tested to be sure that results are accurate. Such concerns have been widely reported. In July 2018, some producers voluntarily recalled cannabis products after receiving inconsistent contaminant-residue results from different laboratories, and some California labs have been sanctioned by the Bureau of Cannabis Control for failing state audits on pesticide residue tests. A major issue for the legal, taxed, and licensed cannabis market is competition with cannabis marketed through the untaxed and unlicensed segment. Higher testing costs translate into higher prices in the licensed segment. Safety regulations and testing may improve the perceived safety and quality of cannabis in the licensed segment, thus adding value for some consumers. However, price-sensitive consumers move to the unlicensed segment when licensed cannabis gets too expensive. A useful avenue for further research is to investigate cannabis testing regulations and standards across states to assess the implications for consumer and community well-being and for competition with unlicensed cannabis. Compared with other agricultural and food industries, the licensed cannabis industry in California has relatively little data. Banking is still done in cash, and sources of government financial data are less available for cannabis than for other industries. As the licensed cannabis segment develops, we expect increased access to data on the market for testing services, including prices, quantities, and batch sizes. Data from tax authorities, the track-and-trace system, and the licensing system will then help clarify the costs and implications of mandatory cannabis testing.

Industrial hemp has been an agronomically important crop since 2700 BC in China. Today, it serves a purpose in a variety of industries, such as pharmaceuticals, nutraceuticals, textiles, composite materials, bio-fuels, foods, cosmetics, and hygiene products. Hemp is one of humanity’s earliest domesticated plants, going back to Neolithic times in parts of East Asia. Hemp is the non-psychoactive form of Cannabis, differentiated from marijuana only by having less than 0.3% tetrahydrocannabinol concentration in dry mass. In 1970, the Controlled Substances Act was passed in the United States, which stated that all Cannabis sativa, psychoactive or not, was a Schedule 1 drug with “high abuse potential with no accepted medical use; medications within this schedule may not be prescribed, dispensed, or administered”. The passage of the Controlled Substances Act forbade any individual from researching or growing Cannabis in any form, including hemp, and it was not until forty-eight years later, with the passage of the 2018 Farm Bill, that researchers and growers could again study and grow hemp. After 48 years of absence from the scientific literature, renewed interest in hemp as a crop with high agronomic value has stimulated significant research activity.