Food variables were used mainly as continuous variables and were z-score transformed for the analysis.
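A minimal pandas sketch of this transformation; the data frame, food groups, and values here are invented placeholders, not the study's data:

```python
import pandas as pd

# Hypothetical daily consumption (g/day) for three participants.
foods = pd.DataFrame({"vegetables": [150.0, 320.0, 80.0],
                      "red_meat":   [90.0, 40.0, 120.0]})

# z-score transform: center each food variable to mean 0 and scale to SD 1,
# so regression coefficients are comparable across foods.
z = (foods - foods.mean()) / foods.std()
print(z)
```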

They all but disappeared during the Dark Ages of Medieval Europe and were rediscovered in France in the 1700s. Sir William Gage introduced the gages to England in the 1720s and subsequently both lost the varietal labels and named them after himself. The trees are weak to moderate in vigor and extremely narrow and upright. At their tree-ripe perfection in late July and August, the gages feature a green, yellow, or golden skin and a sugary sweet taste with slight tangy undertones that is arguably the most intensely rich-tasting fruit on the planet. True green gage plums are hard to find but worth the search.

This species originated in China 2,000 years ago, was introduced to Japan in the 1600s, and was subsequently brought to the U.S. by horticulturists John Kelsey and Luther Burbank. Burbank used this stock to breed the Satsuma, the Santa Rosa plum, and countless other varieties that founded the California plum industry. The fruit is large and heart-shaped to conical. The skin color can range from golden yellow, orange-red, or blood red to purple and black. Flesh color usually reflects a variation on the skin color. The taste is slightly acid over sweet. They are best eaten fresh. The flesh is juicy and, unlike European plums, they are not freestone, two notable exceptions being Satsuma and its improvement, Mariposa. These two varieties also feature less acidity and thus can be dried, a la prune plums. Japanese plums bloom abundantly early in the season, and thus fruit earlier than European plums. They generally produce heavy crops; if even 1–2% of the blooms set fruit, thinning is required. They tolerate milder winters; that is to say, they bloom and set fruit with fewer chill hours than European plums. The trees tend to be vigorous, rambunctious growers, often exceeding 10 feet a year on standard rootstocks. They are very upright growers, with the exception of the Satsuma and Mariposa varieties, which again exhibit a prune plum-like growth habit. Their pollination needs are similar to European plums.

Domestica plums should be pruned hard to stimulate continued vegetative growth throughout their life. As with peaches, when a plum branch goes flat it weakens and produces smaller and smaller fruit.

Prune to an inward- or upward-facing bud to redirect flat growth upward. Japanese plums should rarely be stimulated via heading cuts once established. Heading causes multiple narrow-angled, excessively vigorous regrowths. Pruning at maturation devolves to the occasional thinning cut and the renewal of the brushy lateral fruit-bearing growth. Japanese flower buds form in clusters of 3–5 blossoms that live for 3–5 years. In any given pruning session, 20% of these laterals should be stubbed back to 1–3 buds and regrown. They will fruit in the second year after renewal. Thinning for Japanese and European plums should be one fruit to a cluster every 4–6 inches. Oversetting results in a nutrient sink that inhibits bloom and fruiting the next year. As with peaches, they can and probably should be rescaffolded periodically.

The current way we produce and consume food threatens both human health and environmental sustainability. In 2019, the EAT-Lancet Commission launched the planetary health diet, a global reference diet focused on a healthy diet produced in a sustainable way. The main objective of this diet is to increase the consumption of plant-based foods including vegetables, fruits, legumes, whole grains, and nuts while reducing the consumption of animal-sourced foods such as red and processed meat and dairy products. Another rapidly expanding area of research is understanding the complex relationships between diet, the gut microbiome, and human health. The human gut microbiota refers to a complex community of trillions of different micro-organisms residing in the human gut. Diet is considered one of the most important factors influencing the composition and function of the gut microbiome, and thus determining its metabolic outputs, which may play a role in human health and disease. Consequently, many diet-associated conditions have been associated with the gut microbiome, including obesity and several chronic diseases such as type 2 diabetes and cardiovascular diseases. Thus, it is important to examine the specific roles of different food groups on the gut microbiome.

The associations of plant-based foods, red and processed meat, and dairy consumption with the gut microbiome have not been extensively examined. Controlled small-scale human trials, conducted mainly on individuals with obesity, have demonstrated shifts in microbiome diversity or composition and adverse changes in microbial metabolites on diets high in animal-based foods and low in carbohydrates over 5 days to 8 weeks. Larger-scale observational studies on healthy French adults and on middle-aged and elderly Chinese adults have reported inconsistent results on the associations between individual plant-based foods, red meat, or dairy and the gut microbiome. These studies, however, lacked data on actual consumption of the foods due to the dietary assessment method utilized, and on some foods central to the current study, such as dairy subgroups. This applies also to our previous study, consisting partly of the same study population, where we also used a frequency-based FPQ to examine diet quality-microbiome links. Furthermore, another Chinese study on middle-aged and elderly individuals utilized a more detailed dietary assessment method but focused on associations of vegetables or fruits with the gut microbiome. To address these limitations, we used dietary recalls, which capture a wider range of foods and provide detailed data on the quantitative consumption of these foods, allowing for a more comprehensive assessment of the associations of plant-based foods, red meat, or dairy with the gut microbiome. Furthermore, in contrast to our previous study, where we examined genus-level microbiome associations, we now examined species-level associations. The specific aims of the current study were to examine whether the consumption of plant-based foods, red and processed meat, or dairy is related to individual gut microbiome diversity, inter-individual differences in gut microbiome composition, and differences in relative abundances of bacterial species in Finnish adults. We also examined how the functional properties of the microbiome relate to these food groups.

We used data from the National FINDIET 2002 Study, a sub-study of the National FINRISK 2002 Study. The FINRISK Studies have been conducted by the Finnish Institute for Health and Welfare every five years from 1972 until 2012 to monitor risk factors for non-communicable diseases in Finnish adults. FINRISK 2002 comprised a self-administered health questionnaire and a health examination, involving a random sample from six large geographical areas in Finland drawn from the national population information system. Stool shallow shotgun sequencing was successfully performed for a total of 7231 participants, of which an additional 20 participants were excluded due to low read counts. One third of the FINRISK participants belonged to the FINDIET 2002 subsample, where the dietary habits of the participants were assessed by a 48-hour dietary recall. Of those invited, 2045 completed the recall and 2007 of the recalls were accepted. After excluding pregnant women and those who had a registered purchase of antibacterial medications for systemic use (ATC code J01) within six months prior to the baseline examination, the final data included 1273 participants with available stool samples and dietary recalls.

Food consumption was assessed with a 48-hour dietary recall. Dietary recalls were conducted during the health examination by trained nutritionists, who interviewed participants and recorded all foods and beverages consumed.
Portion sizes were estimated using commonly used food packaging, household measures, and a validated portion-size picture booklet. The mean daily energy intake and consumption of food groups were assessed using the in-house calculation software Finessi and the Finnish national food composition database maintained by the THL.

Food consumption was calculated at the ingredient level by decomposing mixed dishes into individual ingredients using standard recipes. The main food groups and their subgroups used in the study are presented in Table 1. As we were unable to analyze nuts and seeds separately due to their very low consumption, we included them within the vegetables subgroup, and for the same reason we also kept legumes within the vegetables. Similarly, due to its low consumption, we included ice cream within the other dairy products subgroup. The food variables were categorized based on the study-specific consumption quartiles for principal coordinates analysis and distance-based redundancy analysis.

All who participated in the health examination of FINRISK 2002 were asked to donate a stool sample. Those willing were given a stool sampling kit and instructions during the health examination and asked to collect the sample at home at their earliest convenience. Participants collected the samples into 50 ml Falcon tubes without a stabilizing solution and then sent them overnight under Finnish winter conditions to the study personnel, preferably on Monday, Tuesday, Wednesday, or Thursday, to ensure optimal preservation of the sample. The samples were immediately stored at −20 °C and were kept unthawed until sequencing in 2017. The samples were sequenced using whole-genome, untargeted shallow shotgun sequencing at the University of California San Diego. Normalization of the samples to 5-ng inputs was done using an Echo 550 acoustic liquid handling robot, and the samples were sequenced on an Illumina HiSeq 4000 for paired-end 150-bp reads. The average read count was approximately 900,000 reads per sample. A more detailed description of the protocols for DNA extraction and library preparation can be found elsewhere. Quality trimming of the sequences and removal of sequencing adapters were performed using Atropos. After removing human DNA reads by mapping them against the reference genome assembly GRCh38 using Bowtie2, the raw sequences were taxonomically annotated using the SHallow shOtGUN profiler (SHOGUN) v1.0.5 by comparing them against complete archaeal, bacterial, and viral genomes in the NCBI Reference Sequence Database (v82; U.S. National Library of Medicine, Bethesda, MD, USA; May 8, 2017). The classified microbial data were used in a compositional form, meaning their relative abundances were calculated by scaling their raw counts to the total sum of reads. For taxa analyses, the data were filtered to bacterial taxa and down to a core microbiome including any bacterial species with a minimum abundance of 0.01% and a prevalence of at least 1% across all samples, similar to Salosensaari et al.

Trained nurses at the study site measured weight and height using standardized international protocols, with participants wearing light clothing and no shoes. Height was measured to the nearest 0.1 cm using a wall-attached stadiometer and weight to the nearest 0.1 kg using a beam balance scale. BMI was calculated as weight divided by height squared (kg/m²). Participants' age was calculated based on the birth date and study date, and sex was self-reported. The self-administered questionnaire included questions on the participant's smoking history and current smoking habits. For the analysis, two groups were formed: current smokers and nonsmokers who had not smoked in the last 6 months.
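As a concrete illustration of the relative-abundance scaling and the core-microbiome filter described above (minimum 0.01% abundance, at least 1% prevalence), here is a minimal pandas sketch; the count table and its dimensions are hypothetical, not the study's data:

```python
import numpy as np
import pandas as pd

# Hypothetical raw read counts: rows = samples, columns = bacterial species.
rng = np.random.default_rng(0)
counts = pd.DataFrame(rng.poisson(5, size=(1000, 50)),
                      columns=[f"species_{i}" for i in range(50)])

# Relative abundances: scale each sample's counts to its total read sum.
rel = counts.div(counts.sum(axis=1), axis=0)

# Core microbiome: keep species reaching at least 0.01% relative abundance
# (0.0001 as a proportion) in at least 1% of all samples.
detected = rel >= 0.0001
core = rel.loc[:, detected.mean(axis=0) >= 0.01]
print(core.shape)
```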
Information on medicines that could potentially affect the microbiome, in addition to the excluded systemic antimicrobial medicines, was acquired from the prescription medicine purchase register maintained by the Social Insurance Institution of Finland. Participants were linked to the register through the unique personal identifier assigned to each Finnish citizen. In contrast to the assessment of systemic antimicrobial medication use, an individual was flagged as using these other drugs if he/she had at least 3 separate purchase events, including a purchase within 4 months prior to the baseline investigation.

The analyses were conducted jointly for men and women because the results were generally similar by sex. Characteristics of the study participants are reported as means with their standard deviations for continuous variables and as percentages for categorical variables. Alpha-diversity refers to the intra-individual diversity of the microbiome and was measured using the Shannon index. The associations between alpha-diversity and the main food groups and their subgroups were assessed using linear regression analysis. Beta-diversity refers to the inter-individual diversity of the gut microbiome and thus acts as a measure of compositional difference. It was measured using the Bray-Curtis dissimilarity score. Permutational multivariate analysis of variance was used to assess how much of the compositional variation in microbiomes between individuals was explained by the main food groups and their subgroups. Principal coordinates analysis was used to assess and visualize the clustering of microbiomes in the highest and the lowest consumption quartile of each main food group. PCoA was paired with the function "factorfit" from the vegan package to test whether the averages of the PCoA ordination scores of the highest and the lowest consumption quartiles of the main food groups differed significantly.
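The analyses above reference the R package vegan; purely as an illustrative analogue, the following Python sketch computes the same kinds of quantities with scikit-bio on simulated data: Shannon alpha-diversity, Bray-Curtis dissimilarities, a PERMANOVA against consumption quartiles, and PCoA axes. This is a sketch of the general workflow, not the study's code.

```python
import numpy as np
from skbio.diversity import alpha_diversity, beta_diversity
from skbio.stats.distance import permanova
from skbio.stats.ordination import pcoa

# Hypothetical species-count matrix and stand-in consumption quartiles.
rng = np.random.default_rng(1)
counts = rng.poisson(20, size=(60, 30))
ids = [f"s{i}" for i in range(60)]
quartile = rng.integers(0, 4, size=60).astype(str)

shannon = alpha_diversity("shannon", counts, ids)   # alpha-diversity per sample
bc = beta_diversity("braycurtis", counts, ids)      # Bray-Curtis distance matrix
print(permanova(bc, quartile, permutations=999))    # compositional variation test
axes = pcoa(bc).samples[["PC1", "PC2"]]             # first two PCoA axes
```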

It is also widely used agriculturally in other parts of the world, such as the UK and the USA.

The asymmetry of information detected in this study might deter some producers from participating in the program if they perceive that producers who do not share PRRS status have competitive advantages over those who do share information. This study used data from a 24-month period to evaluate a disease that has been present in the US for more than 30 years. The decreasing PRRS trend shown here is consistent with a similar trend reported at the national level; however, our discussion must clearly consider that any short-term trend can be affected by a random shock and may not reflect the true long-term trend of the disease. For example, the decrease in PRRS incidence may be explained, at least in part, by the emergence of PED in 2013, which had a major impact on the US swine industry. Because producers and veterinarians became more worried about certain practices that might facilitate PED spread, increased bio-security might also have helped PRRS control. Another interesting finding was the negative correlation between the probability of sharing PRRS status and the occurrence of disease. This result may be explained, at least in part, by the expectation that negative premises may have been more willing to share PRRS status than positive premises, because of the differential in the perceived economic consequences of sharing between infected and non-infected premises. However, an alternative explanation for this finding is that producers may have recognized the value of increasing the PRRS-related information collected from neighbors and trading partners, which may have helped them to select their suppliers of inputs. Some have suggested that information obtained via the production chain can achieve desirable outputs more efficiently than using laboratory analyses to detect system failure, arguing, for example, that efforts to reduce information asymmetries and ensure product quality have led to vertical integration and extended production contracts in animal food systems. Vertical integration allows perfect information throughout the production chain, which may be more efficient than laboratory tests at identifying disease prevalence, which then must be translated into control efforts.

Similarly, certain attributes of the voluntary cooperation in RCPs, in terms of accessibility to data and information among producers located in a specific area, may resemble those observed in vertically integrated systems, which may be useful and complementary to regular surveillance and strategy selection to control PRRS. Results in this study also reveal that the higher the density of medium and large premises at the county level, the higher the probability of occurrence of PRRS for farms in those counties. This result suggests that disease spread is positively related to the density of production premises and/or the number of swine. Consequently, one may hypothesize that, given that larger premises have higher odds of being infected, they could act as sources of infection for secondary cases, given the larger susceptible population and associated management factors. The final model included premises nested within counties, which indicates a certain heterogeneity among counties in terms of the relative importance of the variables assessed here. Indeed, despite the declining incidence of PRRS over the study period, spatial, temporal, and spatial-temporal aggregations were detected throughout the study period. Additionally, results from this study are consistent with a combination of direct and indirect mechanisms of spread, as suggested by the persistence of spatial aggregation in some areas, with extensions into other regions that may result as a consequence of between-premises movements. In conclusion, this study has established a systematic approach to quantify the effect of RCPs on PRRS control. There is evidence that RCP-N212 has attracted a growing proportion of producers to share disease status information, suggesting a rising awareness that sharing information can lead to more effective disease control. By evaluating the effect of participation on the occurrence of PRRS, the value of sharing information among producers may be demonstrated, in turn justifying the existence of RCPs. These results provide useful indicators regarding the evolution of RCP-N212, and, ultimately, support for disease control in Minnesota. Furthermore, the methods presented here may be applied to measure progress in other RCPs.

Indoor farming offers a sustainable solution to the scarcity of fresh produce.

The concept of indoor farming has been explored for millennia. The idea of shielding plants from vacillating weather conditions by growing them inside a greenhouse was first implemented by agrarian communities around 30 CE. As time and technology progressed, full control over ventilation, air flow, growth medium, and light exposure became feasible. One of the first fully fledged controlled-environment research facilities began operation at North Carolina State University in 1968. Recent developments in the semiconductor industry have made it cost effective for light-emitting diodes (LEDs), which can provide the specific wavelengths of light needed for photosynthesis, to supplant broad-spectrum sunlight. This has given rise to "plant factories" and indoor farming "pods": warehouses and shipping containers outfitted with LEDs, hydroponics, cameras, and advanced sensors, which are nominally more efficient than traditional farms and greenhouses.

Shade is not a concern for indoor farms as, with optimal optical design, all plants can receive the requisite light for photosynthesis. As such, many indoor farms organize hydroponically grown plants in either 1) vertically stacked shelves or 2) adjacent panels hanging from the ceiling. This dense packing of plants facilitates more growth on less land. To put this into perspective, a 30-story vertical indoor farm with a 5-acre base could produce a crop yield equivalent to 2400 acres of a traditional farm.

Different plants require different wavelengths of light for optimal photosynthesis and optimal growth of features such as stem length and leaf thickness. LEDs are perfectly suited to supply plants with the specific, optimal combination of colors of light they need, because LEDs emit a narrow band of wavelengths depending on the bandgap of their constituent semiconductor material. Thus, indoor pod farms with "walls" of LED light strips reduce energy waste by maximizing the amount of power absorbed by the plants and minimizing the power lost to excess heat. Beams of light supplied by LEDs can be collimated by the addition of optical lenses, further reducing energy lost to non-plant targets and reducing the distance between the plants and LEDs. Additionally, exponential development in the semiconductor industry over the past three decades has made LEDs smaller, faster-actuating, more efficient, and more durable than traditional incandescent light sources, rendering LEDs economically viable for indoor farming applications.

Hydroponics is a method of growing plants in a nutrient-rich solution without the need for soil. Depending on the type of crop, this method can be executed via drip irrigation, aeroponics, nutrient film technique, ebb and flow, aquaponics, or deep-water culture. Although physically very different in the method of delivery, most of these techniques share the same fundamentals: a nutrient solution is pumped to the plants via a specialized delivery system and then circulated back to a reservoir where the nutrients are replenished. We refer the reader to for a comprehensive description of these techniques. Hydroponic methods use, on average, 10% of the water utilized in traditional farming, as nutrients are delivered directly to the plant roots, minimizing water lost to evaporation.
This mode of growing plants can be easily automated and, combined with the fact that the lack of soil protects against pests, vertical growing systems make it easier to cater to the unique physiological needs of the plants while eliminating the need for pesticides and other chemicals. These systems, however, have high start-up costs and thus present a dire need for high operational efficiency to recoup these costs.

The practice of vertical indoor farming in shipping container "pods," enabled by LED light sources and hydroponic nutrient delivery, is still nascent, and little work has been done to quickly and efficiently model and optimize such systems. A digital-twin of an indoor pod farm can be safely and cheaply manipulated without jeopardizing the system or the plants' well-being, making it an exceedingly quick, inexpensive, and useful approach for identifying optimal operational parameters.
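As a highly simplified illustration of the kind of computation such a digital-twin performs (the ray-tracing approach is detailed below), this Python sketch traces rays from one hypothetical LED to a flat plant tray and tallies the power landing on it; the geometry, ray count, and power values are all invented:

```python
import numpy as np

# Toy Monte Carlo ray trace: one LED point source above a 1 m x 1 m plant
# tray in the plane z = 0; all numbers are illustrative only.
rng = np.random.default_rng(2)
n_rays = 100_000
total_power_w = 50.0
led = np.array([0.5, 0.5, 1.0])                 # LED position in meters

# Sample downward ray directions over the lower hemisphere.
phi = rng.uniform(0.0, 2.0 * np.pi, n_rays)
cos_t = 1.0 - rng.random(n_rays)                # cos(theta) in (0, 1]
sin_t = np.sqrt(1.0 - cos_t**2)
dirs = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), -cos_t], axis=1)

# Intersect each ray with the canopy plane z = 0: t = z_led / cos(theta).
t = led[2] / cos_t
hits = led[:2] + dirs[:, :2] * t[:, None]

# Power delivered to the tray; everything else misses the plants.
on_tray = (hits >= 0.0).all(axis=1) & (hits <= 1.0).all(axis=1)
print(f"fraction absorbed: {on_tray.mean():.3f}")
print(f"power absorbed:    {on_tray.mean() * total_power_w:.1f} W")
```

In a full framework like the one described next, an optimizer would adjust the LED positions and emission characteristics to maximize this absorbed fraction.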

The indoor farming pod is a complex system with a multitude of physical phenomena including air flow, light propagation, and energy transfer. Several digital-twin frameworks have been developed to capture the physics of light propagation in greenhouse, agrophotovoltaic, and food decontamination applications using ray tracing techniques, and to capture and optimize the physics of energy flow and air flow. Ray tracing techniques decompose light into rays whose interactions with surfaces are quickly geometrically traced, facilitating fast computation of a large number of interactions between rays and surfaces and optimization of the surface shape for maximum absorption/reflection. Digital-twins have scarcely been employed in optimizing agricultural systems, and they are even more rarely implemented in indoor farming pods. Two such implementations were carried out by Randolph et al. and Sambor et al. to optimize the energy consumption of an off-grid indoor farming pod and to determine the optimal operation time for each component of the system. These implementations, however, do not allow for manipulation of the orientation and/or shape of the system's components for maximum operational efficiency. In [29], computational fluid dynamics methods were utilized to model the air flow inside an indoor farming pod, but such methods have a prohibitively high computational cost, especially when running various configurations and performing optimization. Thus, an easily manipulated, computationally inexpensive model that accurately captures the system's physics is desired.

Indoor farming is a promising mode of next-generation agriculture offering numerous benefits such as year-round crop cultivation, reduced transportation costs, and the enablement of urban farms. However, these systems still face challenges related to energy consumption, and there has been limited quantitative analysis of their overall efficiency. To fill this gap and promote innovative design, we introduce a cost-effective digital-twin to analyze the optical properties of an indoor farming pod using a ray-tracing model. We utilize a genomic optimization scheme to identify the optimal LED geometric configurations and emission characteristics toward maximizing the energy absorbed by the constituent plants. The proposed digital-twin and optimization framework serves as a foundation for a physics-driven approach to optimizing energy flow and paves the way for more sustainable indoor farming practices. To adapt the framework to other indoor farming configurations, we can adjust design objectives via cost function design and incorporate constraints via parameter search bounds. The framework could also be extended to include models for water usage or crop-specific reactions to different chemicals/pesticides, thereby enhancing the accuracy of the digital-twin. Extending the framework to include wavelength-specific power flow could further improve predictions of energy efficiency and crop yield by providing each plant with its ideal lighting conditions. Such refined models can serve as valuable tools for testing and estimating how a particular design would perform in the real world, enabling farmers to make informed decisions and effectively optimize their own indoor farming setups.

The growth of the dairy industry has been accompanied by an increased volume of waste emissions that mainly consist of fecal and farm matrices.
Manure contains a large number of undigested organic nutrients such as sugars, amino acids, nucleic acids, and vitamins. It is thus a valuable source of organic matter, nitrogen, phosphorus, potassium, and some micronutrients. Animal manure has therefore been used on farms as one of the most important and valuable sources of nutrients to improve soil fertility and increase agricultural crop production. Some farms recycle the solids in manure for use as bedding material, which can have advantages for farmers in terms of availability, convenience, and cost effectiveness. Research has demonstrated that the use of organic manure, whether alone or in combination with inorganic fertilizers, can have positive effects on crop yield and can improve soil quality. In China, manure has been used in many agricultural regions of the country for centuries. The application of manure to agricultural land is an environmentally friendly method of waste disposal. However, in addition to organic matter, manure also contains many harmful gases, heavy metals, parasite eggs, antibiotic resistance genes, and a variety of intestinal microflora and opportunistic pathogens, as well as antimicrobial-resistant bacteria. Pathogenic and antimicrobial-resistant microorganisms contained in the manure can lead to the contamination of edible agricultural products. Thus, if these manures are used as fertilizer without treatment or are not treated properly, dangerous microorganisms could be transferred from animals to humans, posing a threat to the environment and to human health. In addition, bacterial contamination of dairy farm environments can cause disease or spoilage of milk and its secondary products.

Resistance to the same drugs was most frequently observed in isolates from FH door swabs and workers' outwear swabs.

A free-range environment on poultry farms, a natural diet, and strict bio-security measures might explain the lower prevalence of Salmonella in organic poultry farms. However, the absence of Salmonella in farm environments in the present study might be a result of the study design, as the inclusion of other variables might play a crucial role and could result in different outcomes. Additionally, good farm management, bio-security measures, and external factors such as proximity to other farms might be potential confounders behind the failure to recover Salmonella in the present study.

Generic E. coli was prevalent in 50 out of 70 collected samples, being detected across all sample types except LH cage and egg swab samples. The populations of generic E. coli and aerobic bacteria counts are shown in Table 2.1. Both E. coli and aerobic bacteria counts were lower in outwear and boots swabs compared to LH and FH fecal samples. However, both E. coli and aerobic bacteria counts for the FH door swab were not significantly different from outwear and boots swab counts. Generic E. coli counts were higher in the fecal samples collected from LH and FH compared to FH door swab samples. Similarly, aerobic bacteria counts were highest in fecal samples from both LH and FH, followed by FH door swab samples and LH cage swabs, while egg swab samples had the lowest counts. Previous studies on E. coli prevalence in poultry farm environmental samples and workers' samples varied. As mentioned before, the diverse prevalence of recovery of E. coli from farm environments depends on factors such as geographical location, scale of farms, antimicrobial usage, and cleaning and sanitation practices on farms. Counts of E. coli in fecal samples in the present study were similar to the findings of previous studies.

Prior studies reported contamination of egg surfaces with E. coli. In the present study, the LH was cleaned frequently, which might be the reason for the lower bacterial load in the LH and possibly explains why E. coli was not discovered in cage and egg swabs. Isolates from the FH door swab were resistant to 14 drugs, FH fecal sample isolates were resistant to 8 drugs, LH fecal sample isolates were resistant to 5 drugs, outwear swab isolates were resistant to 4 drugs, and boots swab isolates were resistant to 2 drugs. The E. coli isolates from all types of samples were susceptible to amikacin, piperacillin/tazobactam constant 4, ticarcillin/clavulanic acid constant 2, ceftazidime, gatifloxacin, aztreonam, ciprofloxacin, imipenem, and piperacillin. The highest phenotypic resistance was observed for ampicillin, followed by nitrofurantoin and cefoxitin. A full antibiogram pattern of the antimicrobial susceptibility testing is presented in Table 2. Isolates of E. coli from FH door swabs and workers' outwear swabs were resistant to the same drugs, such as ampicillin, nitrofurantoin, cefuroxime, and trimethoprim/sulfamethoxazole. Isolates from all the environmental sample types and one isolate from boot swab samples were resistant to cefoxitin. All the environmental and worker sample types were resistant to ampicillin. Generic E. coli isolates from the FH door swab had a higher prevalence of resistance to at least one drug compared to isolates from the boot swabs. However, there was no difference in the occurrence of resistance to at least one drug between the isolates from FH feces, LH feces, and outwear swabs. Thirty-five percent of all the tested generic E. coli isolates were resistant to at least one drug, nine percent to two drugs, and six percent of isolates were resistant to three or more antimicrobial drugs. The most prevalent non-multidrug-resistant pattern was AMP-NIT. Four isolates from FH door swab samples were MDR, as were one isolate from LH fecal samples and one from an outwear sample.

The present study observed the highest resistance in E. coli isolates for ampicillin and nitrofurantoin. However, these resistance rates are not considered high, since in the medical community resistance rates above 20-30% are considered the level of resistance in bacteria that raises concern for public health. These antimicrobials are essential for veterinary and human medicine. The resistance prevalence for ampicillin was lower compared to other related previous studies. In most of these studies, the high resistance of E. coli to ampicillin was due to increased use of this prescription drug for treatment, and ampicillin is a commonly prescribed antimicrobial to treat a wide range of infections worldwide. Besides, factors such as the horizontal transfer of resistance genes from other bacterial species to E. coli and the production of enzymes that degrade or modify ampicillin might be possible reasons for the increased resistance to ampicillin in E. coli. Resistance rates of E. coli to nitrofurantoin in the present study were consistent with findings of previous occupational exposure-related studies in poultry farming. Nitrofurantoin is not as widely used as ampicillin, being used mainly to treat urinary tract infections. Therefore, the low prevalence of antimicrobial resistance in E. coli in the present study and previous studies is an expected outcome. Our results showed a high prevalence of AMR in E. coli isolates from FH door swabs and fecal samples, suggesting that the FH environment might be a potential reservoir of ARB or ARG and a route of exposure. Moreover, isolates from LH fecal samples and FH samples shared similar antimicrobial resistance patterns with workers' outwear and boots sample isolates. Previous studies on occupational exposure of poultry farm workers to AMR concluded that ARB could be transmitted from farm environments to workers. These previous studies compared antimicrobial resistance patterns in E. coli from environmental samples to workers' urine or stool samples and found that resistance patterns were similar. Additionally, some studies found ARB or resistance genes in farm dust or hand-wash water on farms.

A study conducted in Tunisia reported that antimicrobial-resistant Campylobacter was found in 3% of chicken farm workers' boot samples, but the researchers did not recover the bacteria from workers' outwear. To the best of our knowledge, the present study is the first to characterize antimicrobial resistance patterns of E. coli in a poultry facility environment and compare these patterns with farm workers' outwear and footwear. The present study shows the importance of using personal protective equipment in reducing the spread of ARB or ARG from farm environments to workers. Additionally, the high prevalence of antimicrobial-resistant E. coli on door handles could suggest a potential risk of AMR transmission to farm workers. We noticed that workers did not wear gloves; because workers might touch their face or mouth after touching the door handles, this might expose them to ARB. Therefore, wearing gloves and frequently sanitizing door handles are important measures to minimize the risk of transmission of ARB or ARG, as bio-security measures have been proven to play a crucial role in minimizing the transmission of pathogens on farms. Different sample matrices yield different prevalence outcomes. For example, fecal samples might have higher bacteria prevalence compared to surface swab samples, affecting overall prevalence significantly in a study. Additionally, in the present study, birds and workers did not receive antibiotics before and during the sample collection period, and consequently, overall resistance prevalence was low; our results support the findings of Tang et al., who systematically reviewed 181 studies and found that restriction of antimicrobials in food-producing animals is associated with reduced ARB in these animals. However, previous research has shown that ARB can still be present and transmitted in facilities with no antimicrobial drug use, and our study found that the poultry facility might be a reservoir of AMR to which workers may be exposed. The present study cannot imitate intensive poultry production, where antibiotics are usually used to prevent and control disease and to treat birds. However, antibiotic usage in food animals in Western countries, including the U.S., has changed over the years. Implementation of the Veterinary Feed Directive in the U.S. restricted certain antibiotics, and medically necessary antibiotics are allowed only with veterinary oversight and prescription. Additionally, there might be differences in population density and management practices between small-scale and intensive large-scale poultry production. In the U.S., a surveillance program, NARMS, was established in 1996 to monitor antimicrobial resistance in hospitals, retail meats, and food animals. However, there is currently a lack of systematic surveillance of AMR in farmworkers. Additionally, there is a lack of awareness or education among the public and policymakers about the risks of AMR affecting farmworkers in livestock production. Moreover, a large body of previous and current AMR studies and regulatory efforts has mainly focused on food safety from farm to retail and has considered pathogens a food safety issue.

The above-mentioned facts might explain the scarcity of research or data related to occupational exposure of animal farmworkers in the U.S. Therefore, more research is needed to identify possible routes of ARB from the farm environment to workers in order to raise awareness among farmworkers and producers and to avoid the risk of occupational exposure to AMR.

The development of antibiotics in the 20th century was a groundbreaking advancement in medicine and one of the most significant advances in modern science. However, after a few decades, this great achievement was compromised by the emergence and spread of antimicrobial resistance. AMR is the ability of microorganisms to protect themselves from the effects of antimicrobial agents via different mechanisms such as enzymatic inactivation, alteration of target sites, efflux pumps, reduced intake, horizontal gene transfer, and formation of biofilms. Nowadays, AMR is a major global threat to public health. For example, AMR causes 2.6 million infections and 44,000 deaths each year in the U.S., while costing around 20 billion USD in healthcare and 35 billion USD in lost productivity annually. Overuse and misuse of antimicrobials in human medicine and food animal production have been considered major contributors to the emergence of AMR around the globe. Moreover, the World Health Organization has acknowledged the usage of antibiotics in food animal production as one of the leading causes of the development and spread of AMR. Many studies have shown the association between the usage of antibiotics in food animal production and the emergence of antimicrobial-resistant bacteria. ARB and their genetic determinants can be transmitted from food animal production to humans via various pathways such as direct contact with animals, environmental and air routes, cross-contamination, water and the food chain, and global trade. Among these transmission pathways, the food chain is considered critical. Previous studies have documented that, among animal food products, meat is a major reservoir of ARB, and AMR in enteric bacteria such as Campylobacter, Escherichia coli, and Salmonella is a serious threat. Non-typhoidal Salmonella is a common and widespread pathogen that causes foodborne infections and outbreaks in the U.S. and around the world. AMR in Salmonella, especially multidrug resistance, has become a serious health concern. For example, Salmonella resistance to critically important drugs such as extended-spectrum cephalosporins, fluoroquinolones, and carbapenems limits treatment options and heightens the risk of morbidity and mortality among patients to an estimated 40%. California is a highly populous and demographically diverse state in the U.S. with almost 40 million people, which constitutes a major consumer market for retail meat products. Hence, research on the prevalence, distribution, and AMR patterns of major foodborne pathogens such as Salmonella is integral to ensuring food safety and public health. The perpetual tendency of bacterial populations to evolve over time highlights the importance of continuous monitoring of the trends of pathogens circulating in the food supply chain. Previously, we characterized AMR of Salmonella from retail meat collected in California in 2018. The aim of the current study was to characterize the AMR profiles of Salmonella isolated from retail meats in California using samples collected through NARMS routine surveillance in 2019.
The specific objectives of the study were to assess the prevalence and the phenotypic and genotypic AMR of various Salmonella serovars, to identify the resistance patterns of Salmonella, and to assess the correlation between Salmonella AMR phenotypes and genotypes.

Fresh retail meat samples were collected twice a month from January to December 2019 as a part of the NARMS Retail Meat Surveillance. A total of 849 samples were purchased from randomly selected grocery stores in Northern and Southern California.

A critical feature of this operation is that evaporation ponds are necessary to ultimately dispose of the accumulating salts.

These systems consist of perforated drain lines that are buried approximately 6 feet deep and spaced between 250 and 400 feet apart. The water flows into and through the network of tubes to a collection sump, where it is pumped to the surface. The amount of water that leaves the root zone is the same amount that enters the drain system; this water travels through pathways of variable length depending on the location where the water enters the saturated zone and the position of the drain line. As a result, water that originates far from the drain line has a considerably longer distance to travel than water that enters the system from directly over the drain. Jury developed a simple mathematical model to calculate chemical concentrations in the drain, which takes into account these variable travel times. This model was used recently to calculate the transition time for drainage water to reach steady state in the western San Joaquin Valley. Steady state is the point at which the concentration of salts in the water in the drain line remains the same over time. For large drain spacings typical of those found in this region, such as 400 feet, the model predicted many decades of transition time before steady state could be reached. Prior to this time, the drainage water concentrations are influenced strongly by pre-existing salinity in the saturated zone. A flushing process occurs over time, which explains why selenium continues to occur in drainage water even though essentially no selenium has been added in irrigation water applied to the soil surface. All irrigation waters contain some dissolved salts, which become more concentrated as water is removed by the crop via transpiration.

The ratio of drainage volume to irrigation volume is called the leaching fraction, which at steady state is approximately equal to the ratio of the irrigation water's salinity to the drainage water's salinity. In the western San Joaquin Valley, the salinity of irrigation water is low enough that, for typical leaching fractions, the salinity of water leaving the root zone is less than that of the resident groundwater being displaced. In general, more salts are removed in the drainage water than are applied with the irrigation water. As a result, some of the salts stored in groundwater, originating from geologic times, are mined through the drainage process. One option for disposing of drainage water is to reuse it on salt-tolerant crops. For this type of operation, however, the water percolating below the root zone would be very high in salinity because of the initial high salinity of the applied water and the concentrating effects of water removal by the plant. In this case, the concentration leaving the root zone would be higher than that of the resident groundwater, so salts would be stored in groundwater. The combination of irrigating salt-sensitive crops with good-quality irrigation water and then irrigating salt-tolerant crops with the resulting drainage water creates a cyclic process in which salts are first extracted from groundwater and subsequently recharged to it. However, since there is a continual input of salts to the valley through imported irrigation water, salts gradually accumulate in the system and eventually must be disposed of in some manner. We simulated the long-term consequences of a farm system that included irrigating salt-sensitive crops with good-quality water, followed by irrigating salt-tolerant crops with a blend of drainage and good waters, and eventually using an evaporation pond for ultimate disposal of the salts.
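A worked example of the leaching-fraction relation introduced at the start of this passage; this sketch uses the 3.2 feet applied / 1.2 feet deep-percolated depths quoted later in the text for the cotton scenario, while the irrigation-water EC of 1.0 dS/m is an assumed value:

```python
# Steady-state relation: LF = D/I ~ EC_irrigation / EC_drainage.
applied_ft, drained_ft = 3.2, 1.2
lf = drained_ft / applied_ft                  # leaching fraction ~ 0.375
ec_irrigation = 1.0                           # dS/m, assumed good-quality water
ec_drainage = ec_irrigation / lf              # ~ 2.7 dS/m leaving the root zone
print(f"LF = {lf:.3f}, drainage EC ~ {ec_drainage:.1f} dS/m")
```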

The imposed conditions, typical of normal farm operations, included planting only economic crops and restricting yield reductions to less than 10%.

Water leaving the root zone at a location a short distance from the drain reaches the place of removal in a relatively short time, whereas it may take considerably longer for water originating at greater horizontal distances from the drainage line to arrive. As a result, at any given time following the start of an irrigation operation on a field overlying saline groundwater, the drain will contain a mixture of water originating from the irrigation and resident groundwater. Using the Jury method, it is possible to calculate the fraction of water in each category removed by the drain as a function of time. The percentage of water collected in the drainage system that comes from the root zone can be expressed as a function of the dimensionless term T = 2Qt/(θS), where Q is deep percolation in acre-feet/acre/year, S is drain spacing in feet, θ is the saturated water content of the soil, and t is the time in years. This relationship depends upon the ratio of the drain spacing to the depth of an impermeable or restricting layer. Note that it can take a number of years before the water collected in the drain line originates entirely from the root zone. The time increases with the drain-line spacing and/or the depth to the impermeable layer. Because the Corcoran clay layer (the restricting layer) is very deep compared to the region's typical drain spacing, our computations were done for the curve where S/D = 1. The Jury et al. model makes a number of simplifying assumptions, including neglecting groundwater movement. However, the effect of moving groundwater, if any, would be to decrease the amount of drainage water in the drain compared to resident storage, delaying the time needed to reach steady state. Neglecting its effect is therefore conservative.
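As a sketch of the travel-time term defined above, with Q and S taken from the text (8.8 inches per year of drainage and 400-foot spacing) and an assumed saturated water content; the corresponding root-zone fraction must still be read off the published curve for S/D = 1, which is not reproduced here:

```python
# Dimensionless time term from the Jury model: T = 2*Q*t / (theta * S).
Q = 8.8 / 12.0            # deep percolation, acre-ft/acre/yr (= ft/yr)
theta = 0.40              # assumed saturated water content
S = 400.0                 # drain spacing, ft

for t_years in (1, 10, 50):
    T = 2.0 * Q * t_years / (theta * S)
    print(f"t = {t_years:>2} yr -> T = {T:.3f}")
```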

Other influences, such as the spatial variability of soil properties, are site specific; but, if random, they would not affect travel paths appreciably. The salinity of drainage waters in the valley typically creates an EC approximately equal to 10 dS/m. We assumed that the drainage water initially collected from the salt-sensitive crop would be at this concentration. Since an irrigation water salinity of 10 dS/m is very high for irrigating cotton, our simulation assumed that drainage water and good water would be combined to an average of 5 dS/m. For this analysis, we relied on Letey and Dinar for the relationships between cotton lint yield and the amount of water applied at various irrigation water salinities. Furthermore, the uniformity of irrigation affects the relationship between yield and the amount of applied water. We assumed a Christiansen's uniformity coefficient equal to 70, which is typical for a furrow irrigation system. We also assumed that the cotton irrigation would produce a lint yield equal to 92% of the maximum potential yield. These conditions specified that a total of 3.2 feet of water would be applied, resulting in 1.2 feet of deep percolation. We assumed two irrigation management cases for the salt-sensitive crop area. One produced 8.8 inches per year of drainage water, and the second imposed a high level of irrigation management that resulted in 4.4 inches per year of drainage water. The EC of the water collected in the drainage system was computed each year using the travel time information in figures 1 and 2. The EC of the drainage water from the salt-sensitive crop area would decrease with time because the salinity of the water leaving the root zone would be less than the resident groundwater's salinity. The EC of the water collected in the drainage system would dictate how much good-quality water should be used to achieve an average 5 dS/m irrigation water for cotton. Rainfall that did not evaporate during the winter would provide a fraction of the "good quality" water. As the EC of the drainage water decreased, the required amount of good water decreased. The EC of the drainage water from cotton was calculated yearly. In this case it increased with time, and relatively more good water was required to achieve the average 5 dS/m water. The analyses were done for various numbers of reuse cycles on cotton before the drainage water was disposed of in an evaporation pond.

We calculated the percentage of the farm that could be retained in salt-sensitive crops, such as tomatoes, for up to 100 years, when the drainage volume from the salt-sensitive area is 8.8 inches per year, at various numbers of times that it is recycled through cotton. Increasing the number of cycles decreases the fraction of the farm that can be retained in salt-sensitive crops. However, cycling the drainage water through cotton also decreases the percentage of the farm that must be devoted to evaporation ponds.
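The blending step described above can be written as a simple mass balance; in this sketch the drainage EC of 10 dS/m comes from the text, while the good-water EC of 0.5 dS/m and the linear mixing of EC are assumptions:

```python
# What fraction of drainage water can be mixed with good-quality water
# to reach the 5 dS/m average applied to cotton (linear EC mixing assumed).
ec_drain, ec_good, ec_target = 10.0, 0.5, 5.0
f_drain = (ec_target - ec_good) / (ec_drain - ec_good)
print(f"use {f_drain:.2f} drainage : {1.0 - f_drain:.2f} good water by volume")
```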

We assumed that evaporation from the ponds was 4 feet per year. Imposing good irrigation management that reduces the drainage volume to 4.4 inches per year increases the percentage of the farm that can be planted in salt-sensitive crops. When compared with drainage of 8.8 inches per year, decreasing the drainage volume from the salt-sensitive crop area also reduces the percentage of the farm that must be devoted to evaporation ponds. As expected, increasing the number of reuse cycles increases the percentage of the farm that must be devoted to cotton, at 8.8 inches per year of drainage water from the salt-sensitive crop area. If the drainage water from the salt-sensitive crops is reduced, the percentage of the farm devoted to cotton could also be reduced. Cotton is a common crop in the western San Joaquin Valley and is usually irrigated with good-quality water. In our scenarios, drainage water was partially used to irrigate the cotton. Therefore, we computed the difference in the amount of good water that would be used for cotton with and without blending in drainage water. The difference between these two numbers is considered the amount of fresh water saved by using the drainage water. On a 1,235-acre farm, the increase in acre-feet of fresh water saved by increasing the number of cycles is partially attributed to the fact that more land was also farmed to cotton. There are two benefits to recycling drainage water through cotton: the percentage of the farm required for evaporation ponds decreases, and fresh water is saved. The monetary value of the water saved depends upon whether the returns are simply associated with applying less fresh water or whether the fresh water can be marketed and sold to the urban sector, where its value is greater. If this is possible, cycling drainage water through cotton could produce significant revenue to offset some of the costs associated with total farm operations.

The results of this simulation indicate that a system which irrigates salt-sensitive crops with good-quality irrigation water and reuses the drainage water to partially supply water for a salt-tolerant crop, with eventual disposal into an evaporation pond, can be physically sustained for centuries in the western San Joaquin Valley. Management to reduce drainage volumes from the salt-sensitive crop area has a high payoff, in that it allows a greater percentage of the farm to be maintained in salt-sensitive crops and a lower proportion to be devoted to evaporation ponds. Although cotton was selected for this analysis, any salt-tolerant crop could be substituted. It is possible to utilize solar evaporator ponds, in which the drainage water is discharged at rates equal to or less than the evaporation rate. We know of no other option for disposing of salts on farmland while maintaining high crop productivity on the major part of the farm. Because of selenium in drainage water, evaporation ponds must also be managed to mitigate wildlife hazards. This may require netting the ponds and/or a combination of management and compensation habitat. The costs associated with the mitigation procedures depend on the extent to which they are required. Therefore, although this system is physically sustainable for centuries, its economic sustainability must still be evaluated.
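For a rough sense of scale, the pond fraction implied by these drainage volumes and the assumed 4 feet per year of pond evaporation can be computed directly; this back-of-the-envelope sketch ignores the reduction gained by first recycling drainage water through cotton:

```python
# Pond acres needed per cropped acre = annual drainage depth / pond evaporation.
pond_evaporation_ft = 4.0
for drainage_in_per_yr in (8.8, 4.4):
    ratio = (drainage_in_per_yr / 12.0) / pond_evaporation_ft
    print(f"{drainage_in_per_yr} in/yr -> {ratio:.3f} pond acres per cropped acre")
```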
In areas where the selenium concentration in drainage water is particularly high, the selenium concentration in the evaporation pond could exceed the level established for classifying water as a toxic waste, greatly increasing costs and perhaps making the system economically unfeasible. The criteria used to classify selenium-tainted water as toxic waste play a critical role in future opportunities to maintain agricultural production in a sustainable manner without out-of-valley disposal of drainage waters. The current soluble threshold limit concentration (STLC) for classifying selenium as toxic waste is 1.0 mg/L. This STLC was derived by multiplying the California drinking water standard for selenium by an environmental accumulation factor of 100. The California drinking water standard was 0.01 mg/L (10 µg/L) in the early 1980s when the hazardous waste limits were adopted.

Irrigation was also a key component of increasing the number of crops per year.

During pre-processing, this DN can be transformed, calibrated, and filtered to understand different biophysical parameters of that space. There are a number of different constraints that must be accounted for when determining soil water content. Depending on how the DN is transformed, we accounted for those constraints. The DN given by Sentinel-1 data can be transformed into alpha, beta, or gamma values. The team explored these different transformations to determine what was best for determining AWD adoption in the MRD. Because of the complexity of surface roughness and the rice crop itself, the team decided on a beta transformation, using beta values to create a wetness index that represents soil-water content. First, the β° transformation takes the DN and transforms it into reflectivity data. We then calibrate β° using a script embedded in the downloadable file via the ESA web interface for the beta calibration. Calibration for beta values mostly uses a constant unique to the acquisition date, time, and incidence angle, so the shift in distribution after calibration was not as significant as it might be for alpha. After calibration, the data was run through a median smoothing filter to remove speckling. These pre-processing steps created a multi-temporal dataset ready for analysis.

The test plots in An Giang, Dong Thap, and Ben Tre were used to train our model. We calculated the maximum and minimum values of the VV and VH bands for each soil type, with the minimum representing a "flooded" field with low reflectivity and the maximum representing a "dry" field with high reflectivity. Because the maximum and minimum VV and VH values were not significantly different between soil types, we used the maximum and minimum from An Giang province.
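A minimal sketch of these pre-processing steps: smoothing a calibrated backscatter tile with a median filter and then masking it with the paddy minimum/maximum (the exclusion step described next). The tile and threshold values here are placeholders, not the study's data:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(3)
vv = rng.uniform(-25.0, 0.0, size=(512, 512))   # fake calibrated VV band (dB)
vv_smooth = median_filter(vv, size=3)           # speckle suppression

vv_min, vv_max = -20.0, -6.0                    # assumed paddy range
paddy = (vv_smooth >= vv_min) & (vv_smooth <= vv_max)
vv_masked = np.where(paddy, vv_smooth, np.nan)  # water/urban/non-paddy -> NaN
print(f"retained pixels: {paddy.mean():.2%}")
```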

Using these values, we excluded VV and VH values below the minimum and above the maximum of those rice paddies across the entire dataset. This process, in essence, acted as a high-pass filter to pull out water features, urban features, and other non-paddy features.

SAR data shows promise as a tool to document adoption of water-saving practices. The results of this study support further research and development toward using remotely sensed data for documenting adoption of AWD. Specifically for alluvial soil types, as shown by the parallel patterns between the wetness index and in situ moisture meters, this approach shows promising links to real-world conditions. The satellite data used, Sentinel-1, had a moderately useful return time for analysis of AWD adoption in the MRD. Each satellite returns at 12-day intervals, meaning that the rate of return for the two satellites combined is every six days. While strict adherence to AWD technology would dictate that adoption means dry-down of rice fields every five days, a pattern of dry-down was still detectable every six to 12 days. Therefore, we believe that farmers are not necessarily utilizing a five-day dry-down period, but perhaps a longer period of time between irrigations. The SAR data analysis is much more cost effective than other efforts to measure the extent of AWD adoption. This research project conducted 285 household surveys in four delta provinces for approximately the same cost as developing and executing an analysis across the entire rice-growing Mekong River Delta. With further development, this cost would continue to decline to near zero, using free data.

There are several issues with the data availability and approach discussed in this paper. Estimating AWD adoption using remotely sensed data is a complex and nuanced issue. Not only was data availability an issue, but so was the complexity of SAR data itself. We suggest two avenues for correcting data availability and data analysis issues. Further analysis of SAR data with additional time-series datasets could yield more accurate results.
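The dry-down periodicity noted above can be pulled out of a per-pixel time series with a periodogram, anticipating the power spectral density test suggested below. In this sketch the 6-day sampling matches the combined Sentinel-1 revisit, while the 24-day wet/dry cycle and the noise level are invented:

```python
import numpy as np
from scipy.signal import periodogram

t = np.arange(0.0, 120.0, 6.0)                  # days, one sample every 6 days
rng = np.random.default_rng(4)
wetness = np.sin(2.0 * np.pi * t / 24.0) + 0.2 * rng.normal(size=t.size)

freqs, psd = periodogram(wetness, fs=1.0 / 6.0) # fs in samples per day
dominant = freqs[1:][np.argmax(psd[1:])]        # skip the DC bin
print(f"dominant period ~ {1.0 / dominant:.0f} days")
```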

The GIS team originally discussed using ASTER data. The rate of return for each satellite pass-over was determined to be insufficient to detect AWD in use. Therefore, these data were not used in the analysis of AWD adoption in the MRD. To understand the sinusoidal patterns of each cell in the raster datasets, we suggest a power spectral density test (a minimal sketch follows this passage). The average dry down, while a good estimate of an overall likelihood of some degree of AWD adoption, does not necessarily account for the complex periodicity of the SAR data across the delta over time. The change detection approach used for the SAR data analysis illustrates the average change over time for each cell in the tile that covers the MRD. However, a more nuanced approach would be to use a power spectral density analysis to understand the periodicity of the "dry down" signal and the power of that signal. In other words, cells with an increased likelihood of exhibiting an oscillating habit would score higher on the power spectral density analysis. Second, we suggest that researchers repeat the same analysis for all three seasons of rice to increase the accuracy of the power spectral density test. While the approach was able to detect patterns of "dry down" across the delta, longer sets of continuous data would allow a more robust signal analysis. If each growing season in a year could be analyzed continuously, especially over a series of years, this would enable detection of a more conclusive and reliable AWD signal. The story of Vietnam's post-war rural landscape is still unfolding. Vietnam spent 25 years mired in conflict to free itself of colonial ties, only to start a new battle: the "irrigation front". While American-led nation-building efforts in the 1950s included a heavy focus on technology transfer and adoption, it was in the post-war period after reunification that Vietnam built the bulk of its modern irrigation and infrastructure, shifting the landscape from a traditional and flexible adaptive environment to a technologically robust and controlled deltaic system. Beginning in the 1980s, a cascade of physical and social changes rippled across the Vietnamese Mekong River Delta. The canal and sluice gate system tamed the Mekong and the surrounding delta and, combined with adoption of Green Revolution philosophies of input-dependent production, allowed Vietnam to emerge as a rice production giant.
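Returning to the power spectral density suggestion above, the sketch below shows what such a test could look like for a single cell's time series using base R's spectrum(). The synthetic series stands in for a cell's wetness index sampled at the 6-day combined revisit; a cell that oscillates on a 6–12 day dry-down cycle should show a strong spectral peak at the corresponding frequency.

```r
# Sketch of a power spectral density test on one cell's backscatter series.
set.seed(1)
days <- seq(0, 180, by = 6)                       # 6-day Sentinel-1 revisit
ts_cell <- sin(2 * pi * days / 12) + rnorm(length(days), sd = 0.3)  # 12-day cycle + noise

psd <- spectrum(ts_cell, plot = FALSE)
peak_freq <- psd$freq[which.max(psd$spec)]        # dominant frequency (cycles/observation)
peak_period_days <- 6 / peak_freq                 # convert to days; ~12 here
```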

Between 1990 and 2010 Vietnam became one of the region's leading rice producers. The gross domestic product per capita rose from $98 in 1990 to $2,052 in 2014. Simultaneously, structural reforms in the late 1980s, known as Doi Moi, reorganized how land was distributed to individuals and privileged male ownership and inheritance. Farmers felt the impact of these larger environmental, economic, and social changes at the household level. The majority of farms in Vietnam consist of two or three small plots of land, totaling less than one hectare. Over time, these small plots shifted to solely rice production, instead of vegetables or livestock outside the wet season. Further, rice production is typically a man's job, leaving women with decreasing options for local agricultural labor jobs and pushing them to migrate to the city. Most farmers in the Mekong River Delta of Vietnam must keep pace with an increasing demand for rice, which now requires three rice crops per year, or what local experts call the triple rice crop. This primarily male burden will be referred to in this chapter as the "triple crop burden"1 – the socioeconomic pressure to produce three rice crops annually. Triple rice cropping is primarily accomplished using improved seeds, fertilizers, and pesticides, which we refer to as conventional intensification (CI) in this chapter. In the late 2000s, the Ministry of Agriculture and Rural Development began pushing a "sustainable intensification" (SI) approach, to keep steady the triple rice crop while making production practices more environmentally friendly. Research on the gendered implications of this policy is quite new, and consists primarily of white papers and project reports. There are three gaps in the literature on how gender influences farming practice adoption in Vietnam. First, because of Vietnam's recent success in rice production using CI methods, and the subsequent policies to reduce environmental impacts through SI practices, there are few studies outlining the current suite of CI and SI practices used by MRD farmers. Second, despite the net increase in rice farming over all other crops in the delta, there is a dearth of studies examining how variance in household livelihoods impacts gendered CI and SI practice adoption in Vietnam. Third, there is a lack of understanding of how the country's increasingly female migrant population, which sends remittances back to the farm, influences adoption of CI versus SI methods. The purpose of this research is to evaluate these three facets of the SI movement in Vietnam. We conducted a gender-disaggregated plot-level study in Tien Giang Province, in the heart of the triple-cropping delta, using household surveys to explore gendered differences in on-farm practices. We evaluated how farmers differentially adopt SI and CI methods, how household characteristics influence male and female adoption of these practices, and to what extent remittances influence men's and women's plots. This research adopts an agroecology framework to understand gendered livelihood strategies on the farm. The discipline of agroecology analyzes the sustainability and robustness of food systems. Through altering management practices and on-farm goals, agroecology is a distinct form of agricultural research both philosophically and materially. First, agroecology includes "facilitating" or "managing" processes rather than "controlling" them.
Second, it treats components of the system as interrelated rather than separate, sometimes called the "environmental complex". Third, it accounts for complex human interactions with the food system in addition to economic viability of the farm, such as equality and fair labor practices for on-farm labor and long-term environmental impacts of production processes.

To gather locally based data that links to larger geopolitical and agricultural processes, agroecology often uses the livelihoods framework, which aims to find institutional and organizational avenues to decrease poverty through access to resources and strategies or capabilities. In agroecology, livelihood research aims to understand how populations dependent on natural resources for their livelihoods are subject to wealth inequality, and how diversifying asset bundles to absorb shocks, including diversifying cropping systems or income streams, can improve household stability. In this research, we treat migration and access to markets as capabilities in this livelihood bundle, and explore the household capitals to understand gendered practice adoption. The chapter proceeds in the following fashion. The first part of the chapter outlines the history of intensification of rice production in the Mekong River Delta of Vietnam, as well as the more recent efforts to scale back input use while sustaining yields. We also discuss how decisions to adopt more or less conventional intensification practices can be a gendered phenomenon. The second part of the chapter states the methods for data collection and analysis in this study. The third part outlines results of the household surveys and multivariate probit analysis. The fourth section discusses the implications of this research for sustainable intensification in Vietnam, gendered resource control, and female inclusion in trainings and decision making. We conclude with several policy recommendations and future directions for research. The Green Revolution was an international effort beginning in the 1960s to increase food security through agricultural intensification. Intensification is the process of increasing productivity from the same unit area of land. The Green Revolution achieved intensification through three primary efforts: improved seed varieties, mechanization of labor, and chemical inputs. In Vietnam, the Green Revolution did not take hold as early as in other countries due to the war. In the Mekong River Delta of Vietnam, Green Revolution practices were ubiquitously adopted, including the extensive irrigation network and improved seed varieties. Irrigation and improved seeds slowly became the standard in what we conceive of as conventional intensification agriculture in this chapter, which allows three rice crops per year. The plough and combine harvester are the primary machinery used in crop production in the MRD. Chemical inputs, such as pesticides and fertilizers, become necessary in a system producing three crops per year due to nutrient depletion and increased pest pressure. The Green Revolution was touted as a solution to global food security issues, but criticized by many for its environmental and social costs. It has been criticized for being shortsighted and environmentally risky. With rapid urbanization and population growth, there was increased concern in the 1980s that expansion of agricultural land would compete with conservation and food security efforts. In other words, conversion of native ecosystems to agricultural land was ecologically unsustainable.

Marketable and cull fruit yields were weighed separately by experienced harvesters.

During the summer of 2001, romaine lettuce was grown on these fields by an organic vegetable grower and was harvested in early October 2001. Upon completion of the transition period and after discussions on experimental details with the grower, a replicated five-year rotation trial with 5 treatments was established in a 0.4 ha section of the ranch. The soil type of the fields is Santa Ynez fine sandy loam with 2–9% slopes. Water infiltration is very slow due to a claypan at 45–150 cm deep. Organic matter content in the topsoil is as low as 10 g kg−1 and the pH is 6.7–7.0. The baseline V. dahliae population in the topsoil was below the detection limit. With a typical Mediterranean climate, the majority of rainfall is concentrated in winter. The average annual precipitation at the Elkhorn Slough National Estuarine Research Reserve was 498 mm and mean daily temperature ranged from 3.6 to 21.7 °C during the experiment. Fresh strawberry fruit yield was measured for each cultivar once or twice per week from 40 designated plants during the harvest period. Mortality of strawberry plants was examined from May to the end of the season by counting living and dead plants in the middle two beds of each plot. During year 5, 20 transplants were sampled on November 23, 2005, and 4 whole plants of strawberry were dug out from the middle two beds of each plot on January 8, March 13, June 2, and October 5 in 2006. After washing out soil from roots with running water, samples were dried at 60 °C in a convection oven to constant weight. The dried samples were weighed for dry biomass, ground to pass through a 0.5-mm sieve with a Wiley mill, and analyzed for total C and total N content using a Vario Max CNS analyzer.

Twenty mature fruits from each plot were sampled on June 20, 2006, frozen at −20 °C, and lyophilized by a freeze dryer. Freeze-dried samples were passed through a 0.5-mm sieve and analyzed for total C and N content using the aforementioned method. Fresh moisture content of fruit was calculated from the difference between pre- and post-lyophilized weight of fruits. Cumulative total fruit yield, total N, and fresh moisture content of mature fruits were then used to estimate N uptake by fruit on June 2 and October 5, 2006, which was combined with whole-plant biomass-N to calculate plant biomass N plus harvested fruit N on these dates. Since the focus was on strawberry disease management, broccoli was managed as a cover crop for disease control in this experiment and broccoli floret yield was not recorded. About 1 kg of fresh broccoli and cover crop samples from each plot was brought to the lab and biomass and total C and N content were determined by the above-mentioned methods. Approximately 1 kg of the composts and organic fertilizers used in the trial was sampled, oven dried, ground, and analyzed for total C and total N in the same manner. Also, ~100 g subsamples were taken to gravimetrically determine moisture content by heating at 105 °C for 48 hours in a convection oven. Ten to twenty soil cores at 0–15 cm were sampled by soil probes at the middle two beds of each plot to make a composite sample. Samplings were conducted 2–3 times per year for the entire trial period. About 100 g of composite sample were air dried in the lab for six weeks and then ground and plated onto V. dahliae semi-selective medium using the Anderson Sampler dry sieve technique to count the number of viable microsclerotia of V. dahliae. NP-10 plates were incubated in the dark for three weeks, then examined for typical microsclerotia formation using a dissecting microscope. During year 5, topsoil and subsoil were sampled monthly in the same manner. After mixing well in a bucket, about 5 g of soil were taken from the composite sample and transferred on site into a pre-weighed screw-top plastic tube containing 25 ml of 2M KCl.
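As a worked illustration of the fruit-N estimate described above, fresh yield converts through moisture content to dry mass and then to N mass. The numbers below are made up for illustration; the measured values are reported elsewhere in the paper.

```r
# Sketch of the fruit-N estimate: fresh yield -> dry mass -> N mass.
fresh_yield_g <- 890     # cumulative fruit yield per plant (g fresh weight), illustrative
moisture_frac <- 0.90    # fresh moisture content of mature fruit, illustrative
n_conc_dry    <- 0.012   # total N in dry fruit (g N per g dry weight), illustrative

fruit_n_g <- fresh_yield_g * (1 - moisture_frac) * n_conc_dry
fruit_n_g  # ~1.1 g N removed per plant in harvested fruit
```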

Each tube was tightly sealed, kept on ice in an ice chest, and transferred to the lab. In the lab, samples were reweighed, shaken for one hour with a reciprocal shaker, filtered to obtain clear supernatant, and kept at −20 °C until analysis. NH4-N and NO3-N concentrations in the KCl extracts were determined by flow injection analysis, and the sum of NO3-N and NH4-N was expressed as inorganic N in the 0–30 cm soil layer. Separately, a ~100 g subsample was taken from a composite soil sample to determine gravimetric soil moisture content. The remainder of the soils sampled at the end of the strawberry harvest season in October 2006 were air dried, passed through a 2-mm sieve, and analyzed for pH and electrical conductivity. Ten-gram subsamples were further ground to pass through a 0.5-mm sieve and analyzed for total C and N content using the above-mentioned methods. Soil bulk density was also determined at the end of the harvest season in October 2006 using the soil core method for the 0–15 cm and 15–30 cm depths of the strawberry beds. Four cores were sampled per depth and bulked to form composite samples. During years 1 and 4, plants were regularly observed for disease symptoms by the grower and researchers and tested as needed. At the end of year 5, plants showing dieback, stunting, or collapsing symptoms were collected from all plots on October 5, 2006. For some of the Seascape plots, plants in general showed few signs of distress. Plants were taken to the lab and isolations made from roots and crowns. Plant material was first washed free of dirt and debris. Crowns and roots were then surface sanitized by soaking in a 0.1% bleach solution for 3 min and then thoroughly rinsing in sterile distilled water. Using aseptic technique, crowns and roots were dissected and symptomatic pieces of tissue were placed into separate Petri dishes containing acidified corn meal agar, PARP semiselective medium, or NP-10. Isolations were evaluated on October 16, 2006. Prior to analysis of variance (ANOVA), data were log transformed to fulfill ANOVA's assumptions when needed. A split-plot two-way ANOVA with rotation treatment and cultivar as variables was applied to test statistical differences among treatments. Repeated measures analysis was conducted to examine treatment effects on the V. dahliae population in soils. Tukey's HSD post-hoc test at P = 0.05 was used for mean separation.
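A minimal sketch of this kind of split-plot ANOVA and Tukey comparison in R, with synthetic data and hypothetical column names (rotation as the whole-plot factor, cultivar as the subplot factor); the study itself used Statistix, so this is an illustration of the analysis, not the authors' code.

```r
# Synthetic split-plot layout: 4 blocks x 5 rotation treatments x 2 cultivars
dat <- expand.grid(block    = factor(1:4),
                   rotation = factor(LETTERS[1:5]),
                   cultivar = factor(c("Aromas", "Seascape")))
set.seed(2)
dat$yield <- 700 + as.numeric(dat$rotation) * 40 + rnorm(nrow(dat), sd = 50)

# Split-plot ANOVA: rotation tested against the block-by-rotation error stratum
fit <- aov(yield ~ rotation * cultivar + Error(block / rotation), data = dat)
summary(fit)

# Tukey's HSD needs a plain aov fit; shown on a simplified one-factor model
TukeyHSD(aov(yield ~ rotation, data = dat), conf.level = 0.95)
```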

Regression analysis was conducted to examine relationships between marketable fruit yield of strawberries in year 5 and the length of the break period between strawberry crops. The statistical analysis package Statistix was used for these analyses. Cultivars Aromas and Seascape were planted in all plots in year 5. Strawberries in all plots grew well without any major pest problems or disease symptoms. The average plant mortality was 1.2%, a level similar to the first four years, and no significant difference in mortality was found between any treatments. In the early harvest stage, rotation treatment did not affect fruit yield, whereas, between cultivars, cv. Seascape had a higher yield compared to cv. Aromas. This was a common pattern in years 2 through 4. In the mid-harvest season, however, yield was highest in treatment E followed by D, C, B, and A. A significant difference was found between treatments E and A. In the late season, no significant differences were found between any treatments. Overall cumulative marketable yield in year 5 ranged from 650 to 890 grams per plant, a range comparable with the first four years. Seascape had a significantly higher yield than Aromas, which was the opposite of the trend for years 2 through 4; yields for Seascape exceeded those for Aromas in mid- to late season in the previous three years but not in year 5. Among rotation treatments, though a significant difference was found only between treatments A and E, the longer the break between strawberries, the greater the marketable fruit yield. Since the interaction was not significant, cumulative marketable fruit yields of both cultivars were pooled and relative yields to the average of the seven-year rotation yield for both cultivars calculated. Then the correlation between the relative yields and the length of the break periods was analyzed. The result showed a strong positive linear correlation between the two factors. Numerically, marketable yield in treatment A was approximately 20% lower than in treatment E. Total fruit yield showed a similar trend to marketable yield, ranging from 891 g per plant to 1093 g per plant. The loss of soil productivity when crops are grown repeatedly on the same land, resulting in poor plant growth and reduced yields, is called "yield decline", "soil sickness", or "replant problem". Such losses, called here "yield decline," have been reported in many crops worldwide including strawberries. Biotic and abiotic factors can cause yield decline. Further, although one factor may possibly be responsible for yield decline, it is more likely that a combination of factors interact to cause the effect. This study also demonstrated the challenges researchers face when using a participatory process where farmer involvement is a key part of the experiment. Farmers cannot wait until the end of a long-term experiment such as this before making adjustments to their farming practices. Changes occur as needed, and the experimental design needed to shift as well. This added to our difficulty in pointing to single factors causing yield decline.
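For the yield-versus-break-period regression described at the start of this passage, a minimal sketch in R with illustrative values that echo the reported ~20% spread between continuous (treatment A) and seven-year-break (treatment E) plots; these are not the study's data.

```r
# Relative marketable yield vs. years of break between strawberry crops
break_years    <- c(0, 1, 2, 3, 7)           # treatments A through E, illustrative
relative_yield <- c(0.80, 0.86, 0.90, 0.95, 1.00)

fit <- lm(relative_yield ~ break_years)
summary(fit)$r.squared                        # strength of the linear fit
cor(break_years, relative_yield)              # positive linear correlation
```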

The present study showed that the integrated practices of broccoli residue incorporation, compost application, mustard cover crop incorporation, use of a relatively resistant cultivar, and rotation with non-host crops allowed the one- to three-year rotations to have a statistically similar marketable fruit yield to the seven-year rotation in organic strawberries grown in this region of coastal California. Although the trial was conducted in a field with a low baseline V. dahliae population and results may be different for fields with greater disease pressure, it should be noted that the V. dahliae population in the soil was kept low during the five-year study and no V. dahliae or other major pathogens were detected from strawberry plants at the end of year 5. The low V. dahliae population in the soil during the trial may be due to broccoli residue incorporations, though the application rate of broccoli residues in this trial was lower than many farmers are using today. Nevertheless, a significant positive correlation between the break period between strawberries and the marketable fruit yield in year 5 existed, and marketable fruit yield in continuous strawberry plots was 20% lower than in seven-year break plots. Given these results, the cause of the yield reduction at the shorter rotations appears to be something other than major soil-borne pathogens. It is known that other minor sublethal pathogens such as species of Colletotrichum, Pythium, Rhizoctonia, and Cylindrocarpon can reduce fruit yield of strawberries in California, and these may have played a role in the yield reduction of the shorter rotation treatments in the trial. Further, Seigies and Pritts showed that three sowings of brown mustard cover crops enhanced successive strawberry growth compared to continuous strawberries despite relatively high levels of fungal infection on the strawberry roots. Use of mustard cover crops in the longer rotation treatments of the present study may have had a similar positive effect. Soil nutrient imbalance after continuous cropping can also cause yield decline. In the present study, fertility management changed from year to year in order to reduce N loss during the rainy winter and to accommodate the grower's preferences. To assure adequate nutrient status, we conducted tissue tests for strawberries and monitored soil inorganic N dynamics during the strawberries' growth period. Although strawberry leaf blade analysis in the mid- and late season of year 5 showed differences in Mg and NO3-N among treatments, none of these appeared to be critically deficient. Further, even though plant tissues in shorter rotations showed lower petiole NO3-N, it is difficult to interpret whether this was due to a lack of N supply, reduced N absorption by roots infested by non-lethal pathogens, or both. In fact, no significant difference was found between any treatments in soil inorganic N content during year 5, suggesting the lower total-N content in leaf blades may be attributed to poor root health in the shorter rotations rather than a lack of N supply.

These farmers generally found that this interview question missed the mark with regard to soil fertility.

These in-depth interviews allowed us to ask the same questions of each farmer so that comparisons between interviews could be made. In-person interviews were conducted in the winter, between December 2019 and February 2020; three interviews were conducted in December 2020. All interviews were recorded with permission from the farmer and lasted about 2 hours. To develop interview questions for the semi-structured interviews, we established initial topics and thematic sections first. We consulted with two organic farmers to develop the final interview questions. The final format of the semi-structured interviews was designed to encourage deep knowledge sharing. For example, the interview questions were structured such that questions revisited topics to allow interviewees to expand on and deepen their answers with each subsequent version of the question. Certain questions attempted to understand farmer perspectives from multiple angles and avoided scientific jargon or frameworks whenever possible. Most questions promoted open-ended responses to elicit the full range of possible responses from farmers. We used an open-ended, qualitative approach that relies on in-depth and in-person interviews to study farmer knowledge. In the semi-structured interview, farmers were asked a range of questions that included: their personal background with farming and the history of their farm operation, their general farm management approaches, as well as soil management approaches specific to soil health and soil fertility, such as key nutrients in their consideration of soil fertility, and their thoughts on soil tests. A brief in-person survey that asked several key demographic questions was administered at the end of the semi-structured interviews. Interviews were transcribed, reviewed for accuracy, and uploaded to NVivo 12, a software tool used to categorize and organize themes systematically based on research questions.

Through structured analysis of the interview transcripts, key themes were identified and then a codebook was constructed to systematically categorize data related to soil health and soil fertility. We summarize these results in table form. To unpack differences between Fields A and Fields B across all farms, we applied a multi-step approach. We first conducted a preliminary, global comparison between Fields A and Fields B across all farms using a one-way analysis of variance to determine if Fields A were significantly different from Fields B for each indicator for soil fertility. Then, to develop a basis for further comparison of Fields A and Fields B, we considered potential links between management and soil fertility. To do so, we developed a gradient among the farms using a range of soil management practices detailed during the initial farm visit. These soil management practices were based on interview data from the initial farm visit, and were also emphasized by farmers as key practices linked to soil fertility. The practices used to inform the gradient included cover crop application, amount of tillage, crop rotation patterns, crop diversity, the use of integrated crop and livestock systems (ICLS), and the amount of N-based fertilizer application. Cover crop frequency was determined using the average number of cover crop plantings per year, calculated as cover crop planting counts over the course of two growing years for each field site. Tillage encompassed the number of tillage passes a farmer performed per field site per season. To quantify crop rotation, a rotational complexity index was calculated for each site using the formula outlined by Socolar et al. To calculate crop diversity, we focused on crop abundance: the total number of crops grown per year at the whole-farm level divided by the total acreage farmed.

To determine ICLS, an index was created based on the number and type of animals utilized. Lastly, we calculated the amount of additional N-based fertilizer applied to each field. In order to group, visualize, and further explore links with indicators for soil fertility, all soil management variables were standardized and then used in a principal components analysis (PCA) using the factoextra package in R. In short, these independent management variables were combined into a composite management gradient. Principal components with eigenvalues greater than 1.0 were retained. To establish the gradient in management, we plotted all 13 farms using the first two principal components, and ordered the farms based on spatial relationships that arose from this visualization using nearest neighbor analysis. To further explore links between management and soil fertility, we used the results from the PCA to formalize a gradient in management across all farms, and then used this gradient as the basis for comparison between Field A and Field B across all indicators for soil fertility. Using the ggplot and tidyverse packages, we displayed the difference in values between Field A and Field B for each indicator for soil fertility sampled at each farm using bar plots. We also included error bars to show the range of uncertainty in these indicators for soil fertility. Lastly, we further compared Field A and Field B for each farm using radar plots. To generate the radar plots, we first scaled each soil indicator from 0 to 1. Using Jenks natural breaks optimization, we then grouped each farm based on low, medium, and high N-based fertilizer application, as this soil management metric had the strongest coefficient loading on the first principal component. Using the fmsb package in R, we used an averaging approach for each level of N-based fertilizer application to create three radar plots that each compared Field A and Field B across the eight indicators for soil fertility. Farmer responses describing key aspects of soil health were relatively similar and overlapped considerably in content and language. Specifically, farmers usually emphasized the importance of maintaining soil life and/or soil biology, promoting diversity, limiting soil compaction and minimizing disturbance to soil, and maintaining good soil structure and moisture.
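A minimal sketch of the management-gradient PCA described above, using base R's prcomp() in place of the factoextra helpers; the column names and random values are hypothetical stand-ins for the six management variables.

```r
# Hypothetical management variables for 13 farms
set.seed(7)
mgmt <- data.frame(cover_crop     = runif(13),   # plantings per year
                   tillage        = runif(13),   # passes per season
                   rotation_rci   = runif(13),   # rotational complexity index
                   crop_diversity = runif(13),   # crops per acre
                   icls           = runif(13),   # livestock-integration index
                   n_fert         = runif(13))   # N fertilizer applied

pca <- prcomp(mgmt, center = TRUE, scale. = TRUE)  # standardize, then PCA
eigenvalues <- pca$sdev^2
keep <- which(eigenvalues > 1)                     # retain PCs with eigenvalue > 1
scores <- pca$x[, keep, drop = FALSE]              # farm scores on retained PCs
biplot(pca)                                        # farms on the first two PCs
```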

Several farmers also touched on the importance of using crops as indicators for monitoring soil health and the importance of limiting pests and disease. Discussion of the importance of promoting soil life, soil biology, and microbial and fungal activity had the highest count among farmers, with ten mentions across the 13 farmers interviewed. Next to this topic, minimizing tillage and soil disturbance was the second most discussed, with six of 13 farmers highlighting this key aspect of soil health. The importance of crop health as an indicator for soil health also surfaced for five out of 13 farmers. In addition to discussing soil health more broadly, farmers also provided in-depth responses to a series of questions related to soil fertility—such as key nutrients of interest on their farm, details about their fertility program, and the usefulness of soil tests in their farm operation—summarized in Table 2. When asked to elaborate on the extent to which they considered key nutrients, a handful of farmers readily listed several nutrients, including nitrogen, phosphorus, potassium, and other general macronutrients as well as one micronutrient. Among these farmers that responded with a list of key nutrients, some talked about having their nutrients "lined up" as part of their fertility program. This approach involved keeping nutrients "in balance," such as, for example, monitoring pH to ensure magnesium levels did not impact calcium availability to plants. These farmers also emphasized that though nitrogen represented a key nutrient and was important to consider in their farm operation, tracking soil nitrogen levels was less important than other aspects of soil management, such as promoting soil biological processes, maintaining adequate soil moisture and aeration, or planting cover crops regularly. As one farmer put it, "if you add nutrients to the soil, and the biology is not right, the plants will not be able to absorb it." Or, as another farmer emphasized, "It's not about adding more [nitrogen]… I try to cover crop more too." A third farmer emphasized that "I don't use any fertilizers because I honestly don't believe in adding retroactively to fix a plant from the top down." This same farmer relied on planting a cover crop once per year in each field, and discing that cover crop into the ground to ensure his crops were provided with adequate nitrogen for the following two seasons. While most farmers readily listed key nutrients, several farmers shifted the conversation away from focusing on nutrients. One farmer responded, "I'm not really a nutrient guy." This same farmer added that he "considered [soil fertility] a soil biology issue as much as a chemistry issue." The general sentiment among these farmers was that soil fertility was not about measuring and "lining up" nutrients, but about taking a more holistic approach.

This approach focused on facilitating conditions in the soil and on-farm that promoted a soil-plant-microbe environment ideal for crop health and vigor. For example, the same farmer quoted above mentioned the importance of establishing and maintaining crop root systems, emphasizing that "if the root systems of a crop are not well established, that's not something I can overcome just by dumping more nitrogen on the plants." Another farmer similarly emphasized that they simply created the conditions for plants to "thrive," and "have pretty much just stepped back and let our system do what it does; specifically, we feed our chickens whey-soaked wheat berries and then we rotate our chickens on the field prior to planting. And we cover crop." A third farmer also maintained that their base fertility program—a combination of planting a cover crop two seasons per year, an ICLS chicken rotation program, minimal liquid N-based fertilizer addition, and occasional compost application—all worked together to "synergize with biology in the soil." This synergy in the soil created by management practices—rather than a focus on nutrient levels—guided this farmer's approach to building and assessing soil fertility on-farm. Another farmer called this approach "place-based" farming. This particular farmer elaborated on the concept, saying "I think the best style of farming is one where you come up with a routine [meaning like a fertility program] that uses resources you have: cover crops, waste materials beneficial to crops, animals" in order to build organic matter, which "seems to buffer some of the problems" that this farmer encountered on their farm. Similar to other farmers, this farmer asserted that adding more nitrogen-based fertilizer did not lead to better soil fertility or increased yields, in their direct experience. Regardless of whether farmers listed key nutrients, a majority of farmers voiced that nitrogen was not a big concern for them on their farm. This sentiment was shared among most farmers in part because they felt the amount of nitrogen they added as fertilizer was insignificant compared to nitrogen additions by conventional farms. Farmers also emphasized that the amount of nitrogen they were adding was not enough to cause environmental harm; relatedly, a few farmers noted the absurdity and added economic burden of the recent nitrogen management plan requirements—specifically for organic farms with very low N-based fertilizer application. The majority of farmers also expressed that their use of cover crops and the small amount of N-based fertilizer additions as part of their soil fertility program ensured on-farm nitrogen demands were met for their crops. Across all farmers interviewed, cover cropping served as the baseline and heart of each fertility program, and was considered more effective than additional N-based fertilizers at maintaining and building soil fertility. Farmers used a range of cover crop species and often applied a mix of cover crops, including vetches and other legumes like red clover and cowpea, and grains and cereals like oats. Farmers cited several reasons for the effectiveness of cover cropping, such as increased organic matter content, more established root systems, greater microbial activity, better aeration and crumble in their soils, greater numbers of earthworms and arthropods, improved drainage in their soils, and more bio-available N.
Whereas farmers agreed that "more is not better" with regard to N-based fertilizers, farmers did agree that allocating more fields to cover crops over the course of the year was beneficial in terms of soil fertility. However, as one farmer pointed out, while cover crops provide the best basis for an effective soil fertility program, this approach is not always economically viable or physically possible. Several farmers expressed concern because they often must allocate more fields to cover crops than cash crops in any given season, which means that their farm operation requires more land to produce the same amount of vegetables than if all their fields were in cash crops.

A Euclidean-based dendrogram analysis was then used to further validate the results of the cluster analysis.

Using the farm typologies identified, we examined the extent to which soil texture and/or soil management practices influenced these measured soil indicators across all working organic farms, using Linear Discriminant Analysis and Variation Partitioning Analysis. We then determined the extent to which gross N cycling rates and other soil N indicators differed across these farm types. Lastly, we developed a linear mixed model to understand the key factors most useful for predicting potential gross N cycling rates along a continuous gradient, incorporating soil indicators, on-farm management practices, and soil texture data (a sketch of such a model follows this passage). Our study highlights the usefulness of soil indicators for understanding the plant-soil-microbe dynamics that underpin crop N availability on working organic farms. While we found measurable differences among farms based on soil organic matter, strongly influenced by soil texture and management, these differences did not translate to the N cycling indicators measured here. Though N cycling is strongly linked to soil organic matter, indicators for soil organic matter are not strong predictors of N cycling rates. During the initial field visits in June 2019, two field sites were selected in collaboration with farmers on each participating farm; these sites represented fields in which farmers planned to grow summer vegetables. Therefore, only fields with all summer vegetable row crops were selected for sampling. At this time, farmers also discussed management practices applied for each field site, including information about crop history and rotations, bed prepping if applicable, tillage, organic fertilizer input, and irrigation. Because of the uniformity of long-term management at the field station, only one treatment was selected in collaboration with the Cropping Systems Manager—a tomato field in the organic corn-tomato-cover crop system.
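A minimal sketch of this kind of linear mixed model in R, using lme4 with a random farm intercept. The predictor names and synthetic data are hypothetical stand-ins; the paper does not specify its exact model formula here.

```r
library(lme4)

# Synthetic data: 13 farms x 2 field sites, hypothetical predictors
set.seed(3)
soils <- data.frame(farm       = factor(rep(1:13, each = 2)),
                    poxc       = rnorm(26, 500, 80),  # active carbon (mg/kg)
                    protein    = rnorm(26, 5, 1),     # ACE soil protein
                    clay       = rnorm(26, 20, 5),    # soil texture (% clay)
                    mgmt_score = rnorm(26))           # composite management score
soils$gross_min <- 2 + 0.002 * soils$poxc + rnorm(26, sd = 0.5)

# Gross N mineralization predicted from soil indicators, management, and
# texture, with farm as a random intercept to absorb between-farm structure
fit <- lmer(gross_min ~ poxc + protein + clay + mgmt_score + (1 | farm),
            data = soils)
summary(fit)
```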

Since the farms involved in this study generally grew a wide range of vegetable crops, we designed soil sampling to have a greater inference space than a single crop, even at the expense of adding variability. Sampling was therefore designed to capture indicators of nitrogen cycling rates and nitrogen pools in the bulk soil at a single time point. Fields were sampled mid-season near peak vegetative growth, when crop nitrogen demand is highest. Using the planting date and anticipated harvest date for each crop, peak vegetative growth was estimated and used to determine the timing of sampling. We collected bulk soil samples that we did not expect to be strongly influenced by the particular crop present. This sampling approach provided a snapshot of on-farm nitrogen cycling. Field sampling occurred over the course of four weeks in July 2019. To sample each site, a random 10 m by 20 m transect area was placed on the field site across three rows of the same crop, away from field edges. Within the transect area, three composite samples, each based on 5 sub-samples, were collected approximately 30 cm from a plant at a depth of 20 cm using an auger. Subsamples were composited on site and mixed thoroughly by hand for 5 minutes before being placed on ice and immediately transported back to the laboratory. To determine bulk density (BD), we hammered a steel bulk density core sampler approximately 30 cm from a plant to a depth of 20 cm below the soil surface and recorded the dry weight of this volume to calculate BD; we sampled three replicates per site and averaged these values to calculate final BD measurements for each site. Soil samples were preserved on ice until processed within several hours of field extraction. Each sample was sieved to 4 mm and then either air dried, extracted with 0.5 M K2SO4, or used to measure net and gross N mineralization and nitrification. Air-dried samples were measured for gravimetric water content and BD.
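For orientation, the bulk-density calculation reduces to oven-dry soil mass divided by core volume, averaged over the replicate cores. A small sketch with illustrative core dimensions (the study does not report its sampler's dimensions):

```r
# Bulk density = oven-dry mass / core volume, averaged over three replicates
core_radius_cm  <- 2.5                        # hypothetical core radius
core_height_cm  <- 10                         # hypothetical core height
core_volume_cm3 <- pi * core_radius_cm^2 * core_height_cm

dry_mass_g <- c(245, 251, 238)                # illustrative replicate dry weights
bd <- mean(dry_mass_g / core_volume_cm3)      # g per cm^3
bd
```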

Gravimetric water content was determined by drying fresh soil samples at 105 °C for 48 hrs. Moist soils were immediately extracted and analyzed colorimetrically for NH4+ and NO3− concentrations using modified methods from Miranda et al. and Forster. Additional volumes of the extracted samples were subsequently frozen for future laboratory analyses. To determine soil textural class, air-dried samples were sieved to 2 mm and subsequently prepared for analysis using the "micropipette" method. Water holding capacity (WHC) was determined using the funnel method, adapted from Geisseler et al., where a jumbo cotton ball thoroughly wetted with deionized water was placed inside the base of a funnel with 100 g soil on top. Deionized water was added and allowed to imbibe into the soil until no water dripped from the funnel. The soil was allowed to drain overnight. A subsample of this soil was then weighed and dried for 48 hours at 105 °C. The difference between the drained and oven-dried weights of the subsample defined 100% WHC. Air-dried samples were sieved to 2 mm, ground, and then analyzed for total soil N and total organic C using an elemental analyzer at the Ohio State Soil Fertility Lab; additional soil data including pH and soil protein were also measured at this lab. Soil protein was determined using the autoclaved citrate extractable soil protein method outlined by Hurisso et al. Additional air-dried samples were sieved to 2 mm, ground, and then analyzed for POXC using the active carbon method described by Weil et al., with modifications as described by Culman et al. In brief, 2.5 g of air-dried soil was placed in a 50 mL centrifuge tube with 20 mL of 0.02 mol/L KMnO4 solution, shaken on a reciprocal shaker for exactly 2 minutes, and then allowed to settle for 10 minutes. A 0.5-mL aliquot of supernatant was added to a second centrifuge tube containing 49.5 mL of water for a 1:100 dilution and analyzed at 550 nm.
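The POXC concentration is then typically computed from the loss of permanganate. The commonly used form of the Weil et al. calculation is shown below for orientation (given here as the standard formula, not necessarily the authors' exact implementation): $a$ and $b$ are the intercept and slope of the KMnO4 standard curve, 9000 mg is the C oxidized per mole of MnO4− reduced, 0.02 L is the reaction volume, and 0.0025 kg the soil mass.

$$
\mathrm{POXC}\;(\mathrm{mg\,kg^{-1}}) = \left[\,0.02\;\mathrm{mol\,L^{-1}} - \left(a + b\,\mathrm{Abs}_{550}\right)\right] \times 9000\;\mathrm{mg\,C\,mol^{-1}} \times \frac{0.02\;\mathrm{L}}{0.0025\;\mathrm{kg}}
$$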

The amount of POXC was determined by the loss of permanganate due to C oxidation. To measure gross N mineralization and nitrification in soil samples, we applied an isotope pool dilution (IPD) approach, adapted from Braun et al. This method is based on three underlying assumptions listed by Kirkham & Bartholomew: 1) microorganisms in soil do not discriminate between 15N and 14N; 2) rates of the processes measured remain constant over the incubation period; and 3) 15N assimilated during the incubation period is not remineralized. To prepare soil samples for IPD, we adjusted soils to approximately 40% WHC with deionized water prior to incubation. Next, four sets of 40 g of fresh soil per subsample were weighed into specimen cups and covered with parafilm. Based on the initial NH4+ and NO3− concentrations determined above, a maximum of 20% of the initial NH4+ and NO3− concentrations was added as either 15N-NH4+ or 15N-NO3− tracer solution at 10 atom%; the tracer solution also raised each subsample's soil water content to 60% WHC. This approach increased the pool sizes as little as possible while also ensuring sufficient enrichment of the NH4+ and NO3− pools with 15N-NH4+ and 15N-NO3−, respectively, to facilitate high measurement precision. Due to significant variability in the initial NH4+ and NO3− pool sizes in each soil sample, differing amounts of tracer solution were added to each sample set evenly across the soil surface. To begin the incubation, each of the four subsamples received the tracer solution via evenly distributed circular drops from a micropipette. The specimen cups were placed in a dark incubation chamber at 20 °C. After four hours, two subsample incubations were stopped by extraction with 0.5 M K2SO4 as above for initial NH4+ and NO3− concentrations. Filters were pre-rinsed with 0.5 M K2SO4 and deionized water and dried in a drying oven at 60 °C to avoid variable NH4+ contamination from the filter paper. Soil extracts were frozen at −20 °C until further isotopic analysis. Similarly, after 24 hrs, two subsample incubations were stopped by extraction as previously detailed, and subsequently frozen at −20 °C. At a later date, filtered extracts were defrosted, homogenized, and analyzed for the isotopic composition of NH4+ and NO3− in order to calculate gross production and consumption rates for N mineralization and nitrification. We prepared extracts for isotope ratio mass spectrometry using a microdiffusion approach based on Lachouani et al. Briefly, to determine NH4+ pools, 10 mL aliquots of samples were diffused with 100 mg magnesium oxide into Teflon-coated acid traps for 48 hours on an orbital shaker. The traps were subsequently dried, spiked with 20 μg NH4+-N at natural abundance to achieve optimal detection, and subjected to EA-IRMS for 15N:14N analysis of NH4+. Similarly, to determine NO3− pools, 10 mL aliquots of samples were diffused with 100 mg magnesium oxide into Teflon-coated acid traps for 48 hours on an orbital shaker. After 48 hours, the acid traps were removed and discarded, and then each sample was diffused again with 50 mg Devarda's alloy into a Teflon-coated acid trap for 48 hours on an orbital shaker. These traps were dried and subjected to EA-IRMS for 15N:14N analysis of NO3−.
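For orientation, the Kirkham & Bartholomew pool-dilution equations referenced above are commonly written as follows (shown in their standard form, not necessarily the authors' exact implementation), where $M_0$ and $M_t$ are the NH4+ (or NO3−) pool sizes at the start and end of the incubation, $H_0$ and $H_t$ the corresponding 15N excess pools, and $t$ the incubation time:

$$
m = \frac{M_0 - M_t}{t} \cdot \frac{\ln\!\left(\dfrac{H_0\,M_t}{H_t\,M_0}\right)}{\ln\!\left(\dfrac{M_0}{M_t}\right)}, \qquad c = m + \frac{M_0 - M_t}{t}
$$

with $m$ the gross production rate (mineralization or nitrification) and $c$ the gross consumption rate of the labeled pool.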

Twelve dried samples with very low N content were spiked with 20 μg NH4+-N at natural abundance to achieve optimal detection. In order to identify farm typologies based on indicators for soil organic matter levels, we first used several clustering algorithms. First, a k-means cluster analysis based on four key soil indicators—soil organic matter, total soil nitrogen, and available nitrogen—was used to generate three clusters of farm groups using the factoextra and cluster packages in R. The cluster analysis results were divisive, nonhierarchical, and based on Euclidean distance, which calculates the straight-line distance between the soil indicator combinations of every farm site in Cartesian space, creating a matrix of these distances. To determine the appropriate number of clusters for the cluster analysis, a scree plot was used to signal the point at which the total within-cluster sum of squares decreased as a function of increasing cluster size. The location of the kink in the curve of this scree plot delineated the optimal number of clusters, in this case three. To further explore the appropriate cluster size, we used a histogram to determine the structure and spread of data among clusters. In addition to confirming the results of the cluster analysis, the dendrogram plot showed relationships between sites and relatedness across all sites. To visualize the cluster analysis results, the final three clusters were plotted based on the axes produced by the cluster analysis. One drawback of cluster analyses is that there is no measure of whether the groups identified are the most effective combination to explain the clusters produced by the soil indicators, or whether they are statistically different from one another. To address this gap, we used analysis of similarities (ANOSIM) to evaluate and compare the differences between the clusters identified with the cluster analysis above. We calculated the global similarity in addition to pairwise tests of each cluster. To formally establish the three farm types and also make the functional link between organic matter and management explicit, we used the three clusters that emerged from the k-means cluster analysis based on soil organic matter indicators, and explored differences in management approaches among the clusters. We then created three farm types based on this exploratory analysis. Specifically, we first analyzed management practices among sites within each cluster to determine if similarities in management approaches emerged for each cluster. Based on this analysis, we used the three clusters from the cluster analysis to create three farm types categorized by soil organic matter levels and informed by the management practices applied. Using the three farm types from above, we then analyzed whether our classification created strong differences along soil texture and management gradients using a linear discriminant analysis (LDA). LDA is most frequently used as a pattern recognition technique; because LDA is a supervised classification, class membership must be known prior to analysis. The analysis tests the within-group covariance matrix of standardized variables and generates a probability of each farm site being categorized in the most appropriate group based on these variable matrices. To characterize soil texture, we used soil texture class. To characterize soil management, we used crop abundance, tillage frequency, and crop rotational complexity—the three management variables with the strongest gradient of difference among the three farm types.
A confusion matrix was first applied to determine if farm sites were correctly categorized among the three clusters created by the cluster analysis. Additional indicator statistics were also generated to confirm whether the LDA was sensitive to the input variables provided. A plot with axis loadings is provided to visualize the results of the LDA and display differences across farm groups. The LDA was carried out using the MASS R package.
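A minimal sketch of this clustering-then-LDA workflow in base R plus MASS, with synthetic data and hypothetical column names standing in for the soil and management indicators.

```r
library(MASS)

# Synthetic soil indicators for 27 field sites (hypothetical stand-ins)
set.seed(4)
d <- data.frame(som     = rnorm(27, 3, 1),      # soil organic matter (%)
                total_n = rnorm(27, 0.2, 0.05), # total soil N (%)
                nh4     = rnorm(27, 5, 2),      # available NH4-N
                no3     = rnorm(27, 15, 5))     # available NO3-N
z <- scale(d)                                   # standardize before clustering

# Scree plot: within-cluster sum of squares as a function of k
wss <- sapply(1:8, function(k) kmeans(z, centers = k, nstart = 25)$tot.withinss)
plot(1:8, wss, type = "b", xlab = "k", ylab = "Total within-cluster SS")

km <- kmeans(z, centers = 3, nstart = 25)       # final 3-cluster solution

# Supervised check: can management variables recover the clusters?
d$cluster    <- factor(km$cluster)
d$tillage    <- rnorm(27)                       # hypothetical management variables
d$crop_abund <- rnorm(27)
d$rci        <- rnorm(27)
fit <- lda(cluster ~ tillage + crop_abund + rci, data = d)
table(predicted = predict(fit)$class, actual = d$cluster)  # confusion matrix
```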

Knowledge of the mechanism that underlies the increase in abuse is important for theory and policy.

We argue that backlash models reinterpreted in an economic framework do not necessarily "ignore the individual rationality constraints faced by women", but rather take seriously an additional motive on the part of men – that of restoring a self-image of dominance in the household to which they may feel entitled, for example due to cultural norms. A similar theory, in an instrumental framework, would be that men use violence to attempt to address unwanted female behavior associated with employment. The paper is organized as follows. Section 2 describes the rural Ethiopian context and the experiment. In section 3 the main treatment effects are presented and analyzed in light of existing domestic violence models. Ethiopia has some of the highest poverty, illiteracy and underemployment rates in Africa, especially for women. Domestic violence is unusually prevalent; for example, 54 percent of women in a provincial site surveyed by the WHO report having been victimized by a partner during the last year. At least until recently, a role for domestic violence was accepted in Ethiopian culture – even by many women. In a nationally representative survey conducted in 2005, 81 percent of Ethiopian women found it justified for a husband to beat his wife if the wife had violated norms. In recent years it has become more common for Ethiopian women to hold formal jobs. In rural areas an important contributing factor has been the explosive rise of the floriculture sector, which mostly employs women. In 2008, 81 flower farms in Ethiopia employed around 50,000 workers. Hiring on Ethiopian flower farms typically takes place in October and November, before the main growing and harvesting season.

The supervisors on five flower farms agreed to randomize job offers during the fall 2008 hiring season because of an unusual situation in the labor market for flower farm workers. At the time, applicants almost always outnumbered the positions to be filled by large margins. Ethiopian flower farms – still getting to grips with cost components significantly larger than labor, and with little ability to predict the productivity of the mostly uneducated, illiterate and inexperienced applicants – did not prioritize optimization of the unskilled workforce. Because supervisors were already allocating job offers relatively arbitrarily when approached by the researchers, explicit randomization was a modest procedural change. The five farms are located in rural areas two and a half to five hours from Addis Ababa and employ local workers who live in small towns near the farms. On hiring days, supervisors first excluded any unacceptable applicants. A team of enumerators then carried out the baseline survey with the remaining applicants. Finally, the names of the number of workers to be hired were drawn randomly from a hat. The sample thus consists of 339 households in which a woman applied to a flower farm job and was deemed acceptable for hiring; we focus on the 329 households in which the applicant was married or living with a steady partner. We attempted to re-interview everyone in the treatment and control groups 5–7 months after employment commenced. Careful tracking procedures led to a re-interview rate of 88 percent and no statistically significant differential attrition. Summary statistics are displayed in table 1. There are no statistically significant differences between the characteristics of the treatment and control groups. Literacy rates are low. Almost all the applicants are parents. Income and wealth indicators, such as the material that the applicant's floor is made of, indicate the severe poverty of the sample. Flower farm employment typically entails six days of full-time work a week, totaling on average 202 hours per month. The alternative for the women in our sample was typically domestic work, and perhaps a few hours of informal paid work per week.

The applicants randomly chosen for employment spent 102 more hours per month working. The income of treated women increased by 154 percent on average, which translates into a 28 percent increase in total household income. The estimated treatment effects are in table 3. The probability of experiencing physical violence increases by 8 percentage points, or 13 percent, when a woman gets employed in rural Ethiopia. There is also a 19 percentage point, or 34 percent, increase in emotional abuse. Finally, the intensive margin of violence is affected: the number of violent incidents experienced per month goes up by 0.31, or 32 percent, following employment. An alternative interpretation of these results is that employment affects women's willingness to report violence to an enumerator rather than, or in addition to, violence itself. While we cannot rule out a reporting effect, greater willingness to report violence after employment is unlikely to represent the primary explanation of our findings. Specific, detailed survey questions were used. As noted above, the majority of both men and women in Ethiopia find domestic violence justifiable in some situations, and 63 percent of women in our sample were comfortable reporting abuse at baseline. The prediction that physical abuse will decrease when women are "empowered" by employment is central to the most-cited domestic violence models. The estimates in table 3 represent strong evidence against such models in the context of rural Ethiopia. In the next two sub-sections we categorize pessimistic models on the basis of the hypothesized male motivation for abuse, and explore the ability of different categories of pessimistic models to explain our findings. This paper's primary result is that domestic violence increases significantly when women get employed in rural Ethiopia.
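As a back-of-the-envelope check on the magnitudes above (our arithmetic, not figures reported in the paper), each paired absolute and relative effect implies an approximate control-group baseline:

$$
\frac{0.08}{0.13} \approx 0.62 \;\text{(physical violence)}, \qquad \frac{0.19}{0.34} \approx 0.56 \;\text{(emotional abuse)}, \qquad \frac{0.31}{0.32} \approx 0.97 \;\text{(incidents per month)}
$$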

It appears that there are two categories of models that may be able to explain our results: expressive models in which a husband's marginal utility from violence is increasing in the economic standing of his wife, and instrumental models in which violence is used to achieve male goals other than control over household resources. We consider these two possibilities in turn. Aizer is an example of an influential class of expressive domestic violence models in which men derive utility directly from violence. Women with better options outside of marriage should be willing to accept less violence at a given "price": employment is predicted to shift a woman's violence "supply curve" up and thus decrease violence. Consider, however, that a husband's violence "demand curve" may also shift up when his wife gets employed, if the husband's marginal utility from violence is increasing in the wife's relative or absolute economic standing. The net outcome may be that the couple's contract curve – the set of feasible bargaining solutions – shifts up, and that violence itself therefore increases. Why would the marginal utility that men in Ethiopia derive from violence go up when women get employed? Suppose that there are emotional costs to men of perceived violations of traditional gender roles. In that case "violence may be a means of reinstating [a husband's] authority over his wife". If improvements in women's economic standing carry emotional costs to men, events that symbolize the perceived challenge to traditional gender roles can plausibly lead to violence. In columns two and four of table 4 we interact the treatment indicator with the wife's ex ante income as a share of the combined income of the husband and wife. The results show that the impact of employment on violence is bigger in households in which the newly employed wife is likely to end up further ahead of her husband in income because her share of baseline income was high relative to that of other women in the sample. The increase in the probability of violence when a wife gets employed is seven percentage points higher for every one standard deviation increase in the wife's share of baseline income, almost as much as the average effect. There is also a small but marginally significant increase in male labor supply when women get employed in rural Ethiopia. Though alternative explanations are possible, these results are consistent with a plausible story in which improvements in the relative economic standing of women carry emotional costs to men; costs that some men choose to act upon through violence. A similar possibility is that violence serves an instrumental purpose, but is used not to gain control over household resources but instead to influence the behavior of wives. Husbands may see some dimensions of female behavior associated with employment as undesirable and potentially "correctable" through violence. The arguably most plausible "real" cost to husbands of female employment is that employed wives devote less time to housework. In our sample, however, most of the housework of women randomly chosen for employment is taken over by daughters. This suggests that costs to husbands of a reallocation of women's time may, if anything, be due to the overturning of traditional responsibilities in the household, rather than housework being left undone.
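Returning to the table 4 specification above, a minimal sketch of the interaction regression in R with synthetic data and hypothetical variable names; the paper's actual estimation may include controls and a different estimator.

```r
# Linear probability model: violence on treatment, the wife's standardized
# baseline income share, and their interaction (synthetic data)
set.seed(5)
n <- 329
df <- data.frame(treat      = rbinom(n, 1, 0.5),
                 base_share = as.numeric(scale(runif(n))))
df$violence <- rbinom(n, 1, plogis(-0.5 + 0.3 * df$treat +
                                     0.3 * df$treat * df$base_share))

fit <- lm(violence ~ treat * base_share, data = df)
summary(fit)  # the treat:base_share coefficient captures the differential effect
```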

In sum, the evidence presented here suggests that emotional costs associated with violations of traditional gender roles belong in theories of domestic violence in gender-unequal societies. If so, identity models, in which disutility is associated with a self-image that deviates from the individual’s view of his or her “appropriate” role in the household, are a natural starting point. In the appendix we present an example of a framework in which a husband’s incentive to engage in violence depends on his wife’s economic standing relative to his own – as does, in turn, the wife’s response to violence. The framework allows a male “backlash” when women get employed and predicts well how domestic violence responds to female employment in Ethiopia.

This paper has analyzed the impact of female employment on domestic violence through a field experiment in which women’s long-term job offers on Ethiopian flower farms were randomized. We estimate a significant 13 percent increase in physical violence when women get employed, as well as large increases in emotional abuse and in the intensity of physical violence. These results put into question the relevance of conventional economic models of domestic violence in male-dominated developing countries. Like much existing anti-violence policy, conventional models are “optimistic” in the sense of considering labor force participation a promising route to empowering women and reducing the prevalence of domestic violence. Most “pessimistic” models argue that physical abuse can increase when employment enhances wives’ incomes and bargaining power because husbands use violence as a tool to gain access to and control over household resources. But we find no significant relationship between violence and control over household resources – neither in levels nor in changes when women get employed – so the increase in violence does not appear to reflect husbands using violence to counteract female bargaining power.

Rather than a male quest for control over household resources, it appears that the models that best explain our results would allow men to care about roles in the household deviating from the roles prescribed by traditional norms, and violence being seen as a way to restore a preferred order. We find that the increase in the probability of violence following female employment is greater in households in which the newly employed woman is likely to end up further ahead of her husband in income. The costs to a husband of lost economic dominance are presumably primarily emotional, suggesting that the benefits of turning to violence in response may also be emotional. It may be that men derive “expressive” utility from violence: while a woman’s “violence supply curve” likely shifts up when her outside option improves, her husband’s “violence demand curve” also shifts up because his marginal utility from violence depends on his wife’s relative economic standing. A similar “instrumental violence” interpretation would be that men abuse their wives not to achieve financial control but rather, for example, to influence their wives’ behavior in the household.

We conclude, first, that conventional optimistic economic models of domestic violence are unlikely to accurately describe the situation in most households in male-dominated developing countries such as Ethiopia; and, second, that not all men will passively accept challenges to their economic dominance, so successful models of domestic violence will likely need to account for the male reaction to female economic progress.
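To make the identity mechanism referenced above concrete, consider a purely illustrative formalization (a stylized sketch of this class of models, not the framework presented in the appendix). The husband’s payoff from violence $V$ might be written

$$U_h(V) = u(c_h) - \kappa(V) - \lambda \, \mathbb{1}\!\left\{ \frac{y_w}{y_w + y_h} > \bar{s} \right\} \bigl(1 - g(V)\bigr),$$

where $\kappa(V)$ is the direct cost of violence, $\lambda > 0$ is the identity cost incurred when the wife’s income share exceeds the traditional threshold $\bar{s}$, and $g(V) \in [0,1]$ captures the extent to which violence is perceived to restore the traditional order. Employment that pushes $y_w/(y_w + y_h)$ above $\bar{s}$ raises the marginal return to violence from zero to $\lambda g'(V)$, so violence increases precisely for wives who move furthest ahead of their husbands economically.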
Finally, it is worth emphasizing that the increase in domestic violence we observe when women get employed does not mean that women are not empowered by employment. For example, it may be that some women previously acquiesced in the face of demands from their husbands but choose not to when emboldened by employment.

The effect of conflict on discriminatory workplace behavior does not decay in the nine months after conflict ended

So far we have seen that output in factory production in Kenya is lower when individuals of different ethnic backgrounds work together, and that the reason appears to be that biased upstream workers undersupply downstream workers of other ethnic groups and misallocate intermediate goods across coethnic and non-coethnic downstream workers. We have also seen that distortionary workplace discrimination is greater during times of conflict, and that firms introduce policies in response in order to reduce workers’ incentive to discriminate. By studying how discriminatory preferences are shaped, and how firms choose their response to distortionary discrimination, researchers can go beyond identifying a source of ethnic diversity effects in production and begin to address why those effects vary across space and time and how profit motives in the private sector can reduce the aggregate effect of ethnic diversity.

In the model of taste-based discrimination above, the impact of conflict on output in diverse teams should persist for as long as attitudes towards workers of other ethnic groups are affected. Periods of increased antagonism may entail significant hidden economic costs if “mean reversion” in taste for discrimination is slow. The evolution of output in teams of different ethnicity configurations across the three sample periods was depicted in figure 2. After the introduction of team pay, average output in both homogeneous and mixed teams was steady for the remainder of the sample period, suggesting that the impact of conflict on social preferences was long-lived.
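The distortion at the heart of that model can be illustrated numerically with a stylized sketch (the functional forms and parameter values below are assumptions of the sketch, not the paper’s calibration):

# A supplier splits a fixed flow of flowers between a coethnic and a
# non-coethnic processor. Everyone earns a piece rate on processed
# output, and the supplier also values downstream pay with weights
# theta_c and theta_nc. All numbers are illustrative.
import numpy as np

def processed(f):
    return np.sqrt(f)  # concave processing technology (assumed)

def supplier_utility(f_c, theta_c, theta_nc, total=100.0, rate=1.0):
    f_nc = total - f_c
    pay_c = rate * processed(f_c)     # coethnic processor's pay
    pay_nc = rate * processed(f_nc)   # non-coethnic processor's pay
    own_pay = 0.5 * (pay_c + pay_nc)  # supplier is paid on downstream output
    return own_pay + theta_c * pay_c + theta_nc * pay_nc

grid = np.linspace(1.0, 99.0, 9801)
for theta_nc in (0.5, 0.2):  # conflict modeled as a drop in theta_nc
    best = grid[np.argmax(supplier_utility(grid, 0.5, theta_nc))]
    total_output = processed(best) + processed(100.0 - best)
    print(f"theta_nc={theta_nc}: coethnic share={best:.0f}, "
          f"total output={total_output:.2f}")

With equal weights the split is even; when the weight on the non-coethnic processor falls, flowers shift toward the coethnic processor and total output – and with it the supplier’s own pay – declines. That is the horizontal distortion, and its deepening is one way to read the post-conflict output gap.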

How did the response to conflict of distortionary discrimination at work vary across individuals? Modeling θC and θNC as parameter values shared by all workers is a simplification: in reality some workers will have a higher taste for discrimination than others. Figure 9 plots the distribution, across individual suppliers, of the difference in output between homogeneous and mixed teams supplied, before and after conflict began. It appears that most suppliers discriminate against non-coethnic processors during the pre-conflict period. Conflict led to an increase in the output gap between homogeneous and mixed teams supplied for most upstream workers, but also to a notable widening of the distribution of the output gap. The figure indicates that some upstream workers respond more to conflict than others, differentially increasing the extent to which they discriminate against non-coethnics downstream.

Some workers in the sample were more exposed to the conflict period of early 2008 than others. Though the workers at the plant and their cohabiting family members were not themselves directly affected, 22 percent of workers report having “lost a relative” during the conflict. The decrease in output in mixed teams when conflict began was significantly greater in teams supplied by such workers, as seen in columns 1 and 2 of table 10. These results indicate that personal grievances exacerbate individuals’ workplace response to conflict.

Younger individuals may have more malleable social preferences. In columns 3 and 4 of table 10 we see that, although output in homogeneous teams led by old and young suppliers was similar, output in mixed teams with young suppliers was significantly higher during the first year of the sample period. It appears that young suppliers were less discriminatory towards non-coethnic co-workers than old suppliers before conflict began. This finding is consistent with an expectation expressed by many Kenyan commentators before 2008.

It was argued that the young coming of age at the time would be the country’s first “post-tribal” generation. The results of table 10 also show, however, that the decrease in output in mixed teams when conflict began was significantly greater in teams with young suppliers. Output in mixed teams with young suppliers was no higher than in mixed teams with older suppliers during the conflict period. These results suggest that youth start out relatively tolerant, but that the attitudes of the young towards non-coethnics respond more negatively to conflict.

The results discussed in this section paint a consistent picture of how distortionary attitudes towards workers of other ethnic groups respond to ethnic conflict. It appears that conflict may entail significant hidden economic costs because distortionary social preferences are updated in a “Bayesian” fashion when conflict occurs, at least in the Kenyan context. A serious episode of violent, political conflict between the Kikuyu and Luo blocs led to a significant shift in the average weight attached to the well-being of non-coethnics, a shift that did not decay in the nine months after conflict ended. The negative response was greater among those more affected and among those likely to have a less cemented “prior”.

Segregating workers of different ethnic groups would appear to be the profit-maximizing response to distortionary discrimination, from the viewpoint of the econometrician. The results in tables 4 and 8 suggest that segregation would have increased plant productivity by four percent before conflict and by eight percent after conflict began, relative to the status quo of arbitrary assignment to teams. Are these expected benefits of a magnitude that is likely to be salient to supervisors? Consider the output increase expected from optimally assigning workers to teams and positions based on ethnicity, productivity, or both. If we view a worker as having three characteristics – the tercile to which she belongs in the distribution of processor productivity, the tercile to which she belongs in the distribution of supplier productivity, and her ethnicity – then an average output will be associated with teams of each of 3 ethnicity configurations, 18 productivity configurations, and 63 ethnicity-productivity configurations. (A deliberately simplified version of the resulting assignment problem is sketched below.)
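The sketch collapses the type space to ethnicity-only configurations with two groups and uses made-up expected outputs and worker counts; none of the numbers are from the paper.

# Choose how many teams of each ethnicity configuration to form so as
# to maximize expected output, subject to the workers available.
import numpy as np
from scipy.optimize import linprog

# Team types (supplier; two processors), ethnic groups A and B:
# 0:(A;AA) 1:(A;AB) 2:(A;BB) 3:(B;AA) 4:(B;AB) 5:(B;BB)
exp_output = np.array([110, 95, 90, 90, 95, 110], dtype=float)  # assumed

# Worker "budget": rows are A suppliers, B suppliers, A processors,
# B processors; entries count workers used per team of each type.
A_eq = np.array([
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [2, 1, 0, 2, 1, 0],
    [0, 1, 2, 0, 1, 2],
])
b_eq = np.array([50, 50, 100, 100])  # assumed headcounts

# linprog minimizes, so negate the objective. The real problem is
# integer-valued, but the LP relaxation conveys the idea.
res = linprog(-exp_output, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("max expected output:", -res.fun)
print("teams of each type:", res.x.round(1))

Here full segregation – forming only the two homogeneous team types – is the corner solution the program selects; the paper’s 63-type problem has the same structure, just with a larger type space.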

In theory, supervisors can then solve the linear programming problem of maximizing total output subject to the expected output associated with a given type of team and the “budget set” of workers available. The optimal assignments and associated expected output gains are shown in table 11. Throughout the period observed, the output gains expected from assigning workers to teams based on ethnicity were larger than those expected from assigning workers based on productivity – twice as large during the conflict period. In fact, segregation achieves about half the output gains of the “complete” solution. The complete solution assigns workers optimally to fully specified teams and thus takes into account interactions between the three workers’ ethnicities and productivities – a complicated “general equilibrium” problem that is likely infeasible for supervisors to solve. It thus appears that the expected productivity gain of segregation is sizable relative to the expected effect of changing other comparable factors under supervisors’ control.

It is possible that a similar effect occurs in a Kenyan workplace, although in a situation in which mixed teams are characterized by discriminatory behavior it is also possible that interaction increases tensions and exacerbates ethnic biases. To investigate, I compare the behavior of suppliers with greater versus lower experience working with non-coethnics in table 12. Focusing on output during the second half of 2007 and the first six weeks of 2008, I contrast teams with suppliers with above-average versus below-average time spent in mixed teams during the first half of 2007. Because most workers at the farm had already spent significant time working with non-coethnics before 2007, columns 3 and 4 restrict the sample to those with below-average tenure. The results show no significant effect of time spent working with non-coethnics on the output gap between mixed and homogeneous teams supplied, neither before nor after conflict began. Workers who have interacted more with individuals of other ethnic groups thus appear no less discriminatory in production. The results in table 12 do not rule out the possibility that complete segregation between the two ethnic groups would, over time, have a negative influence on attitudes or behavior towards non-coethnics, however. Carrell, Sacerdote, and West find that implementing an estimated optimal assignment can have unintended consequences due to unforeseen responses on the part of individuals to out-of-sample assignments. In the context of the sample farm – in a country that has experienced periodic violent clashes between ethnic groups, and where workers of different ethnic groups reside in the same quarters – complete segregation at the plant could, for example, lead to increased social tensions on the farm.

Nevertheless, it is arguably surprising that a supposedly profit-maximizing firm chose to leave large productivity gains “on the table” by not segregating workers of different ethnicities. Ethical considerations add complexity to the issue of team assignment in Kenya, but we would perhaps expect longer-term costs of segregation to be incurred primarily by society rather than the firm itself, in which case there are grounds for government intervention to enforce integration within firms. Becker pointed out that discriminatory employers should go out of business as their profits suffer.

A priori, the same argument should hold for flower farms that allow workplace discrimination to influence productivity. However, the floriculture business is not particularly competitive, as evidenced by high profit margins. Moreover, as the literature in macroeconomics on across-firm misallocation has highlighted, it is not necessarily the most productive firms that survive in poor countries’ economies. Further, plant managers did respond to the increase in distortionary discrimination when conflict began, as we have seen. The introduction of team pay for processors was likely motivated by the decrease in productivity in diverse teams in early 2008. It is unsurprising that the dramatic differential decrease in mixed teams’ output when conflict began led managers to respond, even though the lower output observed in diverse teams during the first year of the sample period did not. A doubling of the output gap of diverse teams during a short period of time is likely more salient to managers than potential foregone productivity gains from arbitrary assignment to teams. It appears that managers considered an adjustment to contractual incentives a more desirable response to distortionary discrimination than segregating workers. But note that it is likely not possible to eliminate discrimination through contractual incentives without entirely breaking the link between workers’ output and pay. At the sample plant, vertical discrimination continued to significantly affect output after the introduction of team pay.

Evidence suggests that ethnic diversity negatively affects public goods provision and the quality of macroeconomic policies. While the possibility of an additional, direct effect on micro-level productivity has long been recognized, corresponding evidence is largely absent. In this paper, I begin by identifying a sizable, negative productivity effect of ethnic diversity in teams in Kenya. I do so using two years of daily output data for 924 workers, almost equally drawn from two rival tribes, at a flower-packing plant. The packing process takes place in triangular production units, one upstream “supplier” supplying two downstream “processors” who finalize bunches of flowers. I show that an arbitrary position rotation system led to quasi-random variation in teams’ ethnicity configuration. As predicted by a model in which different weight is attached to coethnic and non-coethnic downstream workers’ utility, suppliers discriminate both “vertically” – undersupplying downstream non-coethnics – and “horizontally” – shifting flowers from non-coethnic to coethnic downstream workers. By doing so, upstream workers lower their own pay and total output. I show that less distortionary, non-taste-based ethnic diversity effects are unlikely to explain this paper’s results. As Becker points out, significant aggregate effects “could easily result from the manner in which individual tastes for discrimination allocate resources within a free-enterprise framework”. Discrimination should lead to misallocation of resources in most joint production situations in which individuals influence the output and income of others. I take advantage of two natural experiments during the time period observed to begin to explore how the productivity effects of ethnic diversity are likely to vary across time and space.
When contentious presidential election results led to political conflict and violent clashes between the two ethnic groups represented in the sample in early 2008, a dramatic, differential decrease in the output of mixed teams followed, as predicted by the model. The reason appears to be that workers’ taste for discrimination against non-coethnic co-workers increased. Using a reduced-form approach, I estimate a decrease of approximately 35 percent in the weight attached to non-coethnics’ utility in early 2008. A back-of-the-envelope calculation suggests that the increase in distortionary workplace discrimination may have cost the plant half a million dollars in annual profit, had it not responded. Six weeks into the conflict period, the plant implemented a new pay system in which downstream workers were paid for their combined output.
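Stylized notation helps show why such a pay change blunts the horizontal distortion but not the vertical one (a sketch in the spirit of the model above, not the paper’s formal derivation). Under individual pay each processor earns $w_j = \beta q_j$, so a supplier with weights $\theta_C > \theta_{NC}$ gains from shifting flowers toward the coethnic processor. Under team pay, $w_C = w_{NC} = \tfrac{\beta}{2}(q_C + q_{NC})$, and the supplier’s objective becomes $w_s + (\theta_C + \theta_{NC})\tfrac{\beta}{2}(q_C + q_{NC})$, which depends only on total downstream output: the incentive to misallocate flowers across processors disappears. The vertical margin survives, however, because the supplier’s total supply still determines $q_C + q_{NC}$, and a low combined weight on downstream workers still depresses the effort exerted for teams containing non-coethnics – consistent with vertical discrimination continuing to affect output after the reform.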