Monthly Archives: November 2024

A solution to these obstacles to potential Pacific coast shrimp farming is to culture a local species

Watercress is traditionally grown in outdoor aquatic systems, but there is increasing interest in its suitability for indoor hydroponic systems, such as vertical farms (VFs). VFs utilize hydroponic or aeroponic systems that allow plant stacking in multiple vertical or horizontal layers, increasing the effective use of space and other resources, particularly water. Indoor vertical agriculture is well-suited to the production of leafy greens: their fast growth rate, high harvest index, low photosynthetic energy demand, and compact shape make them ideal for indoor farming technologies. VFs provide multi-layered indoor crop production space using artificial lights and soilless cultivation systems. With the capacity to control lighting, ventilation, irrigation, nutrient levels, and abiotic stress, VFs offer the potential of high and predictable yields and uniform produce, alongside reduced water use and often no pesticide applications whatsoever. The future of indoor food production is likely to include other high-value horticultural crops such as leafy greens, culinary herbs, strawberries, and flowers. Breeding targets for these crops include short life cycles, low energy demands, improved yield, small root systems, as well as novel sensory and nutritional profiles. VF systems are gaining traction for commercial-scale cultivation, partly due to their ability to deliver locally-grown food to urban areas at lower environmental cost, and to deliver food in locations where fresh produce cannot easily be grown. These systems also offer a unique opportunity to tailor crop characteristics to changing consumer preferences by altering environmental conditions such as light quality, where blue light has been used to increase the glucosinolate content of several Brassica species, including pak choi and watercress.

Here we investigate differences in yield, morphology, and glucosinolate content of watercress grown under three different cultivation systems. This research provides foundational information suggesting that high-yield watercress production is possible in vertical farming systems and that watercress quality may be further enhanced for improved anti-cancer characteristics. We have shown that the quality and yield of the leafy green salad crop watercress can be significantly improved by growth in an indoor vertical hydroponic system enriched in blue light. The CDC ranked watercress as the most nutrient-dense crop based on the content of 17 nutrients associated with reducing chronic disease risk. Our results show the yield and nutrient content of watercress can be enhanced even further by utilizing a novel vertical indoor growing environment rather than the current commercial system used in the UK. Yield increases may be explained by the ability to tightly control environmental conditions in the VF, generating a consistently optimal nutrient and temperature environment. The increase in glucosinolate content from the UK to CA is probably explained by heat stress in CA, where the maximum recorded temperature was 43.8 °C compared to 30.9 °C for the UK. Glucosinolate accumulation is associated with improved heat and drought stress tolerance in Arabidopsis, and increases in GLSs are observed in heat-stressed Brassica rapa. The increases observed in GLS content in the VF can be explained by prolonged blue light exposure and a longer growth period. The mechanism by which different LEDs regulate GLS biosynthesis remains unclear, but a short-duration blue light photoperiod increased total aliphatic GLSs in broccoli. A similar result from genome-wide association mapping in Arabidopsis also revealed that blue light controls GLS accumulation by altering the PHOT1/PHOT2 blue light receptors.

Increasing blue light in the VF increased total GLS content and, although not statistically significant, this is consistent with the study by Chen et al. showing increased GLS content with increased blue light. Rosa et al. showed that GLS concentrations are more sensitive to the effect of temperature than of photoperiod, which is consistent with our results for total GLSs between the UK and CA sites. Our results support the idea that indoor farm cultivation is effective in promoting health-beneficial chemical properties. Watercress produced PBGLS in both VF treatments, but this compound was not detected in either the UK or CA trials. PBGLS strengthens the nutrient profile of watercress. PEITC, derived from PEGLS, has already been proven to be an extremely effective naturally-occurring dietary isothiocyanate against cancer. Inhibitory potency increases several-fold as the glucosinolate alkyl chain gets longer, suggesting that PBITC, with its elongated alkyl chain compared to PEITC, may contribute an additional health benefit to this superfood, although this remains to be proven. It is evident that watercress is particularly well-suited to indoor hydroponic growing systems, where plants exhibited the highest-yielding leafy growth with improved nutritional profiles, ideal for consumer preferences. Altering the blue:red light ratio may further enhance the anti-cancer properties of this highly nutritious salad crop, but further studies are required to hone the light recipe for indoor cultivation.

The premise of this study is that an increasing number of the world's fisheries are producing at or exceeding their maximum yield, while world demand for seafood increases. Global per capita seafood consumption has increased steadily from 9.9 kg in the 1960s to 19.2 kg in 2012. This skyrocketing demand, in conjunction with population growth and increased fishing efficiency, has led to overexploitation of many marine fish stocks. Technological advancements have made accessible areas that were once too remote or too deep to be exploited. Commercial fishing involves deploying hundreds of miles of nets and dragging various apparatus along bottom habitats.

A side effect of this is environmental damage throughout ocean ecosystems, much of which is unobservable and immeasurable. Fishery management authorities have started adopting ecosystem-based management approaches, understanding that fish populations depend upon habitat integrity. Many fisheries stipulate gear restrictions and limited access, but enforcement, efficacy, and consideration of economic and social factors all vary on a case-by-case basis. Despite increased efficiency, fleet size, and access, wild capture fisheries' annual production has stabilized at 1990 levels, varying up and down about three percent since 1998. The relative consistency of wild catch over the past two decades, accompanied by periodic dramatic stock collapses, such as the anchoveta crisis in 1998 and today's California sardine fishery closure, suggests wild-capture marine food fish production may be at capacity. Yet to date, seafood production has risen to meet demand, outpacing world population growth twofold in annual growth rates since the 2000s. This has been made possible by the aquaculture industry, which has grown rapidly in the past few decades: aquaculture contributed 5 percent of seafood production in 1962, and an impressive 49 percent in 2012. From some perspectives, aquaculture is a means to contribute to global food security while alleviating pressure on wild stocks and preventing environmental damage from impactful fishing gear. But to others, farmed seafood comes with its own variety of health and environmental risks, and is neither an adequate nor sustainable substitute for its wild counterpart. In regard to U.S. seafood consumption, shrimp is the most consumed product, weighing in at 1.9 kg per year consumed by the average American. Despite this popularity, we remain dependent upon foreign production for upwards of 90% of shrimp products. In 2015 the U.S. imported almost 1.3 billion pounds of shrimp, valued at over $5.4B.
The aquaculture industry continues to expand, and import data prove shrimp is a top priority for the U.S. However, ecosystem-based assessments of commercial fisheries particularly malign shrimp fisheries. The primary gear type for shrimp fisheries is trawl gear. Certain types of trawls earn the highest rank among fishing gear in terms of physical and biological habitat damage. Also, trawling for small species leads to massive amounts of bycatch: roughly five pounds of non-target species per pound of shrimp in U.S. fisheries. Some bycatch is retained, but the global average discard rate for all shrimp trawl fisheries is more than 62 percent, over twice the rate of any other fishery. When shrimp farming first became profitable in the 1970s, it was lauded by some as a 'Blue Revolution', a way to avoid the environmental havoc described above. However, rapid, unregulated expansion of intensive fish farms earned farmed seafood a reputation for being unhygienic and environmentally destructive in its own ways. Low survival rates, disease outbreaks, concentrated waste effluent, and undesirable feed ingredients soon disillusioned environmentalist support. Over the decades, though, aquaculture technology has evolved considerably, enabling sustainable feed alternatives, reduced waste, and more efficient, cleaner products overall, at least in countries with effective regulation. The majority of our current imports come from penaeid shrimp farms in India, Indonesia, and Ecuador, countries with less stringent health and environmental standards than those of the U.S. One way to meet the growing domestic demand for shrimp, as well as ensure environmental integrity, is to produce our own. Marine shrimp aquaculture exists in the United States, but import statistics show that domestic products constitute a negligible amount of our annual consumption.
Researchers in the 1970s looked into various shrimp species for farming along the Pacific coast, but studies were abandoned as it proved far cheaper at the time to get products from abroad and shrimp farming became dominated by warm water species .

Currently, people are becoming more cognizant of the origins and environmental impacts of their food. A locally-farmed shrimp could reduce the environmental footprint of long-distance imports, provide a fresher product to the consumer, and reduce ecosystem damage resulting from farming and fishing practices in unregulated regions. Major concerns and opposition regarding fish and shellfish farming include the risk of escape and subsequent introduction of an invasive species or pathogens. The spot prawn is native to the North Pacific and to this point has never been utilized as a commercial aquaculture species. There is an active wild capture fishery for spot prawn in California, Washington, Oregon, Alaska, and Canada. The California fishery is most active between Santa Cruz and San Diego, averaging 250,000 pounds per year. Only pots are used, as trawling for spot prawn is prohibited in all state waters. The fishery is regarded as relatively sustainable due to its small size, limited access, closure during peak spawning months, and the ban on trawling. However, California spot prawn earns only a "good alternative" score from Monterey Bay Aquarium's Seafood Watch due to potential damage to seafloor habitats caused by the traps. Furthermore, no surveys are conducted to estimate or monitor population abundance, and the bycatch-to-target ratio was only monitored during the 2000-2001 season, when it was found to be 1:1 in the south and 2:1 in the north. Stable catch, limited access, and gear restrictions may indicate a well-managed fishery, but in reality, much of the spot prawn population's health is unknown. Live spot prawns can reach $24 per pound ex-vessel and $30 per pound at markets due to their large size, sometimes six shrimp to a pound. In Japanese restaurants the large, cold-water shrimp is known as amaebi, a high-end sushi item.
Stateside Asian marketplaces are the primary consumers of California spot prawn, while the bigger fisheries in Alaska and British Columbia export a significant percentage of their landings to Japan or global sushi markets. Farming P. platyceros is not a call to derail the wild-capture fishery, but a suggestion that supplementing this seasonal fishery with a farmed option may be a prudent way to support local industry and avoid increasing ecosystem stress or competition on the water. In 1970, Price and Chew of the University of Washington Fisheries Research Institute undertook the first laboratory rearing of P. platyceros. Until this study, the only descriptions of larval stages were drawn from plankton samples in the 1930s. The culmination of their study is the definitive morphological guide to spot prawn development through stage IX. Price and Chew caught ovigerous females in Washington and reared larvae from the females and from loose eggs that had detached during transport. Loose eggs were kept suspended on a screen in a unique recirculating system with 10 µm-filtered, aerated, UV-sterilized saltwater. In this setting, eggs could last up to sixty days with no fungus growth. There is no comment as to when the detached eggs hatched in relation to the eggs carried by females, but both hatched successfully. It took females 7-10 days to release all of their progeny once hatching began.

The location of the pivot point at half of the bucket height also reduces the amount of weight the users handle

Therefore, when the term "children" is used in the context of agricultural labor, the implication is that the parents oversee the welfare and legal circumstances related to this working yet "dependent" population. "Youths" and "adolescents" are usually used interchangeably and, according to the World Health Organization, refer to the period of transition from childhood to adulthood, commonly between the ages of 10 and 19. Hence, in this paper, as in most of the related literature, the terms children, youths, and adolescents will be used interchangeably and will refer to the age range of 10-17, unless otherwise specified; whereas the term "adult" will refer to ages 18 and above. Regardless of the hazardous nature of farm-related activities, in agriculture, age does not limit participation and children may do the work of an adult. Youths 13 to 15 years old are usually expected to do much of what adults can do. Additionally, the farm-related activities youths are involved in correspond strongly to geographical region and commodity type. The same study indicates that, in the U.S., Midwestern youths are primarily assigned to animal care and farm maintenance jobs, compared to Western youths, whose tasks are mainly crop management. Every year in the U.S., approximately 126,000 hired youth farm workers aged 14-17 are employed in crop agriculture. These hired youth farm workers, especially the ones who work on non-family-owned farms, are generally involved in harvesting/picking tasks. Studies have documented harvesting tasks as being associated with potential LBD risk factors and back pain reports. In addition, Allread and colleagues have investigated the magnitude of LBD risk that youths are exposed to while performing tasks that are routinely performed by Midwestern farm youths.

They quantitatively measured the trunk kinematics of these youth workers, as well as workplace factors, while the workers performed 41 manual materials handling tasks, and found that the associated LBD risks of some tasks were comparable to those of industrial jobs with high LBD risks. Out of the 41 evaluated tasks, seven were placed in the high LBD risk category, corresponding to the LBD risks found in industrial jobs, and 24 were placed in the middle risk category. Work-related injuries among youth workers in agricultural settings are a serious problem. Estimates of annual farm-related non-fatal injuries range from 1,700 to 1,800 per hundred thousand child farm residents. Youths holding farm jobs simultaneously with non-farm jobs have a significantly higher proportion of injuries, of which sprains and strains are some of the most common types. In addition, muscle aches and strains of the back, shoulder, and other joints are described as everyday occurrences among youths working on farms. A study focusing on youth workers in Wisconsin fresh markets also revealed that over half of the youth workers reported experiencing low back discomfort, while 25% reported disabling discomfort. Intervention studies with the specific aim of reducing LBD risk factors associated with tasks performed by working farm youths have been somewhat limited in the literature, with few notable exceptions. Hence, the purpose of this study is to introduce and evaluate two interventions for bucket handling on farms. The two interventions and their development are first presented, followed by two evaluation phases: "Phase 1" is an intervention evaluation with adult volunteers, and "Phase 2" is a confirmatory evaluation with youth volunteers from a local high school. The evaluation approach focused on the effectiveness of these two interventions in reducing LBD risk during the lifting, carrying, and dumping of water buckets.

Subjective responses were also obtained during the two testing phases. The job of handling water/feed buckets entails three main tasks: 1) lifting the bucket, 2) carrying the bucket to the destination, and 3) dumping the contents of the bucket at the destination. This job is commonly performed by youths on farms, where they transport water or feed from a source, such as a water pump or a barn, to animal feeding containers. The objective of this phase of the study was to develop tools expected to reduce LBD-related risk factors for youths performing manual handling of water/feed buckets. The design approach was to develop two tools: one to simultaneously address the carrying and dumping tasks, and another to address the lifting task. In setting the tools' design criteria, the research team relied on existing agricultural, biomechanical, and ergonomic literature and guidelines, and consulted with colleagues and designers who are themselves farmers, grew up and worked on farms during their youth, and were familiar with the requirements and conditions surrounding bucket handling on farms. Kepner-Tregoe (KT) decision analysis was performed to select the design of the major components for each task: the lifting aid, the carrying device, and the dumping mechanism. Constraints and criteria used in the analysis are described. Based on the brainstormed design ideas, prototypes were built for testing purposes. Prototype testing results were then applied in the KT analysis for design comparisons and selection. The analysis results are presented in the following section for each of the three tasks. Carrying—A wheeled design was chosen for the carrying method. The main decision was the number of wheels employed in the design. The type and size of the wheels were determined based on other criteria and are discussed below. Three options (two-wheel, three-wheel, and four-wheel) were evaluated and compared using the KT analysis.

Two-wheel design—In order to use a two-wheel design, the user has to bear partial weight of the handled object. This design approach is used extensively on farms. Three-wheel design—A three-wheel design can support the full weight of the handled object. It also allows the user to control the cart by maneuvering the pushing handles. However, during field testing, this prototype design tended to tip over on dirt roads. Four-wheel design—This design can also support the full weight of the handled object. Additional handles are required at the rear side of the cart, since the rear wheels prevent the user from reaching the handles in the front. In addition, the design exhibited greater stability on dirt roads than the three-wheel design. The results of the KT analysis are shown in Table 1. A four-wheel design met all the design constraints and was pursued in the final design. Dumping—Three dumping mechanisms were analyzed and compared using KT analysis. Table 2 summarizes the analysis results. Type A—While the bucket hangs in a frame that has a pivot point fixed to the carrier, users can dump by tilting the frame. The activation force is greatly reduced because of the difference in the acting moment arms. The moment arm for the activation force (MA2) is adjusted to be three times the moment arm associated with bucket weight (MA1), with MA1 approximately equal to the radius of a five-gallon bucket. Type B—An air-cylinder-aided tilting mechanism that utilizes an air actuator to reduce the force required for the dumping process. An electric-powered compressor is required as the source of compressed air. Type C—Rails on both sides of the bucket, with wheels mounted on its sides, were considered. Dumping is completed by sliding the bucket down along the rails. Comparing the three mechanism types, Type A was selected as the option fulfilling all the desired design constraints and most of the design criteria.
Lifting—The basic design for the lifting aid, a rod with a handle on one side and a hook at the other, allows the users to reach the handle of the bucket without bending down. However, the selection decision depended on the lifting mechanism, for which three types of mechanisms were compared . Table 3 shows the KT analysis results for this comparison.

Based on these results, the two-handed operation was selected over the other two mechanisms. Design Specifications and Modifications—Based on all the KT analyses presented above for each task, a carrier with four wheels was deemed best for the carrying task; a tilting mechanism that users operate from the same position as they push the cart was best for the dumping task; and a two-handed tool was best for the lifting task. For the carrying and dumping tasks, an intervention, namely the Ergonomic Bucket Carrier (EBC), was developed. For the lifting task, another intervention, called the Easy Lift (EL), was constructed. The dimensions of the prototypes of the EBC and EL were based on the anthropometric data of youths between ages 12.5 and 17.5 and environmental factors associated with dumping/carrying/lifting of water/feed buckets. The dimensions of several parts of the EBC, including the height of the pushing handle and the height of the bucket stand, were made adjustable to match users' anthropometrics and needs. Ergonomic Bucket Carrier—The final design of the EBC is shown in Figures 6 and 7. To use the device, the user first loads the bucket onto the EBC, pushes the device to the destination, and then activates the dumping device by pushing a handle. The wheels were selected based on commercial availability, outdoor road conditions, and price. Pneumatic wheels less than 15 inches in diameter were selected to meet outdoor road conditions and to provide close contact with the destination container. The wheels selected for the front end were fixed pneumatic wheels with a 14-inch diameter, whereas for the rear end they were swivel pneumatic wheels with a 10-inch diameter. Minor changes were made to the original designs to improve usability and performance.
For example, the position of the handle for activating the dumping mechanism was changed from vertical to horizontal so that users could activate the dumping mechanism from the same position they were in to push the EBC, which improves the efficiency of the process. The length of the handle remained unchanged, and so did the moment arms and related forces. In addition, the positions of the cart handles were angled outward so that users' wrists could remain in a neutral position. Easy Lift—The final design of the EL is shown in Figure 8. A power grip design is used on the grip handle. The power grip is an angled handle that keeps the user's wrist in a neutral position so that the user can utilize his/her maximum power grip strength. The spinal loads associated with the use of the EL are also expected to be lower compared to manual lifting of the bucket, due to the anticipated reduction in forward flexion and spinal moment arms. During lifting and carrying, the user hooks the EL's U-shaped hook to the bucket's handle to lift and carry the bucket. For dumping, the user sets the bucket on the floor, rotates the EL to hook a long screw to the bottom of the bucket, and uses the bucket's handle and the EL to lift and dump the bucket into the container. The results of this study showed that the developed interventions could be effective in reducing the overall LBD risk for the bucket handling job; however, the tools differed in their effectiveness in risk reduction among the three tasks. The overall LBD risk for the manual job was reduced from 58% to around 52% for the EBC and 50% for the EL. This seemingly modest reduction is due to the fact that the overall LBD risk for the "job" is based on the maximum risk factor values observed among all three subtasks. An approach that incorporates the advantages of each of the two introduced interventions would yield a more substantial reduction in the overall LBD risk. Therefore, it is recommended that the EL be used for lifting the bucket, whereas the EBC should be used during the carrying and dumping tasks. This combined approach is expected to provide maximum reduction in the overall LBD risk, since it capitalizes on the strength of each tool in reducing the risk factors within the subtasks. This approach is expected to be especially effective if the job requires the bucket to be carried over long distances.

Explicit formulations use the data defined in the point cloud to define linear approximations to the SDF

The explicit formulation by Hicken and Kaur uses all points in the point cloud to define the implicit function and shows favorable decay in surface reconstruction error as the number of points in the point cloud, NΓ, increases. This structure has been used in combination with RBFs for hole-filling in [37] and with anisotropic basis functions for representing sharp corners in [40]. Another approach is to construct a uniform grid of points to control the implicit function. Unlike the aforementioned approaches, the distribution of points is decoupled from the resolution of the point cloud. As a result, deformations to the geometric shape can be represented without loss of accuracy near the surface, as shown by Zhao et al. This makes it a popular structure in partial-differential-equation-based reconstruction methods that evolve the surface during reconstruction, such as in [47, 48]. In general, more points representing the implicit function are required to achieve the same level of accuracy as other approaches. As a result, implicit functions defined by a uniform grid are more computationally expensive to solve for, in both time and memory usage, than the aforementioned approaches, as experienced by Sibley and Taubin, though the cost can be reduced by a GPU-based multigrid approach as implemented by Jakobsen et al. The signed distance function (SDF) presents an ideal candidate for implicit surface reconstruction and geometric non-interference constraints. It is known that the zero level set of the SDF is a smooth representation of the points in a point cloud, and its gradient field is a smooth representation of the normal vector field from the normal vectors in a point cloud. As a result, many formulations to approximate the SDF have been researched for implicit surface reconstruction.

We note that other methodologies exist, such as wavelets and a Fast Fourier Transform based method, that fit a smooth indicator function instead, but these are less applicable to non-interference constraints, where a measurement of distance is desired. We identify four categories of formulations that approximate the SDF in some way: explicit formulations, interpolation formulations with RBFs, PDE-based formulations, and energy minimization formulations. Explicit formulations use the point cloud data to define linear approximations to the SDF, then apply smoothing to these linear approximations in order to define the level set function. Risco et al. present the simplest approach, which uses the nearest edge and normal vector to define the function explicitly. The resultant constraint function is piecewise continuous but non-differentiable at points where the nearest edge switches. Belyaev et al. derive a special smoothing method for defining signed Lp-distance functions, which yields a continuous and smooth transition between piecewise functions. Hicken and Kaur use modified constraint aggregation methods to define the function in a smooth and differentiable way. Upon the investigation of Hicken and Kaur, the signed Lp-distance functions give poor approximations of the surface. Additionally, Hicken and Kaur's formulation is shown to increase in accuracy as the data in the point cloud (the number of points NΓ) increases. We identify Hicken and Kaur's explicit formulation as a good candidate for enforcing non-interference constraints, as it is continuous and differentiable with good accuracy. Another method to construct the level set function is to solve an interpolation problem given an oriented point cloud P. Because the data points of P always lie on the zero contour, nonzero interpolation points for the implicit function can be defined on the interior and exterior, as originally done by Turk and O'Brien. Radial basis functions are then formulated to interpolate the data.
To avoid overfitting, thin-plate splines can be used to formulate the smoothest interpolator for the data, as noted in [37, 45].

Solving for the weights of an RBF involves solving a linear system, which is often dense and very computationally expensive due to the RBFs' global support. Turk and O'Brien solve for up to 3,000 RBF centers, and improvements by Carr et al. allow up to 594,000 RBF centers to be fit in reasonable time. On top of the significant computational expense, interpolating RBFs have been criticized for producing blobby reconstructions that poorly represent sharp features in geometric shapes. Other formulations first construct a vector field; the vector field is then integrated and fit, usually by least squares, to make the zero level set fit the point cloud. We classify the methods that solve for the vector field as the solution to a partial differential equation as PDE-based methods. Poisson's method applies variational techniques to Poisson's equation to construct a vector field. Improvements to this method add penalization weights to better fit the zero contour to the point cloud in [54]. Tasdizen et al. prioritize minimal curvature and minimal error in the vector field by solving a set of coupled second-order PDEs to derive their level set function. Zhao et al. use the level set method, originally introduced by Osher and Sethian, for surface reconstruction, with the advantage of modeling deformable shapes. In the aforementioned PDE-based methods, the setup for the implicit function reduces to solving a PDE by time-stepping, or a sparse linear system in the case of Poisson's equation. In the analysis done by Calakli and Taubin, they found that Poisson's method often over-smooths some surfaces. We also note that solutions to PDEs are more difficult to implement in practice than other methods.

Aquaculture is an important contributor to the Irish economy, producing products to the value of €167 million in 2016, including €105 million from farmed Atlantic salmon. The industry is particularly important along the western seaboard of Ireland.

Most Irish salmon farming is certified organic. Salmon farming in Ireland is associated with an intricate network of fish movements within and between the different types of salmon farms. There are three different farm types: broodstock, freshwater, and seawater farms. In earlier work, social network analysis was used in combination with spatial epidemiological methods to characterize the network structure of live farmed salmonid movements in Ireland. It was demonstrated that characteristics of the network of live salmonid fish movements in Ireland would facilitate infection spread processes. These included a power-law degree distribution [that is, "scale-free"], short average path length, and high clustering coefficients [that is, "small-world"], with the presence of farms that could potentially act as super-spreaders or super-receivers of infection, and few intermediaries of fish movement between farms, so that infectious agents could easily spread, provided no effective barriers are placed within these farms. A small proportion of sites play a central role in the trade of live fish in the country. Similarly, we demonstrated that highly central farms are more likely to have a number of different diseases affecting the farm during a year, diminishing the effectiveness of on-farm biosecurity measures, and that this effect might be explained by an increased chance of new pathogens entering the farm environment. This is a very important area of research in aquaculture, especially considering that the spread of infection via fish movement is considered one of the main routes of transmission. Mathematical models and computer simulations offer the potential to study the spread of infectious diseases and to critically evaluate different intervention strategies. Through access to real fish movement data, these models can be programmed to incorporate both the time-varying contact network and data-driven population demographics.
However, there are considerable challenges when stochastic simulations are conducted using livestock data, both computationally, including the need for efficient algorithms, and with model selection and parameter inference. An efficient modeling framework for event-based epidemiological simulations of infectious diseases has recently been developed, including the use of a framework that integrates within-farm infection dynamics as continuous-time Markov chains and livestock data as scheduled events. This approach was recently used to model the spread of verotoxigenic Escherichia coli O157:H7 in Swedish cattle.

Cardiomyopathy syndrome (CMS) is a severe cardiac disease of Atlantic salmon. It was first reported in the mid-1980s in farmed salmon in Norway and later detected in several other European countries, including the Faroe Islands, Scotland and, in 2012, Ireland. CMS generally presents as a chronic disease, leading to long-lasting, low-level mortality, although some individuals experience sudden death. At times, however, CMS can present as an acute, dramatic increase in mortality associated with stress. A recent Norwegian study identified risk factors for developing clinical CMS, including stocking time, time at sea, a previous outbreak of pancreatic disease or heart and skeletal muscle inflammation, and hatchery of origin. The economic impact of CMS is particularly serious as it occurs late in the life cycle, primarily during the second year at sea, by which time the incurred expenditure is high. No effective preventive measures are known, and there is no treatment available. In 2009, CMS was identified as a transmissible disease, and it has been linked, in 2010 and 2011, to a virus resembling viruses of the Totiviridae family.

The discovery of this virus, piscine myocarditis virus (PMCV), has contributed to increased knowledge about the disease, including the development of new diagnostic, research, and monitoring tools. The agent is spread horizontally between farms at sea, although there is some indication of a possible vertical transmission pathway. Recent Norwegian research has shown that PMCV is relatively widespread, including in geographic regions and fish groups without any evidence of CMS. The mechanisms leading to progression from PMCV infection to CMS are currently unclear. CMS is present in Ireland. The first recorded outbreak of CMS occurred in 2012, associated with low-level mortalities over a period of 4–5 weeks followed by increased mortalities during bath treatment for sea lice. CMS is not a notifiable disease in Ireland, and there are no systematic records of its occurrence. Nonetheless, anecdotal information from field veterinarians and farmers suggests that CMS occurrence has steadily increased over the years. A retrospective study was recently conducted, using real-time RT-PCR with archived broodstock samples dating back to 2006, which suggests that PMCV may have been introduced into Ireland in two different waves, both from the southern part of the range for PMCV in Norway. PMCV was found to be largely homogeneous in Irish samples, with limited genetic diversity. Further, the majority of PMCV strains had been sequenced from fish that were not exhibiting any clinical signs of CMS, which suggests possible changes in agent virulence and/or the development of immunity in Irish farmed Atlantic salmon. This paper describes the use of data-driven network modeling as a framework to evaluate the transmission of PMCV in the Irish farmed Atlantic salmon population and the impact of targeted intervention strategies. This approach can be used to inform control policies for PMCV in Ireland, as well as for other infectious diseases in the future.
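The continuous-time Markov chain formulation mentioned earlier (within-farm dynamics simulated event by event between scheduled movements) can be sketched with Gillespie's direct method. The SIS structure and the rates below are illustrative assumptions for a single farm, not the published model:

```python
import random

def gillespie_sis(S, I, beta, gamma, t_end, seed=1):
    """Exact stochastic simulation of within-farm SIS dynamics as a
    continuous-time Markov chain (Gillespie's direct method).

    Illustrative transition rates:
      infection: beta * S * I / (S + I)
      recovery:  gamma * I
    """
    rng = random.Random(seed)
    t = 0.0
    while t < t_end and I > 0:
        n = S + I
        rate_inf = beta * S * I / n
        rate_rec = gamma * I
        total = rate_inf + rate_rec
        # Time to the next event is exponentially distributed.
        t += rng.expovariate(total)
        if t >= t_end:
            break
        # Pick which event fires, proportional to its rate.
        if rng.random() < rate_inf / total:
            S, I = S - 1, I + 1
        else:
            S, I = S + 1, I - 1
    return S, I

S, I = gillespie_sis(S=990, I=10, beta=0.3, gamma=0.1, t_end=100.0)
print(S + I)  # the farm population is conserved: 1000
```

In the full framework, scheduled events (stocking, movements, harvest) would interrupt this inner loop and transfer individuals between farm compartments.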
Model parameters were estimated from a previous study, conducted in 2016 and 2017, which determined the prevalence of PMCV infection in Irish salmon farms by real-time RT-PCR. The sampling strategy was replicated to ascertain the status that could have been found if simulated farms had been sampled. In this study, sample collection was conducted on 22 farms from 30 May 2016 to 19 December 2017. A ranching farm is a freshwater broodstock farm that releases juvenile fish to the environment for conservation purposes. Some farms were sampled more than once over the course of the study, with a median of 3.5 samplings per farm in this group. A total of 1,201 fish were sampled during the study. Samples consisted of heart tissue across all fish age classes and ova. In this study, PMCV was detected at a low level in most sites, with only one clinical case of CMS occurring during the study period. We simulated sampling at each time point by randomly sampling fish within each farm and age category, as in the observed data set, from the number of susceptible and infected individuals at the time of sampling in the simulated farms. The aforementioned observational study also looked for PMCV in archived samples of Atlantic salmon broodstock from 2006 to 2016, seeking to determine whether the agent had been present in the country prior to the first case report in 2012. For this, archived samples of broodstock Atlantic salmon were tested for each year from 2006 through 2016, using 60 archived pools per year.
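The simulated-sampling step described above, drawing fish without replacement from a simulated farm and age group and counting positives, can be sketched as follows. The function name and the counts are hypothetical; the study's actual code is not shown:

```python
import random

def simulate_sampling(n_susceptible, n_infected, n_sampled, seed=None):
    """Replicate one field sampling event in simulation: draw n_sampled
    fish without replacement from a farm/age group containing
    n_susceptible + n_infected fish, and return the number of
    PMCV-positive fish in the sample (hypergeometric draw)."""
    rng = random.Random(seed)
    pool = [1] * n_infected + [0] * n_susceptible
    return sum(rng.sample(pool, n_sampled))

positives = simulate_sampling(n_susceptible=95, n_infected=5, n_sampled=20, seed=7)
print(0 <= positives <= 5)  # positives can never exceed the number infected
```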

The vertical entry model builds on the large and growing empirical literature on static market entry

The section also discusses the possible motives for, and effects of, vertical integration in this industry. Section 3.4 presents the econometric specification, followed by a description of the estimation strategy in Section 3.5. After describing the data in Section 3.6, I present the estimation results, including the findings from the policy simulation, in Section 3.7. A concluding section follows.

Vertical integration has three main types of efficiency effects. The first is the elimination of double marginalization. Double marginalization occurs when oligopolistic markups are charged in both the upstream and downstream segments. By eliminating the markup in the upstream segment, a vertically integrated firm enjoys a cost advantage over its unintegrated downstream rivals. Chipty finds evidence from the cable TV industry that is consistent with the elimination of double marginalization by vertically integrated firms. The second type of efficiency effect arises from the ability of vertically integrated firms to carry out higher levels of non-contractible, relation-specific investments. There is an abundance of empirical research – recent examples of which include Woodruff and Ciliberto – indicating the existence of such investment-facilitation effects. The third type of efficiency effect relates to the ability of vertically integrated firms to secure the supply of an intermediate good or, more generally, to improve coordination in logistics. Theoretical models that explore this aspect of vertical integration – namely, Carlton and Bolton and Whinston – find that the overall effect on market outcomes is indeterminate. Meanwhile, Hortaçsu and Syverson's empirical analysis of the cement and ready-mixed concrete industries finds that vertical integration motivated by logistical concerns has a price-lowering effect.
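The double-marginalization logic can be made concrete with the textbook linear-demand case (illustrative numbers, not estimates from the data): with inverse demand P = a − Q and upstream marginal cost c, an integrated monopolist charges (a + c)/2, while successive upstream and downstream monopolists end up at (3a + c)/4, a strictly higher final price.

```python
def integrated_price(a, c):
    """Single monopoly markup with inverse demand P = a - Q and unit cost c."""
    return (a + c) / 2

def double_marginalized_price(a, c):
    """Successive monopolies: the upstream firm sets wholesale price
    w = (a + c)/2 against the downstream best response Q = (a - w)/2,
    and the downstream firm then marks up again to P = (a + w)/2."""
    w = (a + c) / 2
    return (a + w) / 2

a, c = 10.0, 2.0
print(integrated_price(a, c))           # 6.0
print(double_marginalized_price(a, c))  # 8.0: the double markup raises price
```

Integration removes the upstream markup, which is why the integrated firm can undercut unintegrated downstream rivals.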

One issue that has not been addressed in the literature is the possibility that the positive efficiency effects of vertical integration may spill over to other firms that are not vertically integrated. Such efficiency spillovers would reinforce the price-lowering or quality-enhancing effects of vertical integration. Foreclosure typically occurs when a vertically integrated firm restricts supply of the intermediate good with the aim of raising the final good price. The significance of such practices has been the subject of continuing debate; Riordan summarizes the notable theoretical models that have shaped the discussion. The models are roughly divided into two groups: models where vertical integration raises downstream rivals' costs by dampening competition in the upstream market, and models where vertical integration allows upstream units to restrict the supply of the intermediate good and restore monopoly power. Most of the empirical analysis on vertical foreclosure looks directly at the effect of vertical integration on market outcomes. A general conclusion from this literature is that the effect of vertical integration varies across industries; higher final good prices due to vertical integration are found in some industries but not in others. This may be because foreclosure effects exist only in certain industries. It is also possible that in many industries, any foreclosure effects that do exist are offset by the efficiency effects of vertical integration. Useful experimental evidence on vertical foreclosure exists. Normann finds that vertically integrated players often employ strategies that raise their rivals' costs, as in Ordover et al. Similarly, Martin et al. demonstrate that the monopoly restoration model of Rey and Tirole is partially supported by experimental data. Thus, the experimental literature provides support for vertical foreclosure theory, not least because efficiency effects are absent by design.
Rosengren and Meehan and Snyder look at the effect of vertical integration on rival profits to make inferences about vertical foreclosure.

Both papers focus on the effect of a vertical merger announcement on the stock prices of unintegrated rivals. Rosengren and Meehan do not find that vertical mergers have a significant effect on independent downstream rivals. Thus, they find no support for foreclosure theory. Christopher Snyder's study of the British beer industry, described in Snyder, finds that an independent upstream brewery was harmed by vertical integration between rival breweries and downstream pubs. He interprets this as support for foreclosure theory. A common feature of the existing empirical work on foreclosure is that it assumes exogenous changes in market structure. A defining feature of recent studies such as Hastings and Gilbert and Suzuki has been to design dataset construction and estimation methods so that the exogeneity assumption can be made plausible. Hart and Tirole's theoretical paper contains some analysis of the effect of vertical integration on market structure formation. Essentially, the changes in profits brought about by vertical integration may induce unintegrated firms to become integrated themselves or to exit the market. Ordover et al. also investigate the possibility that vertical integration by one firm may lead another to become vertically integrated. The possibility that vertical integration can affect the market structure formation process – in other words, that vertical integration exhibits "market structure effects" – is an area that has only recently begun to receive attention from empirical economists. The leading example is Hortaçsu and Syverson. They find that in the cement and ready-mixed concrete industries, unintegrated upstream firms had higher exit probabilities in markets where a higher proportion of entrants were vertically integrated. This was apparently caused by higher productivity levels among vertically integrated firms. In other words, the efficiency effects of vertical integration may have led unintegrated upstream firms to exit.

The vertical entry model presented in this chapter is designed to estimate the effect of rival actions, including vertical integration, on firm payoffs. For instance, the estimated parameters can be used to calculate how an unintegrated firm's payoff changes when a rival pair consisting of one upstream firm and one downstream firm is replaced by a vertically integrated one. In this sense, the model is closest in spirit to the event studies of Rosengren and Meehan and Snyder that look at the effect of rival vertical integration on firm value. The weakness of the Rosengren and Meehan and Snyder studies – that the impact of vertical integration on market outcomes is not directly observed – is thus shared by the current model. A major concern is that foreclosure effects and efficiency effects often affect unintegrated firms' profits in the same manner, and thus tend to be indistinguishable. For example, if an unintegrated downstream firm's profit decreases as a result of rival vertical integration, it could be due to a foreclosure effect, an efficiency effect, or both. Therefore, even if a significant payoff impact is found, one may not be able to conclude anything about the existence of either of these effects. The advantage of my model is that different types of payoffs can be observed. For some of the payoff functions, the direction of foreclosure effects is different from that of efficiency effects, so that one is distinguishable from the other. For example, if we find that unintegrated upstream profits increase in response to rival vertical integration, the existence of foreclosure effects is implied. This is because efficiency effects can only have a negative impact on an unintegrated upstream firm's payoff. Similarly, if unintegrated downstream profits increase in response to vertical integration, it must be due to the positive spillover of efficiency effects, because any foreclosure effect would affect unintegrated downstream profits negatively.
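The identification argument above reduces to a sign pattern, which can be summarized schematically (this is a restatement of the text's logic, not the estimator itself):

```python
def implied_effect(segment, payoff_change):
    """Sign-based identification logic for unintegrated rivals.

    - Unintegrated upstream profits can only fall under efficiency
      effects, so a rise implies foreclosure.
    - Unintegrated downstream profits can only fall under foreclosure,
      so a rise implies an efficiency spillover.
    - A fall in either segment is consistent with both effects and is
      therefore uninformative on its own.
    """
    if payoff_change <= 0:
        return "ambiguous"
    return "foreclosure" if segment == "upstream" else "efficiency spillover"

print(implied_effect("upstream", +1))    # foreclosure
print(implied_effect("downstream", +1))  # efficiency spillover
print(implied_effect("upstream", -1))    # ambiguous
```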
Another characteristic of the vertical entry model is that, unlike in existing studies such as Hastings and Gilbert and Suzuki, one need not assume that firms' vertical integration decisions are exogenous. In fact, entire market structures, including the vertical integration status of individual firms, are modeled as endogenous outcomes. This implies that the data requirements for the current model may, in some sense, not be as demanding as those of existing methods. There is, however, a rather stringent requirement that the dataset contain observations from multiple markets where complete vertical market structures are observed. An additional strength of the vertical entry model lies in its ability to examine how vertical integration influences market structure formation. In addition to asking what happens to an unintegrated firm's payoff when a rival pair becomes integrated, one can ask whether the payoff impact is so large that the firm's entry decision changes. In this connection, a useful application of the model is to evaluate the effect of a policy that bans vertically integrated entry. How does such a ban affect the number of entrants in the upstream and downstream segments? While the answer is not clear a priori, the model and parameter estimates can be used to obtain one by simulation. This field has been motivated by the technical challenge of how to handle the number of rival entrants – a variable that is clearly endogenous – as a key argument of the firm's payoff function. The earliest studies are Bresnahan and Reiss and Berry.
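The free-entry logic underlying these early entry models can be sketched as follows: in a symmetric market, the equilibrium number of entrants is the largest N at which each of N entrants still earns a non-negative payoff, while an (N+1)-th entrant would lose money. The payoff function below is an illustrative stand-in, not an estimated one:

```python
def equilibrium_entrants(payoff, n_max):
    """Free-entry equilibrium in a symmetric entry game: the largest N
    such that payoff(N) >= 0 (Bresnahan-Reiss logic, assuming payoffs
    decline in the number of entrants)."""
    n_star = 0
    for n in range(1, n_max + 1):
        if payoff(n) >= 0:
            n_star = n
        else:
            break
    return n_star

# Illustrative payoff: variable profit shrinks with the number of
# rivals, against a fixed entry cost of 12.
payoff = lambda n: 100 / n**2 - 12
print(equilibrium_entrants(payoff, 10))  # 2
```

The vertical entry model extends this logic to two segments at once, with vertically integrated entry as a third firm type.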

Building on the pioneering work of Bjorn and Vuong , their econometric models explicitly allow market structure outcomes to be equilibria of entry games. For example, Berry’s model contains an equilibrium finding algorithm that is run at each iteration of the parameter search. These early papers focus exclusively on horizontal competition among firms that produce a homogeneous good. Coefficients on the number-of-rivals variables represent rival effects; from them, information on the degree of competitiveness in the market can be inferred. More recent papers such as Mazzeo , Seim , and Orhun expand the entry model framework to allow for product and spatial differentiation. For instance, in Mazzeo’s study of motel markets, potential entrants choose between entering the low-quality segment or the high-quality one. His results provide insight not only into the degree of competition within and across different market segments, but also into the process of market structure formation. For instance, the estimated parameters are used to predict how the product-differentiated market structure changes in response to increases in population and traffic. Another group of papers uses the entry model framework to investigate the existence of complementarities in firm actions . For example, Vitorino finds that the existence of agglomeration effects allows stores to profit from co-locating inside shopping centers. The present model examines the formation of vertical market structures in which suppliers and buyers trade and compete. As in the papers on horizontal entry, the estimates provide information on the degree of competition within each vertical segment. In addition, the complementarity between upstream entry and downstream entry can be examined. Finally, and most interestingly, the model should provide evidence on the competitive role of vertically integrated firms. Do vertically integrated firms hurt upstream rivals more than they harm downstream ones? 
Can some firms benefit from facing a vertically integrated competitor instead of an unintegrated pair of firms? Such questions are empirical in nature, and answering them is the subject of this chapter.

This section describes the process of vertical market structure formation in the generic pharmaceutical industry to motivate the econometric model. As described in Chapter 2, drug markets open up to generic competition when the patents and data exclusivities that cover the drug expire. In each market, generic drug manufacturers make entry decisions a few years before the market opening date. If an upstream unit decides to enter, it develops the active pharmaceutical ingredient (API) and submits a dossier, called the Drug Master File (DMF), to the Food and Drug Administration (FDA). A downstream entrant, on the other hand, procures the API – either from an outside supplier or from its own production – and develops the finished formulation. It then conducts bio-equivalence tests using the finished product and files an Abbreviated New Drug Application (ANDA) with the FDA. Two peculiar aspects of the generic entry process need to be addressed before providing a stylized description. The first is the possibility of patent challenge by generic entrants. As described at length in Chapter 2, the regulatory rules governing generic entry incentivize generic entrants to challenge the ability of originator patents to block entry, by way of a 180-day generic exclusivity awarded to the first-to-file paragraph IV ANDA applicant. The existence of such incentives pushes firms into a race to be first whenever a paragraph IV patent challenge is involved. The economics of such a race is very different from that of a conventional entry game where firms move simultaneously. For this reason, this chapter focuses only on markets that are not subject to a paragraph IV patent challenge.

Suppose also that the patent is the only one protecting a particular drug market

Then, the act of invalidation benefits not only the generic firm who made the investment, but also others who seek to enter the market. Because such public goods tend to be undersupplied in a competitive market, Congress created a system to reward the first generic firm to invest in a patent challenge. The reward is given out through a complex process that I summarize here. When a generic firm files an ANDA containing a paragraph IV certification to the FDA, it must directly notify the originator, as well as the other holders of the patents being challenged, about its filing. The originator must then decide within 45 days whether or not to initiate a patent infringement suit. If the originator decides not to sue, then the FDA is allowed to approve the ANDA and the generic may enter the market. If the generic firm is the first to have filed a substantially complete ANDA containing a paragraph IV certification, it is awarded a 180-day exclusivity in the generic market. This means that the FDA is not allowed to approve any other ANDA until 180 days have passed since the first generic product's commercial launch. If the originator decides to sue the generic entrant, then the FDA is stayed from giving final approval to the ANDA until 30 months have passed or until a court decides that the patent in question is invalid or not infringed, whichever comes sooner. The FDA may review the ANDA in the meantime, but it can only issue a tentative approval. Thus, the 30-month stay functions as an automatic preliminary injunction against the paragraph IV ANDA applicant. The main possible outcomes of the patent infringement suit between the originator and the paragraph IV applicant are the following: a victory for the generic entrant, a loss for the generic entrant, or a settlement between the two parties.

If the generic applicant wins the patent infringement suit, its ANDA receives final approval from the FDA once the other patents listed in the Orange Book expire. If the generic firm is the first to have filed a substantially complete paragraph IV ANDA, it obtains the right to 180-day exclusivity. The exclusivity period starts when the first-to-file generic begins commercial marketing or when a court decides that the patent in question is invalid or not infringed, whichever is earlier. If the generic firm loses the infringement suit for every challenged patent, then its ANDA is not approved until expiration of those patents or until the end of the 30-month stay. Even if the firm is the first-to-file paragraph IV applicant, it is not awarded the 180-day exclusivity, because the right to exclusivity disappears with the expiration of the challenged patents. If the generic and originator firms decide to settle the patent infringement suit, the generic firm's ANDA is approved only after the 30-month stay. If the generic firm is the first-to-file paragraph IV applicant, it becomes eligible for 180-day exclusivity, which is triggered by the generic product's commercial launch. The right to 180-day exclusivity is given only to the first-to-file paragraph IV applicant. If the first-to-file applicant loses in patent infringement litigation or otherwise forfeits its right to 180-day exclusivity, the right disappears; it is not rolled over to the next-in-line applicant. If multiple firms file ANDAs with paragraph IV certifications on the same day, and no prior ANDA has been filed, the right to generic exclusivity is shared between those firms. The large profits available from 180-day exclusivities have made generic firms more aggressive in their patent challenges.
As Grabowski and Higgins and Graham note, the number of ANDAs containing paragraph IV certifications increased rapidly after the regulatory change: the average number of paragraph IV ANDA filings per year rose from 13 during 1992-2000 to 94 during 2001-2008.

While this increase partly reflects the greater number of blockbuster drugs going generic in the latter period, observers agree that the regulatory change played a significant role. Table 2.1 presents the share of generic markets that were the subject of one or more paragraph IV ANDA filings in a sample of 128 markets that opened up during 1993-2005. As described more fully in Section 2.5, drug markets were selected for inclusion using the following criteria: the drug product contains only one API; of the set of finished formulations containing the same API, the product is the first to experience generic entry; and there is at least one generic entrant in the market. The propensity of paragraph IV challenges suddenly jumps for markets that experienced first generic entry in 1999. This reflects expectations among generic firms that the FDA would give out more 180-day exclusivities following the 1998 court decisions. The share of generic markets with paragraph IV certifications remains high – at around one-half – in the subsequent years. Grabowski comments that the granting of more 180-day exclusivities has, in some cases, turned the generic entry process into a race to be first. Higgins and Graham note that as a result of more aggressive efforts by generic entrants, ANDA filings have come to take place earlier in a drug's lifecycle. Indeed, there have been many markets where multiple generic firms filed their paragraph IV ANDAs exactly four years after the approval of the originator's NDA – that is, on the earliest date allowed by the FDA. Also, Grabowski and Kyle show that drug markets with higher revenue tend to experience generic entry sooner, partly because they tend to be more heavily targeted for paragraph IV challenges. Interestingly, while ANDA filings are being made increasingly early, Grabowski and Kyle find no evidence that generic product launches are occurring earlier in the drug's life cycle in markets that opened up more recently.

This may be because the Hatch-Waxman system has had an unintended side effect. As reported by the Federal Trade Commission and Bulow, the system has been used by some originators, somewhat paradoxically, to delay generic entry through the use of so-called "pay-to-delay" settlements. Given that the existence of a patent challenge turns the generic entry process into a race to be first, econometric analysis of generic firm behavior would ideally be based on a model that takes the timing of entry into account. Unfortunately, the data that I use do not contain accurate information on the timing of entry by each generic firm. Also, I do not observe whether or not each ANDA filing contains a paragraph IV certification because this information is not disclosed by the FDA. On the other hand, the FDA publishes a list of drug markets that were the subject of one or more ANDAs containing a paragraph IV certification. Therefore, it is possible to distinguish between paragraph IV markets and non-paragraph IV markets, and to see if firm behavior differs across the two groups. Our interest in this study is in seeing if paragraph IV patent challenges are associated with generic firms' vertical integration decisions. How might such an association arise? As I argue in Section 2.3, when generic entry involves a race to be first, investments made by upstream API manufacturers tend to become specific to a particular downstream buyer. If contracts between unintegrated upstream suppliers and downstream buyers are incomplete and payoffs are determined through ex post bargaining, this increase in relationship specificity could enhance the role of vertical integration as a way to facilitate investments.
In the empirical analysis, I examine whether the occurrence of paragraph IV certification at the market level is associated with a higher incidence of vertical integration at the firm level.

Before turning to the formal analysis, let us examine the pattern of vertical integration in the generics industry. Figure 2.1 shows how the prevalence of vertical integration at the market level has changed over time. It is based on the sample of 128 markets that opened up between 1993 and 2005. It can be seen that the average number of downstream entrants per market has remained stable at around five. On the other hand, the share of those downstream entrants that are vertically integrated has increased over time. For markets that opened up in the 1993-2000 period, the average share of vertically integrated entrants, as a percentage of the number of downstream entrants, was 8.1 percent. In 2001-2005, the figure rose to 24.1 percent, and the difference between the sub-periods is highly significant. The incidence of vertical integration has similarly risen over time. In each year from 1993 to 2000, an average of 24.0 percent of the sample markets opening up had one or more vertically integrated entrants. For the years 2001-2005, the average share of markets having any vertically integrated entry was 64.6 percent. An interesting fact about the US generic pharmaceutical industry is that it started off as being vertically separated. When the industry began its growth in the 1980s, finished formulation manufacturers procured most of their API requirements from outside suppliers located in Italy, Israel, and other foreign countries.

This was mainly due to differences in patent protection across countries: while strong patent protection in the US made it difficult for domestic companies to develop APIs before the expiration of originator patents, the weak patent regimes in Italy and other countries at the time allowed firms located there to develop generic APIs early. In addition to these historical origins, the nature of the generics business also made vertical separation a natural outcome. Different downstream manufacturers of generic drugs produce near-identical products, because, by definition, they are all bio-equivalent to the original product. Therefore, the APIs manufactured by different upstream firms are also expected to be homogeneous. This implies that, in general, investments in API development by an upstream manufacturer are not specific to a particular downstream user. In other words, the investment-facilitation effects of vertical integration are unlikely to be important in this industry under normal circumstances. This is analogous to Hart and Tirole's observation that the efficiency benefits of vertical integration were unlikely to have been strong in the cement and ready-mixed concrete industries during the 1960s, when the vertical merger wave took place. Nevertheless, as Figure 2.1 demonstrates, vertical integration has become more prevalent over time in the generics industry. Several possible reasons for this can be found in industry reports. One is that early development and procurement of APIs has become more important to the profitability of downstream manufacturers in recent years, particularly in markets characterized by paragraph IV patent challenges. For example, the annual report of Teva, the industry's largest firm, describes the motive for vertical integration as follows: "to provide us with early access to high quality active pharmaceutical ingredients and improve our profitability, in addition to further enhancing our R&D capabilities."
Karwal mentions that "having access to a secure source of API can make a significant difference, particularly relating to difficult-to-develop API, when pursuing a potential Paragraph IV opportunity, and to secure sufficient quantities for development". Similarly, Burck notes that "Access to API and control of the development and manufacturing process to support patent challenges has often been cited as a reason for backward integration". These comments suggest that vertical integration allows downstream manufacturers to obtain APIs sooner than they otherwise would, and that this aids them in attaining first-to-file status in paragraph IV markets. This would partly explain why the increased prevalence of vertical integration appears to have followed closely behind the increase in paragraph IV patent challenges. A second possible cause of increased vertical integration pertains to bandwagon effects. A former purchasing executive at Sandoz, one of the largest firms, mentions that firms vertically integrate to "avoid sourcing API from a competitor". Karwal points out that "Many key API suppliers, especially from India, China and Eastern Europe, are moving up the value chain and decreasing their supply activities, becoming direct competitors in finished form generics". He suggests that this is one of the factors behind increased backward integration by established downstream manufacturers. In the mid-2000s, traditionally unintegrated US firms in the downstream segment began acquiring API manufacturing assets. Examples include the acquisition of Indian API manufacturers by Mylan and Watson, both large US finished formulation companies. It is notable that these actions, by two of the main players in the industry, took place after vertically integrated entry became common.

Agricultural irrigation tail water from flood and furrow irrigation constituted the main water source for all wetlands

In addition to concerns about food safety, microbial pathogens are considered to be among the leading causes of water quality impairment in California agricultural watersheds. Within a watershed, pathogenic bacteria and protozoa from humans, livestock, wildlife, and pets can be found in runoff and can contaminate surface water bodies. Non-point sources of pollution have become the main sources of microbial pollution in waterways, with agricultural activities, including manure application to fields, confined animal operations, pastures, and rangeland grazing, being the largest contributors. Constructed and restored wetlands have been among the few water management options proposed as being available to growers to filter and improve the quality of water in agricultural runoff that contains a wide range of contaminants. Specifically, constructed wetlands have been shown to be highly effective at removing pathogens from water. However, wetlands may also provide habitat for wildlife, including birds, livestock, deer, pigs, rodents, and amphibians, which may in turn vector pathogens that cause human disease. These animals deposit feces and urine within the wetland, an effect that has the potential to negate any benefit from pathogen removal caused by wetland filtering. After past outbreaks of foodborne illness caused by E. coli O157:H7 borne on lettuce and spinach grown in California, some food safety guidelines have encouraged growers to reduce the presence of wildlife by minimizing non-crop vegetation, including wetlands, that could otherwise attract wildlife to farm fields growing fresh produce. In this situation, food safety guidelines may be at odds with water quality improvement measures. Many constructed and restored wetlands in California have been built with support from the USDA-NRCS through the Environmental Quality Incentives Program and the Wetland Reserve Program.
Under these programs, most wetland systems were initially developed to mitigate the loss of wetlands and improve wildlife habitat. A key element of the design of these systems is that they receive agricultural runoff as input flows intended to maintain the wetland's saturated conditions.

In addition to increasing wildlife habitat, the observed water quality improvements linked with these types of wetlands have made them an attractive "best management practice" for irrigated agriculture. Our purpose in writing this publication is to show how wetlands may be used to improve water quality in agricultural settings where pathogens are a matter of concern. In addition, we will discuss wetland design and management considerations that have the potential to maximize pathogen removal and minimize microbial contamination. The following case study highlights the effectiveness of wetlands as a tool to improve water quality and demonstrates the importance of specific design characteristics. A water quality assessment of seven constructed or restored surface flow-through wetlands was conducted across the Central Valley of California. Wetlands differed in such parameters as size, age, catchment area, vegetation type and coverage, and hydrologic residence time. W-1 through W-4, located in the San Joaquin Valley and discharging into the San Joaquin River, were continuous flow wetlands. W-5 through W-7, situated in the Sacramento Valley and discharging into the Sacramento River, were flood-pulse wetlands with a water management regime consisting of flood pulses every 2 to 3 weeks, followed by drainage for 3 to 4 days prior to the next flood pulse. W-2 and W-3 shared the same input water source, and the same was the case for W-5, W-6, and W-7. Several water quality parameters were measured at input and output locations during the growing season to evaluate the systems' ability to improve water quality. Both concentration and load are important considerations when assessing water quality constituents. Concentration represents the mass, weight, or volume of a constituent relative to the total volume of water. Load represents the cumulative mass, weight, or volume of a constituent delivered to some location.
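The concentration/load distinction lends itself to a one-line calculation. The sketch below uses hypothetical numbers (not values from the study) to show why a dilute, high-volume flow can deliver a larger cumulative load than a concentrated, low-volume one.

```python
# Concentration vs. load, with hypothetical numbers.
# Concentration = mass per unit volume of water; load = cumulative mass
# delivered, i.e., concentration integrated over the discharged volume.

def load_kg(concentration_mg_per_L: float, volume_m3: float) -> float:
    """Cumulative load (kg) from a concentration (mg/L) and a water volume (m^3)."""
    # 1 m^3 = 1000 L; 1 mg = 1e-6 kg
    return concentration_mg_per_L * volume_m3 * 1000 * 1e-6

# A dilute but high-volume outflow carries a larger load than a
# concentrated but small one:
dilute = load_kg(2.0, 10_000)      # 2 mg/L over 10,000 m^3 -> 20 kg
concentrated = load_kg(50.0, 100)  # 50 mg/L over 100 m^3   ->  5 kg
assert dilute > concentrated
```

This is why the case study evaluates both quantities: a wetland can lower a constituent's concentration while still passing a substantial load if flows are large.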

The flow-through wetlands were most effective at reducing total nitrogen, total suspended solids, and E. coli loads, and were moderately effective at reducing total phosphorus loads. In many instances, the flood-pulse wetlands were actually a source of contaminants, as indicated in table 2 by the negative numbers they show for removal efficiency. E. coli load in outflows was significantly lower than the inflow load at all flow-through wetlands, while the flood-pulse wetlands showed significant increases in E. coli: decreases of 80 to 95% versus net increases in total E. coli loads, respectively. The differences in contaminant removal for flow-through versus flood-pulse wetlands can be attributed to two factors. First, the input water for the flood-pulse systems was very clean, so any introduced contaminants were readily detectable. The average E. coli concentration for input water was 62 cfu 100 ml−1 in the flood-pulse wetlands, compared to over 200 cfu 100 ml−1 in the flow-through wetlands. Second, the overly long hydrologic residence times of flood-pulse systems can allow contaminants to become more concentrated through the processes of water evaporation, leaching of nutrients from soils and organic matter, and introduction of nutrients and contaminants from feces and urine deposited by wildlife that inhabit the wetlands. Enterococci and E. coli are standard federal- and state-regulated constituents used as indicators of fecal contamination in water. In the flow-through wetlands, approximately 47 percent of water samples collected from irrigation return flows exceeded the EPA recreational contact water standard for E. coli of 126 cfu 100 ml−1. In contrast, E. coli concentration in wetland outflows ranged from 0 to 300 cfu 100 ml−1. Following wetland treatment, 93 percent of wetland outflows met the California water quality standard for E. coli concentration.
For enterococci, 100 percent of the input water samples exceeded the water quality standard of 33 cfu 100 ml−1.
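The removal efficiencies discussed above follow the standard mass-balance definition: the fraction of the incoming load retained by the wetland. The sketch below (hypothetical loads, not the study's data) shows how a wetland that exports more than it receives yields the negative efficiencies reported for the flood-pulse systems.

```python
def removal_efficiency(load_in: float, load_out: float) -> float:
    """Percent removal; negative values indicate the wetland acted as a source."""
    return (load_in - load_out) / load_in * 100.0

# Flow-through wetland with a large E. coli load reduction (hypothetical):
assert removal_efficiency(100.0, 10.0) == 90.0
# Flood-pulse wetland exporting more than it receives -> negative efficiency:
assert removal_efficiency(50.0, 75.0) == -50.0
```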

Despite exceeding the water quality standard, the bacteria levels found here are very low when compared to other contaminated water sources, such as wastewater. Although enterococci removal efficiencies ranged from 86 percent to 94 percent, only 30 percent of the outflow enterococci concentrations met water quality standards. Results from this study indicate that by passing irrigation tail water through wetlands, a grower can significantly reduce the water's pathogen concentration and load, as well as other water quality contaminants common to agricultural settings. Some water quality standards may never be met with wetland filtering alone, especially where the standards require extremely low values, as is the case for enterococci in irrigation water used on farms that grow produce that is intended to be consumed raw. Wetland design and management need to be considered prior to construction and throughout the life of the system. In many cases, the natural mechanisms that promote contaminant removal or retention can be manipulated through careful design, management of hydrology, and maintenance of appropriate vegetation. Natural mechanisms for reducing bacterial pathogens are not fully understood and have received only limited study in irrigated agriculture. Wetlands are known to act as bio-filters through a combination of physical, chemical, and biological factors, all of which contribute to the reduction of bacteria numbers. Where input water has a relatively low concentration, wetland background levels may exceed it, so water passing through the wetland may actually end up with increased pathogen concentrations. As high-energy input flows disperse across the wetland, the water's velocity decreases, and particles that had been suspended in the water settle to the bottom.
The energy needed to support suspended particles in the water flow dissipates as the cross-sectional area of the wetland flow path increases, and vegetation reduces the water's turbulence and velocity. The rate of sedimentation is governed by particle size, particle density, water velocity and turbulence, salinity, temperature, and wetland depth. Larger pathogens tend to settle more quickly than smaller ones. The actual removal of pathogens by means of sedimentation depends on whether the pathogens are free-floating or are attached to particles. Pathogens can be attached to suspended particles such as sand, silt, clay, or organic particulates. Microbial contaminants associated with particles, especially dense, inorganic soil particles, settle out in wetlands sooner than those in the free-floating form. Studies have shown that the rate of pathogen removal is greater in wetlands where the input waters have a high sediment load. Some wetland designs are more prone to encourage wave activity, which prevents sedimentation and encourages re-suspension of settled particulates. High wind velocities promote wave activity. Large, open-water designs are more prone to water turbulence because wind velocity increases over a large, smooth surface. Wetland vegetation can help minimize water turbulence and particle re-suspension. For example, trees planted as wind barriers surrounding the wetland decrease the amount of wind on the wetland. Emergent vegetation within the wetland can anchor sediment with its roots and can dampen the velocity of wind moving across the water surface. Dendritic wetland designs, which consist of a sinuous network of water-filled channels and small, vegetated uplands, can help reduce water turbulence associated with high winds. Vegetative cover has been shown to decrease sediment re-suspension.
For example, Braskerud found that an increase in vegetative cover from less than 20 percent up to 50 percent reduced the rate of sediment re-suspension from 40 percent down to near zero. Wetland depth may also have an indirect effect on sediment retention.
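The settling behavior described above (larger, denser, particle-attached contaminants settle faster) can be illustrated with Stokes' law for small spheres in still water. The parameter values below are illustrative assumptions, not measurements from these wetlands.

```python
def stokes_settling_velocity(d_m: float, rho_p: float,
                             rho_w: float = 1000.0,  # water density, kg/m^3
                             mu: float = 1.0e-3,     # water viscosity, Pa*s
                             g: float = 9.81) -> float:
    """Terminal settling velocity (m/s) of a small sphere in still water (Stokes' law)."""
    return g * d_m**2 * (rho_p - rho_w) / (18.0 * mu)

# Assumed sizes/densities: ~1 um free-floating bacterium vs. the same cell
# attached to a 10 um mineral silt particle.
free_cell = stokes_settling_velocity(1e-6, 1100.0)
silt_bound = stokes_settling_velocity(10e-6, 2650.0)

# The particle-attached cell settles orders of magnitude faster,
# which is why high-sediment inflows show greater pathogen removal.
assert silt_bound > 100 * free_cell
```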

The water should be deep enough to mitigate the effect of wind velocity on the underlying soil surface, but if the water is too deep, vegetation will not be able to establish and a significant increase in re-suspension of sediment will result. Water depths between 10 and 20 inches optimize conditions for plant establishment, decreased water velocity, well-anchored soil, and a short distance for particles to fall before they can settle. An excess of vegetation can significantly reduce a wetland's capacity to retain E. coli. Maximum removal of E. coli occurs under high solar radiation and high temperature conditions, and vegetation provides shading that can greatly reduce both UV radiation and water temperatures. While vegetation can provide favorable attachment sites for E. coli, a dense foliage canopy can hinder the free exchange of oxygen between the wetland and the atmosphere. This vegetation-induced barrier to free exchange of oxygen limits dissolved oxygen levels, and that in turn reduces predaceous zooplankton, further decreasing removal of microbial pathogens from the wetland environment. The plants' uptake of pollutants, including metals and nutrients, is an important mechanism, but is not really considered a removal mechanism unless the vegetation is harvested and physically removed from the wetland. Wetland vegetation also increases the surface area of the substrate for microbial attachment and the biofilm communities that are responsible for many contaminant transformation processes. Shading from vegetation also helps reduce algae growth. However, certain types of vegetation can attract wildlife such as migrating waterfowl, which may then become a source of additional pathogens. Vegetation that serves as a food source or as roosting or nesting habitat for waterfowl may need to be reduced in some settings. Among other important considerations for vegetation coverage in wetlands, one must include total biomass and depth features.
Vegetation should provide enough biomass for nutrient uptake and adsorptive surface area purposes, but must also be managed to allow sufficient light penetration to enable natural photodegradative processes and to prevent accumulation of excessive plant residues and the associated export of dissolved organic carbon. One way to promote this balance is to create areas of deeper water intermixed with the shallower areas. In an agricultural setting, it may be hard to establish plantings of native species within wetlands due to the large seed bank of exotic species that may be present in input waters. You can also manage the type and amount of vegetation by manipulating the timing and duration of periods of standing water in the system. In extreme instances, you can actually harvest excess biomass. In addition to managing vegetation and water depth to maximize sedimentation and pathogen photodegradation, growers can also manipulate hydrology to maximize the removal of microbial pollutants in wetlands. The importance of hydrologic residence time is apparent when you recognize that a longer HRT increases the exposure of bacteria to removal processes such as sedimentation, adsorption, predation, the impact of toxins from microorganisms or plants, and degradation by UV radiation. E. coli concentrations have been shown to increase in runoff from irrigated pastureland when the volume of runoff is increased.
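Nominal hydrologic residence time is conventionally estimated as the wetland's water volume divided by its inflow rate. The sketch below uses hypothetical dimensions to show how a grower might gauge HRT for a wetland in the recommended depth range; actual HRT is shorter where short-circuiting occurs.

```python
def nominal_hrt_days(surface_area_m2: float, mean_depth_m: float,
                     inflow_m3_per_day: float) -> float:
    """Nominal hydrologic residence time (days) = water volume / inflow rate."""
    return surface_area_m2 * mean_depth_m / inflow_m3_per_day

# Hypothetical wetland: 2 ha (20,000 m^2), 0.4 m deep (~16 in, within the
# 10-20 inch range above), receiving 1,000 m^3/day of irrigation tail water.
hrt = nominal_hrt_days(20_000, 0.4, 1000.0)
assert hrt == 8.0  # days of exposure to sedimentation, predation, and UV
```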

One of the most critical factors affecting crop growth rate is the air flow velocity over plants

Stavrakakis et al. investigated the capability of three Reynolds-Averaged Navier-Stokes models to simulate natural ventilation in buildings. Papakonstantinou et al. presented a mathematical model for turbulent flow and accordingly developed a 3-D numerical code to compute velocity and temperature fields in buildings. A novel gas-liquid mass transfer CFD model was developed by Li et al. to simulate the absorption of CO2 in a microporous microchannel reactor. Yuan et al. visualized the air paths and thermal leakages near a complex geometry using a transient thermal model with buoyancy-driven convection, conduction, and thermal radiation heat transfer and the flow field near a vehicle structure. In the context of agriculture, researchers have extensively employed CFD analysis for the study of ventilation, air flow, and microclimate in indoor systems. Zhang et al. developed a CFD simulation to assess single-phase turbulent air streams in an indoor plant factory system and achieved the highest level of flow uniformity with two perforated tubes. Karadimou and Markatos developed a transient two-phase model to study particle distribution in the indoor environment using the Large Eddy Simulation method. Baek et al. used CFD analysis to study various combinations of air conditioners and fans to improve growth rate in a plant factory. More recently, Niam et al. performed a numerical investigation and determined that the optimum position of air conditioners in a small vertical plant factory is over the top. In addition, a variety of mathematical techniques have been proposed to provide sub-models for investigating photosynthesis. Using a CFD model to study the water vapor, temperature, and CO2 distribution in a Venlo-type semi-closed glass greenhouse, Boulard et al. found that tall canopies can induce a stronger cooling of the interior air.
Despite the fact that photosynthesis plays an integral role in the distribution of species and uniformity along cultivation trays, this issue has not been well addressed. Although numerous research works have investigated turbulent flow in enclosures and buildings, this study is the first to numerically investigate the transport phenomena in an IVFS while considering product generation and reactant consumption through photosynthesis and plant transpiration in the CFD simulations.

Furthermore, a newly proposed objective uniformity parameter is defined to quantify velocity uniformity for individual cultivation trays. Moreover, numerical simulations are performed to simulate and optimize fluid flow and heat transfer in an IVFS for eight distinct placements of flow inlets and outlets. Accordingly, the effects of each case on uniformity, relative humidity, temperature, and carbon dioxide concentration are discussed in detail. Finally, an overall efficiency parameter is defined to provide a holistic comparison of all parameters and their uniformity for each case. In this study, three-dimensional modeling of conjugated fluid flow and heat transfer is performed to simulate the turbulent flow inside a culture room having four towers for hydroponic lettuce growth. Assuming that the four towers are symmetric, a quarter of the room with four cultivation trays is selected as the computational domain, as illustrated in Fig. 1a. Symmetry boundaries are set at the middle of the length and width of the room. The effect of LED lights on heat transfer is considered through constant heat flux boundary conditions at the bottom surface of each tray, as shown in Fig. 1b. Lastly, the species transfer due to photosynthesis occurs only in the exchange zone, which is illustrated in Fig. 1c. To study the impact of air inlet/exit locations on the characteristics of air flow, four square areas, denoted as A, B, C, and D in Fig. 1a, are considered to be inlet, exit, or wall. To perform a systematic study, Table 1 presents the location of inlet and exit for all eight cases studied. With the aim of comparing all of the proposed designs, case AB is selected as the baseline. A fluid stream with horizontal speed ranging from 0.3 to 0.5 m s−1 can escalate the species exchange between the flow and plant leaves, resulting in enhancement of photosynthesis.
In indoor farming systems, the flow velocity can be controlled well using ventilation fans for more efficient plant growth.

However, heterogeneous distribution of feeding air over plant trays can cause undesirable non-uniformity in crop production, which should be avoided. Therefore, it is important to study the effect of inlet-outlet location and flow rate on the flow patterns throughout the culture room. Herein, the most favorable condition is defined as the condition at which the flow velocity above all trays is equal to the optimum speed Uo, which is set to be 0.4 m s−1. The objective uniformity, OU, defined in Eq., is used to assess the overall flow conditions. The OU values for all eight cases as a function of mass flow rate are summarized in Fig. 5. Since the inlet/exit area and air density remain the same, the mass flow rate is directly proportional to flow velocity. In addition, the target flow velocity over the plants is set to be 0.4 m s−1. Therefore, as a general trend, OU first increases and then decreases as the overall mass flow rate increases. Depending on the design, the peak of OU occurs at a different mass flow rate for each case. Another general trend is that the peak of OU occurs at a lower mass flow rate if the inlet is located at the top, due to buoyancy force. This is clearly demonstrated by cases AB and BA, or AD and DA. Therefore, there exists a different optimal inlet/exit design for each mass flow rate condition. As can be seen from Fig. 5, the maximum OU at flow rates of 0.2, 0.3, 0.4 and 0.5 kg s−1 is observed for configurations AD, BC, BA, and DA, respectively. Therefore, this simulation model can identify the optimal flow configuration at a specific mass flow rate condition. Since OU quantifies the deviation of the average velocity of each tray from the designed velocity, a higher OU value indicates that the crops will have better and more uniform photosynthesis. It can be observed from Fig. 5 that the maximum OU over all conditions is obtained for case BC at a flow rate of 0.3 kg s−1.
To develop a better understanding, the two-dimensional velocity and vorticity distributions in the x-y plane along the middle of the z-direction for all eight cases at a mass flow rate of 0.3 kg s−1 are plotted in Figs. 6 and 7.
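The paper's equation for OU is not reproduced in this excerpt, so the sketch below is only one plausible form consistent with the description: a score that reaches 100% when every tray's average velocity equals the target Uo and that falls with the mean relative deviation from it. The function name and exact formula are assumptions, not the authors' definition.

```python
# Hypothetical reconstruction of an objective-uniformity-style metric.
# Assumption: OU = max(0, 1 - mean(|U_i - U_o| / U_o)) * 100%, where U_i is
# the average velocity over tray i and U_o is the target speed (0.4 m/s).

def objective_uniformity(tray_avg_velocities, u_target: float = 0.4) -> float:
    """Score in percent; 100% when every tray's mean velocity equals u_target."""
    deviations = [abs(u - u_target) / u_target for u in tray_avg_velocities]
    return max(0.0, 1.0 - sum(deviations) / len(deviations)) * 100.0

# Uniform trays at the 0.4 m/s target score 100%:
assert objective_uniformity([0.4, 0.4, 0.4, 0.4]) == 100.0
# A nearly stagnant top tray drags the score down:
assert objective_uniformity([0.4, 0.4, 0.4, 0.1]) < 85.0
```

Under this form, both undersupplied and oversupplied trays are penalized, which matches the reported trend of OU first rising and then falling with overall mass flow rate.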

As can be observed from Figs. 6 and 7, the OU is highest for case BC due to its uniform velocity and vorticity distributions between trays. This can be attributed to the position of the inlet/exit location with respect to the tray orientation. For case BC, the inlet flow is parallel to the longitudinal direction of the tray and the exit is along the transverse direction. This design allows the flow to travel through the long side of the tray uninterrupted and then form a helical flow orientation near the end of the tray. This spiral formation of flow induces a more uniform and regular flow in the room. This also explains why case AD has a very high OU. A similar spiral formation can also be observed when the inlet flow is parallel to the transverse direction of the tray and the exit is along the longitudinal direction, as in case DA. However, since the inlet flow is along the short side of the tray, the benefit is not as great and requires a much higher inlet mass flow rate. On the other hand, for cases where the inlet and exit are located on the same wall, such as AB or CD, the air flow only has a strong mixing effect along the inlet/exit direction which, in turn, reduces the overall flow uniformity. Besides the velocity distribution, temperature is also a critical parameter for determining convective flow. Fig. 8 shows the two-dimensional temperature distributions in the x-y plane along the middle of the z-direction for all eight cases at a mass flow rate of 0.3 kg s−1. In our analysis, the temperature of the inlet flow is lower than that of the exit flow due to the heat generated from the LED lights. For case BC, the inlet is located near the bottom and the exit is near the top. Due to the density difference, the warm exit stream tends to flow up. This allows the flow to reach the topmost tray more easily and, therefore, achieves a more uniform temperature distribution among all trays.
Combining the inlet flow along the long side of the tray, the helical flow effect, and the buoyancy, case BC is able to reach the maximum OU of 91.7%. Fig. 9 summarizes the velocity and temperature contours for case BC at an inlet mass flow rate of 0.3 kg s−1. The velocity profiles in Fig. 9a clearly show the spiral effect above each cultivation tray, and the local velocity is close to the optimal speed of 0.4 m s−1. In addition, the temperature shows an increasing trend from bottom to top as the flow passes helically through the crops and moves towards the outlet. The distributions of temperature and gas species, such as water vapor and CO2, play an integral role in photosynthesis which, in turn, influences the quality of the plant and its growth. Therefore, maintaining these critical parameters in a reasonable range to ensure reliable and efficient production is essential to environmental control of an IVFS. Evaluating the distribution of these parameters can also indicate the effectiveness of the inlet/exit location. It should be noted that the parameter OU provides an overall assessment of the air flow velocity over planting trays. An optimal design achieves the desired local temperature and species distribution while maintaining high OU values in an IVFS. In the following discussion, the four cases with the highest values of OU at their corresponding mass flow rates are studied and compared to the baseline case AB. Since CO2 is a reactant of photosynthesis, increasing CO2 concentration usually leads to enhancement of crop production. Reports show that increasing the CO2 concentration from the atmospheric average of 400 ppm to 1500 ppm can increase the yield by as much as 30%. In this IVFS analysis, the CO2 level of the inlet mass flow is increased by a CO2 generator to 1000 ppm. Since the consumption rate of CO2 through the exchange zones is fixed, a higher overall average CO2 concentration through the system is desirable. Fig. 10 shows the comparison of the average CO2 concentration between the highest-OU cases and the baseline case AB at different inlet mass flow rates. A few general trends of CO2 concentration can be observed from Fig. 10. First, the CO2 concentration increases with inlet flow rate due to the increasing supply of CO2 molecules. In addition, tray 1 has the highest CO2 concentration because most of the cold fresh inlet air dwells near the bottom of the IVFS due to the buoyancy effect. In contrast, tray 3 has the lowest CO2 concentration because the fresh inlet air has the highest flow resistance to reach tray 3 due to the combination of sharp turns and the buoyancy effect. This is particularly true at low inlet flow rates and when the inlet is located on the top, which leads to low flow circulation as cold inlet air flows downward directly. As a result, BC, BA, and DA at 0.3, 0.4, and 0.5 kg s−1, respectively, have relatively high CO2 concentrations. Even though the baseline case AB at 0.5 kg s−1 has the highest CO2 concentration, its OU is too low to be considered a good design. Temperature is also a critical parameter to control and monitor because it directly affects both relative humidity and plant growth. The temperature distribution in the system depends on the inlet/exit location, inlet mass flow rate, and amount of heat. Since the inlet temperature and heat flux conditions are fixed, the exit temperature increases with decreasing inlet mass flow rate. Fig. 11 shows a comparison of the average temperatures of the higher-OU cases and the baseline case AB at different inlet mass flow rates, and Fig. 12 compares the average RH over each tray between the best-OU cases and the baseline case at each inlet mass flow rate condition.
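The statement that exit temperature rises as inlet mass flow rate falls follows directly from a steady-state energy balance on the room: the fixed LED heat load must be carried away by the air stream. The sketch below uses an assumed heat load (not a value from the paper) to illustrate the inverse relationship.

```python
def exit_temperature_C(t_in_C: float, heat_W: float, m_dot_kg_s: float,
                       cp: float = 1005.0) -> float:
    """Steady-state exit air temperature: T_out = T_in + Q / (m_dot * cp).

    cp is the specific heat of air at constant pressure (~1005 J/(kg*K)).
    """
    return t_in_C + heat_W / (m_dot_kg_s * cp)

# Assumed total LED heat load of 603 W and 20 C inlet air:
t_high = exit_temperature_C(20.0, 603.0, 0.3)   # higher flow -> smaller rise
t_low = exit_temperature_C(20.0, 603.0, 0.15)   # halving flow doubles the rise
assert round(t_high, 2) == 22.0
assert round(t_low, 2) == 24.0
```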

Object/mouse side placement was counterbalanced between trials

Statistical analyses. Results are expressed as means ± SEM. Significance was determined using two-tailed Student's t-test, or one-way or two-way analysis of variance with Tukey's post-hoc test, and differences were considered significant if P<0.05. Analyses were conducted using GraphPad Prism. The use of marijuana is reinforced through activation of the mesolimbic reward circuit. In a related but distinct modulatory process, the neurotransmitter system mediating the effects of marijuana in the brain – the endocannabinoid system – also facilitates the reward of other stimuli, such as food or drugs of abuse. The endocannabinoid system has three main components: two lipid-derived local messengers – 2-arachidonoyl-sn-glycerol and anandamide – enzymes and transporters that mediate their formation and elimination, and receptors that are activated by endocannabinoids and regulate neuronal activity. Genetic and pharmacological studies have unveiled key roles of the CB1 receptor in the modulation of reward signaling. Less is known about the functions served by individual endocannabinoid messengers. In particular, an emerging question is whether endocannabinoids might also regulate the reward of social interactions. We recently demonstrated that anandamide regulates social reward via cooperative signaling with oxytocin, a neuropeptide that is crucial for social bonding. The role of 2-AG remains unknown, however. One way to assess the specific contribution of individual endocannabinoids is to manipulate the enzymes responsible for their formation and deactivation.

For example, pharmacological inhibition or genetic deletion of the enzyme that hydrolyzes anandamide, fatty acid amide hydrolase, markedly increases anandamide activity at CB1 cannabinoid receptors. Analogous strategies exist for 2-AG: MGL-/- mice, in which the 2-AG-hydrolyzing enzyme monoacylglycerol lipase is deleted, have elevated 2-AG levels, while DGL-α-/- mice, in which the 2-AG-synthesizing enzyme diacylglycerol lipase is deleted, have very low 2-AG levels. However, the effects of these radical modifications are often difficult to interpret because of the emergence of profound compensatory changes in the brain, such as desensitization of CB1 receptors and elevation in anandamide and arachidonic acid levels. We have recently generated a novel transgenic mouse model – MGL-Tg mice – which selectively overexpress MGL in forebrain neurons under the control of the CaMKIIα promoter. These mutant mice display a forebrain-selective accrual in MGL hydrolyzing activity and a 50-75% decrement in 2-AG content. This reduction in 2-AG is not accompanied by overt changes in levels of other endocannabinoid-related lipids, cannabinoid receptors, or other endocannabinoid-related proteins. To investigate the role of 2-AG in reward-related behaviors, we tested MGL-Tg mice in conditioned place preference paradigms for high-fat food, social, or cocaine stimuli. Based on a rich theoretical framework, CPP assesses the rewarding value of test stimuli by pairing them with neutral environments. Because less is known about endocannabinoid signaling and social behavior, we also investigated the effects of social interaction on 2-AG signaling in reward-related regions of the brain. We hypothesized that MGL-Tg mice are deficient in reward signaling and that rewarding social stimuli drive 2-AG signaling in normal mice. Socially conditioned place preference. Procedures were previously described. Briefly, mice were placed in a two-chambered acrylic box.

A 30-min preconditioning session was used to establish baseline neutral preference to two types of autoclaved, novel bedding. These differed in texture and shade. Individual mice with strong baseline preference for either type of bedding were excluded – typically, those that spent more than 1.5x time on one bedding over the other. The next day, animals were randomly assigned to a social cage with cage-mates to be conditioned to one type of novel bedding for 24 h, then moved to an isolated cage with the other type of bedding for 24 h. On the next day, animals were tested alone for 30 min in the two-chambered box to determine post-conditioning preference for either type of bedding. Bedding volumes were 300 mL in each side of the two-chambered box and 550 mL in the home cage. Familiar animals from the same cage were tested concurrently in four adjacent, opaque CPP boxes. Between trials, boxes were thoroughly cleaned with SCOE 10X odor eliminator. Scoring was automated using a validated image analysis script in ImageJ. Cocaine and high-fat diet CPP. Procedures were previously described. Briefly, these paradigms were similar to social CPP, including unbiased and counterbalanced design, cleaning and habituation, exclusion criteria, and scoring, except for the following main differences, which followed reported methods. Mice were conditioned and tested in a two-chambered acrylic box. Pre- and post-conditioning tests allowed free access to both chambers and had durations of 15 min and 20 min, respectively. For conditioning, animals underwent 30-min sessions alternating each day between saline/cocaine or standard chow pellet/high-fat pellet. The two chambers offered conditioning environments that differed in floor texture and wall pattern – sparse metal bars on the floor and solid black walls vs. dense-wire-mesh floors and striped walls. Sparse metal bars allowed for paw access to the smooth acrylic floor, whereas dense-wire mesh did not.
For high-fat diet CPP, animals were given one pellet of standard chow and an isocaloric amount of high-fat food. As high-fat pellets have a different color and consistency, they were also given to home cages the day before pre-conditioning to prevent neophobia. Intake of high-fat pellets was recorded in free-feeding mice using an automated monitoring system, as described previously. Food intake was measured for two days, and the average intake was normalized to the body weight at the start of feeding. The test was conducted according to established methods.
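CPP scoring conventions vary between labs; one common metric is the shift in time spent on the stimulus-paired side from the pre-conditioning to the post-conditioning test, combined with a baseline-bias exclusion like the 1.5x criterion described above. The sketch below uses hypothetical session data, not values from this study.

```python
def cpp_score_s(pre_paired_s: float, post_paired_s: float) -> float:
    """Preference shift (s): positive values indicate conditioned preference
    for the stimulus-paired side."""
    return post_paired_s - pre_paired_s

def excluded_at_baseline(time_a_s: float, time_b_s: float,
                         ratio: float = 1.5) -> bool:
    """Baseline-bias exclusion: more than `ratio`x time on one side over the other."""
    longer, shorter = max(time_a_s, time_b_s), min(time_a_s, time_b_s)
    return longer > ratio * shorter

assert cpp_score_s(400.0, 550.0) == 150.0   # shifted toward the paired side
assert excluded_at_baseline(700.0, 300.0)    # >1.5x baseline bias -> excluded
assert not excluded_at_baseline(480.0, 420.0)  # near-neutral baseline kept
```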

To mimic the conditions of the social CPP task, mice were first isolated for 30 min and tested in dim-light conditions. Pairs of mice were tested in an open-field arena for 5 min. Scoring for social interaction time included behaviors such as sniffing, following, grooming, mounting and crawling over or under. Passive interaction, in which mice were in close proximity but without these interactions, was not included in the scoring.

The procedure was previously described and was based on an established protocol. Briefly, test mice were first habituated to an empty three-chambered acrylic box, first to the center chamber for 10 min and then to all chambers for 10 additional min. Mice were then tested for 10 min. Subjects were offered a choice between a novel object and a novel mouse in opposing side chambers. The novel object was an empty inverted pencil cup and the novel social stimulus mouse was a sex-, age- and weight-matched 129/SvImJ mouse. These mice were used because they are relatively inert, and they were trained to prevent erratic or aggressive behaviors, such as biting the cup. Weighted cups were placed on top of the pencil cups to prevent climbing. Low lighting was used. The apparatus was thoroughly cleaned with SCOE 10X odor eliminator between trials to preclude olfactory confounders. Chamber-time scoring was automated using image analysis. Sniffing time was scored by trained assistants who were unaware of treatment conditions. Outliers in inactivity or side preference were excluded.

The procedure was previously described. Briefly, whole brains were collected and flash-frozen in isopentane at -50 to -60 °C. Frozen brains were transferred to a cryostat at -20 °C and kept for 1 h to equilibrate to the local temperature. The brain was then cut to the desired coronal depth, and micropunches from bilateral regions of interest were collected using a 1 × 1.5-mm puncher. The micropunches weighed approximately 1.75 mg.
A reference micropunch was taken to normalize each punch to the brain's weight. Bilateral punches were combined for lipid analyses.

Procedures were previously described. Briefly, tissue samples were homogenized in methanol containing internal standards for ²H₄-anandamide, ²H₄-oleoylethanolamide and ²H₈-2-arachidonoyl-sn-glycerol. Lipids were separated by a modified Folch-Pi method using chloroform/methanol/water and open-bed silica column chromatography. For LC/MS analyses, we used an 1100 liquid chromatography system coupled to a 1946D mass spectrometer detector equipped with an electrospray ionization interface. The column was a ZORBAX Eclipse XDB-C18. We used a gradient elution method as follows: solvent A consisted of water with 0.1% formic acid, and solvent B consisted of acetonitrile with 0.1% formic acid. The separation method used a flow rate of 0.3 mL/min. The gradient was 65% B for 15 min, then increased to 100% B in 1 min and was kept at 100% B for 14 min. The column temperature was 15 °C. Under these conditions, Na⁺ adducts of anandamide/²H₄-anandamide had retention times of 6.9/6.8 min and m/z of 348/352, OEA/²H₄-OEA had Rt 12.7/12.6 min and m/z 326/330, and 2-AG/²H₈-2-AG had Rt 12.4/12.0 min and m/z 401/409. An isotope-dilution method was used for quantification.

MGL-Tg mice eat less chow than do their wild-type littermates. The food-intake phenotype, however, does not dissociate the effects of 2-AG signaling on metabolic and reinforcement processes.
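The single-point isotope-dilution calculation used for quantification scales the analyte-to-internal-standard peak-area ratio by the known amount of deuterated standard spiked into each sample. The sketch below is a minimal illustration; the spiked amount and the unit response factor are assumptions, not values from the study.

```python
def isotope_dilution_amount(area_analyte: float, area_standard: float,
                            standard_pmol: float,
                            response_factor: float = 1.0) -> float:
    """Estimate the analyte amount from the analyte/internal-standard
    peak-area ratio and the known amount of deuterated standard spiked in.
    response_factor = 1 assumes the analyte and its 2H-labeled standard
    ionize identically, the usual isotope-dilution premise."""
    return (area_analyte / area_standard) * standard_pmol / response_factor

# e.g. an anandamide peak area of 4.2e5 vs. a 2H4-anandamide area of 2.1e5,
# with 10 pmol of standard spiked in -> ~20 pmol anandamide in the punch
print(isotope_dilution_amount(4.2e5, 2.1e5, 10.0))  # 20.0
```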

Furthermore, altered feeding can be interpreted as either decreased or increased reward from the food stimulus. To isolate the effects of reduced 2-AG signaling on reward, we tested MGL-Tg mice and their wild-type littermates in a CPP task for high-fat food. In a standard CPP box, mice were conditioned in 30-min sessions to either standard chow or isocaloric high-fat food for 6 sessions each, alternating over 12 days total. In WT mice, we found that this conditioning protocol was sufficient to elicit a preference for the high-fat-paired chamber during post-conditioning testing: animals spent 137 seconds more in the high-fat chamber than in the standard-chow chamber. In contrast, MGL-Tg mice did not develop a preference for either chamber. This result suggests that 2-AG signaling is involved in conditioned reward processes for high-fat food. We then asked whether this role for 2-AG signaling could extend to the reward produced by social interaction. We conditioned mice for 24 h with cage-mates in their home cage to one type of bedding, then conditioned them for 24 h in isolation to another bedding. In the post-conditioning test in a standard CPP box, we found that this conditioning was sufficient to elicit a preference in WT mice for the social bedding. In contrast, MGL-Tg mice did not develop a preference for either bedding. Together with the high-fat-food CPP results, these results suggest that 2-AG signaling may underlie aspects of reward processes common to both natural stimuli.

The lack of CPP can be attributed to impairments in the generation and processing of the reward, in the consolidation of the memory for the reward, or in a combination of these processes. To evaluate whether high-fat food stimuli are generated and processed properly, we measured initial intake of high-fat pellets over 2 days. MGL-Tg mice showed a 16% reduction in normalized intake compared to WT littermates over this period.
The combined phenotype of MGL-Tg mice, a lack of CPP together with decreased intake, suggests that 2-AG plays a role in the generation and processing of high-fat food reward. Strict interpretation of these results, however, may be complicated by the role of 2-AG in energy metabolism. For the same reason, we also examined the direct social activity and the social approach interest of MGL-Tg mice using the social interaction test and the three-chambered social approach test, respectively. These tests differ in two key ways: the social interaction test evaluates interactions that are reciprocal and direct, whereas the social approach test measures approach activity toward a stimulus mouse that is sequestered in an inverted wire cup; and the social interaction test uses familiar cage-mates, whereas the social approach test uses a novel mouse as a stimulus. In the social interaction test, we observed that MGL-Tg mice trended toward less interaction time, but this result was not significant. In the social approach test, we found that both MGL-Tg and WT mice preferred the social chamber over the object chamber and sniffed the stimulus mouse more than the object. MGL-Tg mice were similar to WT mice in the amount of time spent in the social chamber and in time spent sniffing the stimulus mouse.

Federal and state regulations for anticoagulant rodenticide usage are specific for both generations

In addition, there are stark differences in acute LD50 doses among genera: minute amounts of brodifacoum bait caused death in domestic canids, but domestic felids required doses 5 to 40 times higher. The same variability seen in both mustelids and other carnivores suggests that predicting clinical thresholds for fishers would be premature. Furthermore, AR-exposed fishers had an average of 1.6 AR types within their systems, and the possible interaction effects of a combination of 2 or more AR compounds within a fisher or other species are entirely unknown.

Spatial analyses did not reveal any obvious point sources of AR exposure. Instead, these analyses suggested that exposure is widespread across the landscape. Previous studies expected that exposure to AR compounds would be clustered near areas of human activity or habitation and that exposure would be uncommon outside these areas. Data from this study refuted this hypothesis, making the finding all the more significant. Furthermore, these exposures occurred within a species that is not closely affiliated with the urban, peri-urban or agricultural settings with which second-generation ARs typically are associated. Before the June 2011 Environmental Protection Agency regulations, second-generation ARs could be purchased at local retailers, with recommendations for placement in weather- and tamper-resistant bait containers no more than 50 feet from any building. Since June 2011, however, second-generation ARs have not been available to consumers at retail, but only at agricultural stores and with additional form and weight restrictions.

These newly passed regulations are aimed at further restricting irresponsible and illegal use of ARs. However, under either the pre- or post-June 2011 regulations, we would have expected second-generation-AR-exposed fishers to overlap with urban, peri-urban, or agricultural environments. This pattern is acknowledged in several studies, such as Riley et al., where total quantified levels of AR exposure in bobcats and mountain lions were associated with human-developed areas. Numerous studies have documented that secondary poisoning cases are closely associated with recent agricultural or urban pest eradication efforts. The majority of the habitat that fishers in California, and fishers throughout the DPS, currently and historically occupied is not within or near agricultural or urban settings. Several exposed fishers had been monitored their entire lives and inhabited public or community lands where human structures are rare or non-existent. Therefore, exposure from first- or second-generation AR use at or within 50 feet of residential or agricultural structures and settings was considered unlikely, given fisher habitat requirements and the species' general lack of association with humans. This suggests that widespread non-regulated use of second-generation ARs is occurring within the range of fishers in California, especially on public lands. A likely source of AR exposure for fishers is the emerging spread of illegal marijuana cultivation on California public and private lands. In 2008, in California alone, over 3.6 million outdoor marijuana plants were removed from federal and state public lands, including state and national parks, with thousands of pounds of both pesticides and insecticides found at grow sites.
In 2011, a three-week eradication operation removed over 630,000 marijuana plants and 23,316 kg of trash, including 68 kg of pesticides, from the Mendocino National Forest within the northern California fisher population's range. Anticoagulant rodenticides and pesticides are typically dispersed around young marijuana plants to deter herbivory, but significant amounts of AR compounds are also placed along the plastic irrigation lines used to draw water, in order to deter rodent chewing.

A recent example, in which over 2,000 marijuana plants were removed less than 12 km from one of the project areas, revealed that plants on the peripheral edges as well as nearby irrigation lines had large amounts of second-generation AR placed around them. Finally, within just a single eradication effort, multiple kilometers of irrigation line within National Parks and Forests in California were removed. Placement of ARs at grow sites and along irrigation lines, which can extend great distances from the grow site itself, may explain why there are no defined clusters of AR exposure. It is noteworthy that the AR fisher mortalities we documented occurred in different areas of their California range but within a relatively short seasonal period between mid-April and mid-May. We cannot specify the exact explanation for, or source contributing to, all AR mortalities that occurred within this short temporal period. This period is when females are providing for offspring and males are searching for mates; preliminary spatial data for fishers in California document that females have more confined home ranges during this period, while males have slightly larger home ranges. Additionally, several books available to the general public identify mid to late spring as the optimal time for planting marijuana outdoors, and seedlings are especially vulnerable to rodent pests. Of additional concern is that April to May is the denning period for female fishers and a time when fisher kits are entirely dependent on their mothers. The documented mortality of a lactating female attributed to AR toxicosis during this period suggests that kits would most likely be abandoned and die following maternal mortalities at this time.
In conclusion, this study has demonstrated that fishers in the western DPS, which are of conservation concern and a candidate for protection under the Endangered Species Act, are not only being exposed to ARs, but that ARs are a cause of both direct and indirect mortality in both of California's isolated populations. Consequently, these toxicants may not only pose a direct mortality risk to fishers but could also pose significant indirect risks by depleting the rodent prey populations upon which fishers depend.

The lack of spatial clustering of exposed individuals suggests that AR contamination is widespread within this species' range and that illegal or irresponsible use of ARs continues despite recent regulatory changes. Because we do not know the long-term ecological ramifications of these toxicants when left on site long after marijuana grows are dismantled, heightened efforts should be focused on removing these toxicants from these and adjacent areas at the time of dismantling. Further regulation restricting the use of ARs to pest management professionals, as well as continued public outreach through statewide Integrated Pest Management programs, may be warranted. In addition, promotion of compounds that do not possess the propensity for secondary poisoning should be considered for non-professional use settings. Furthermore, ARs in these habitats may pose equally grave risks to other rare and isolated California carnivores such as the Sierra Nevada red fox, American marten, wolverine and gray wolf, or raptors such as northern spotted owls, California spotted owls and great gray owls. Future research should be directed at investigating potential risks to prey populations as well as other sympatric species, to allow a better understanding of the potential AR sources contributing to these exposure and mortality rates.

Edge detection is the first step of human visual perception and is fundamentally important in the human visual system. Edge detection significantly reduces the amount of data to be processed, since it extracts meaningful information and preserves important geometric features. To detect the edges of an object, the object information is processed by either digital computation or analog computation.
In practice, as an optical analog computing element, a spatial differentiator enables massively parallel edge detection over an entire image, which offers advantages over digital computation: it can handle real-time, continuous image processing at high speed and is power-efficient in specialized computational tasks. During the past few years, optical metamaterials and metasurfaces have been suggested to perform analog spatial differentiation for edge detection, showing superior integration capability compared with traditional bulky systems comprising lenses and spatial filters. A suitably designed metamaterial structure was theoretically proposed to perform desired mathematical operations, including edge detection, as light propagates through it. A deliberately designed layered structure was also suggested for spatial differentiation when an incident beam is reflected from it. Plasmonic dark-field microscopy utilizes near-field surface plasmon waves to excite the object and can also be treated as an efficient approach for edge detection. However, to the best of our knowledge, free-space broadband edge detection has not been reported yet, because either the system can only be applied to surface imaging or the fabrication involved is too complicated. Here, we propose a mechanism to implement an optical spatial differentiator consisting of a designed Pancharatnam–Berry (PB)-phase metasurface inserted between two orthogonally aligned linear polarizers. Unlike other spatial-differentiator approaches, our method does not depend on complex layered structures or a critical plasmonic coupling condition, but instead is based on spin-to-orbit interactions. Experiments confirm that broadband optical analog computing enables the edge detection of an object and achieves tunable resolution at the resultant edges. Furthermore, metasurface-orientation-dependent edge detection is also demonstrated experimentally. As shown in Fig.
2A, the sample is made of form-birefringent nanostructured glass slabs. The diameter of the glass substrate is 2.5 cm, the thickness is 3 mm, and the patterned area of the sample is 8 mm by 8 mm. The metasurface pattern is fabricated by a femtosecond pulsed laser inside the glass, 50 μm beneath the surface. Under intense laser irradiation, a plasma of high free-electron density is generated by a multi-photon ionization process. The interference between the plasma and the incident light beam leads to the stripe-like nanostructure, as previously reported. By carefully controlling the polarization of the incident beam, the desired orientation of the nanostructure, which is perpendicular to this polarization, can be obtained. More fabrication details can be found in our previous work. We utilized the polariscopy method to characterize the local optical slow-axis orientation of this birefringent structure.

As shown in Fig. 2B, crossed-linear-polarizer imaging under 80× magnification reveals the transverse gradient pattern of the optical axis, which corresponds to the dotted square area in Fig. 2A. We can clearly see the local orientation of the microscopic structures, i.e., the slow-axis distribution φ(x, y) of the laser-induced form birefringence. The red bars indicate the orientation of the slow axis over one period of this sample. The nanostructures are on the order of 30–100 nm, as indicated by the scanning electron microscope image in Fig. 2B, Inset. The structure dimension is much smaller than the working wavelength; therefore, we can treat the sample as a birefringent medium with a spatially variant optical slow axis. When the light beam passes through this designed inhomogeneous birefringent medium, with locally varying optical-axis orientations and homogeneous retardation, it acquires a spatially varying PB phase.

The first lens yields the Fourier transform of the object at its back focal plane, which is exactly the position of the metasurface. In turn, the second lens performs another Fourier transform, delivering a duplicate of the object. When the light passes through the 4f system, we obtain two vertically shifted LCP and RCP images whose overlapping area is linearly polarized, as shown in Fig. 3 A–C. The shift between the two images is difficult to see because of the small phase gradient of the metasurface. To block the overlapping area while preserving the circularly polarized edges, we put an analyzer after the metasurface so that only the edges pass through, as displayed in Fig. 3 D–F. The wavelengths were chosen as 430, 500, and 670 nm, which not only confirms the proposed concept of edge detection but also demonstrates its broadband capability. The broadband property of our metasurface originates from the geometric phase of the nanostructure orientation, which is intrinsically independent of wavelength.
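At the intensity level, the effect of the crossed analyzer can be mimicked numerically: the LCP and RCP copies of the image are displaced in opposite directions, and the analyzer passes only their difference, which vanishes in the uniform overlap region and survives at the edges. The scalar NumPy sketch below is a toy model of this behavior, not a vectorial diffraction simulation; the one-pixel shift stands in for the small displacement set by the metasurface phase gradient.

```python
import numpy as np

def edge_image(obj: np.ndarray, delta: int) -> np.ndarray:
    """Difference of two oppositely shifted copies of the object,
    mimicking the crossed-analyzer output of the 4f edge detector."""
    lcp = np.roll(obj, +delta, axis=0)  # one circular component, shifted down
    rcp = np.roll(obj, -delta, axis=0)  # the other component, shifted up
    return np.abs(lcp - rcp)            # overlap cancels; edges survive

obj = np.zeros((64, 64))
obj[20:44, 20:44] = 1.0                 # bright square on a dark background

edges = edge_image(obj, delta=1)
# Interior and background cancel; only the two horizontal boundaries remain.
print(edges[30, 30], edges[19, 30])     # 0.0 1.0
```

Increasing `delta` thickens the detected edges, mirroring the tunable resolution obtained by changing the phase-gradient period.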
Additionally, the transfer function of the whole edge-detection system was experimentally measured and is provided in SI Appendix, Fig. S3, which shows a typical response for the edge-detection function. We also demonstrate the tunable resolution of the edge images corresponding to different PB phase gradient periods Λ. For this experiment, we chose the UCSD Triton insignia as our object. Fig. 4 A–D shows photos of four metasurfaces with Λ equal to 500, 750, 1,000, and 8,000 μm. Fig. 4 E–H shows the corresponding polariscope optical images of the metasurfaces in the first row, which contain different numbers of periods within the same field of view.
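The dependence of edge resolution on Λ follows from the Fourier-shift theorem: a linear phase of period Λ applied at the Fourier plane of a 4f system displaces each circular component in the image plane by roughly λf/Λ, so a smaller Λ yields a larger splitting and thicker edges. The short calculation below illustrates this scaling for the four fabricated periods; the 100-mm focal length is an assumed value for illustration, not taken from the experiment.

```python
# Image-plane splitting of the two circular components for a linear
# PB phase of period Lambda at the Fourier plane: shift ~ lambda * f / Lambda.
wavelength_um = 0.5          # 500 nm illumination (one of the tested wavelengths)
focal_len_um = 100_000.0     # assumed 100 mm Fourier-lens focal length

for period_um in (500.0, 750.0, 1_000.0, 8_000.0):
    shift_um = wavelength_um * focal_len_um / period_um
    print(f"Lambda = {period_um:7.0f} um -> splitting ~ {shift_um:6.2f} um")
```

Under these assumed values, going from Λ = 8,000 μm to Λ = 500 μm increases the splitting sixteen-fold, consistent with the coarser edges observed for smaller periods.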

Absentee owners were also more likely to be concerned that growers were taking over public land

We provided 36 statements corresponding to four themes: community; the environment; changes over time (in property values, community safety, community demographics and so on); and grower demographics. Respondents were asked to agree or disagree with the statements using a 5-point Likert scale and were able to provide comments after each subsection. The third section of the survey solicited background information about each respondent. Respondents were asked whether they earned income from timber, ranching or dairying, how long their families had owned the land they worked and whether they were absentees. In addition, we asked landowners if they had been approached about selling their land for cannabis cultivation and if they had next-generation succession plans for the family ranch or timber business. We also asked if landowners knew of nearby cannabis growing.

As indicated previously, all respondents included in our survey owned at least 500 acres of land. Twenty-two percent owned between 500 and 1,000 acres, 51% owned between 1,000 and 5,000 acres and 28% owned more than 5,000 acres. Of the 69 landowners whose responses were included in our results, 63 respondents managed timberland and 56 respondents managed ranchland, meaning that most respondents managed both land types; only one respondent was involved in dairy farming. Forty-six percent of respondents lived on their properties full time, while 20% lived on their properties part time. Thirty-three percent of respondents were absentee landowners. In general, the land represented in the survey had been in respondents’ families for a long time: more than 50 years in 81% of cases, 25 to 50 years in another 10%, less than 25 years in 6% and less than 5 years in only 3%.

Fifty percent of respondents reported that their primary income was from traditional forms of agriculture or timber production; no respondents reported cannabis as their primary income source.

Seventy-one percent of landowners reported that they did not grow cannabis on their property, while 18% reported that they did. These percentages, however, are derived only from the 34 of 69 respondents who agreed or disagreed with the statement that they had used their property to grow cannabis. The remaining respondents, half the total, chose not to indicate whether they had grown cannabis, potentially indicating landowners’ reluctance to associate themselves with the cannabis industry. About 40% of respondents had indirectly profited from cannabis through off-farm work such as heavy equipment work, trucking and so on. Fifty-seven percent of all respondents agreed or strongly agreed with the statement that “the cannabis industry has negatively affected my livestock operations,” while 27% disagreed with this statement. Over 60% of respondents agreed that cannabis had increased the cost of labor. Comments that respondents offered on costs included “Property values are inflated by the cannabis industry, hence costing us more for leases and ownership.”

Seventy-five percent of respondents agreed or strongly agreed with the statement that “shared roads have been degraded by cannabis growers” and 65% agreed that noise pollution has increased due to cannabis growing. Fifty-five percent of respondents agreed that growers increase light pollution and 71% reported having experienced illegal garbage dumping by cannabis growers on or near their property. Forty percent of landowners disagreed or strongly disagreed with the statement that “I know growers who have values that align with my own.” At the same time, 34% of respondents agreed or strongly agreed with that statement. One respondent added that “[M]onetary impact is obvious.

Cultural and moral impacts are terrible.”

Fifty-six percent of respondents agreed or strongly agreed that water sources have been impacted by cannabis growers, while 25% disagreed with this statement. Fifty-six percent also agreed that water had been stolen from their property. Seventy-two percent of respondents had experienced trespassing, while 20% had not. Forty percent of respondents reported that their fencing or infrastructure had been destroyed by cannabis growers, though a similar percentage had not. Fifty percent of landowners reported that neighboring growers had failed to assist with fence maintenance, and 75% of landowners reported having discovered trespass grows on their property. One respondent added: “[Growers’] dogs killed our cattle. My brother confronted a grower in fatigues carrying an assault rifle on our property. [Our] fences have been wrecked, roads damaged, and stream water theft.” Another respondent wrote, “Yes, this is true in the past, but with the pot market collapsing I don’t think this will be a problem in the future.”

Roughly 55% of landowners reported having been threatened by cannabis growers’ dogs, while 24% had not. Forty-six percent of landowners reported that their safety had been threatened by growers. Equal proportions of landowners reported, and did not report, having felt unsafe due to interactions with growers on public lands. Finally, 50% of landowners agreed that growers had committed crimes against them or their property.

Perceptions of cannabis growers were relatively unified among survey respondents. A majority of respondents did not perceive growers as having values similar to their own. The majority of landowners felt that growers had changed how it feels to live in their community, and 77% of landowners expressed concern about the changes that growers are bringing to their community.
More than 80% of respondents were concerned about growers taking over working lands in their communities, and the same percentage were concerned that growers reduce the influence of timber managers and ranchers in the community.

One respondent wrote that “The bottom line is that our family would accept the negative economic impact of eliminating ‘pot’ in return for the elimination of all the negative impacts of the grower culture.” More than 90% of respondents agreed that growers from urban locations do not understand rural land management. Most landowners disagreed that growers are reinvigorating their rural communities or that growers are the only thing keeping their communities going. Eighty-three percent of respondents disagreed with the statement that growers do a good job of policing themselves. Most landowners have not changed their views on cannabis with medical or recreational legalization.

The clear majority of respondents did not think cannabis growers manage timberlands sustainably, and a similar percentage felt the same about ranchlands. Eighty-five percent of respondents regarded cannabis growing as negatively affecting wildlife and 87% regarded it as negatively affecting stream flow. Eighty-four percent thought cannabis growing leads to soil erosion and 70% thought it increases fire hazard. Seventy-eight percent believed that cannabis production in ranchlands and timberlands leads to habitat fragmentation, and the same percentage suggested that the economic value of cannabis incentivizes the subdivision of large parcels.

Fifty percent of landowners felt that their property value had increased due to cannabis production, while 40% were neutral on that question. Eighty-three percent of respondents thought that Humboldt County was a safer place before cannabis and 76% of respondents perceived new cannabis growers as less responsible than cannabis growers who have been in the county for years. About half of respondents believed that increased cannabis legalization will be good for Humboldt County. Fifty-seven percent of respondents were not yet willing to accept that cannabis is a leading industry and that people should support it.
Fifty-four percent of respondents believed that Humboldt County would be better off in the future without cannabis.

Most landowners included in the survey reported having observed changes in grower demographics in the last decade. Most felt that the number of small cannabis growers is decreasing. Sixty-one percent felt that the number of growers connected to organized crime is increasing and perceived an increasing number of green-rush growers in their communities. Most respondents were concerned about organized crime, while only 48% were concerned with green-rush growers and 18% with small growers.

Overall, resident and absentee owners expressed similar views on most issues. Of the survey’s 59 statements on experiences and perceptions, statistically significant differences between the two groups appeared for only eight statements. Absentee owners were more likely to report that their surface water resources had been impacted by growers; that their fences or infrastructure had been destroyed by growers; that their safety had been threatened by growers; and that they had been threatened by growers on public land. They were less likely to agree that growers manage timberland sustainably and that cannabis production decreases their property values.

With this study, we aimed to better understand the experiences and perceptions of traditional agricultural producers: the families who, in most cases for several generations, have made a living off their land, all the while watching changes occur in the social, economic and environmental dynamics that surround cannabis. This survey’s documentation of social tensions may not come as a surprise to those who have lived in Humboldt County. Even after many decades of cannabis cultivation, traditional agricultural producers have not warmed to the people or practices involved in the cannabis industry. Indeed, changes in the social fabric of the cannabis industry have only perpetuated and intensified existing tensions. As this survey shows, concerns about “small growers” are minimal now; those growers have become part of the community, and one-third of respondents agreed that they know growers whose values align with their own. What was novel 40 years ago is now a cultural norm. Today’s concerns center instead on the challenges of current cannabis culture: environmental degradation and the threat of major social and economic change. Respondents mostly agreed that growers today are less reasonable than those who have been in the county for many years. As one respondent wrote, “Growers are a cancer on Humboldt County.” This distrust highlights challenges that can hinder the community-building and mutual-assistance mechanisms often needed in isolated rural communities. The economic influence of cannabis can be seen throughout the county. As the survey shows, approximately 40% of respondents have been indirectly impacted by the cannabis industry, and some respondents have directly profited through cannabis production themselves.
Interestingly, just over half the respondents chose not to say whether they grow cannabis, hinting at the possibility that, even for traditional agricultural producers, cannabis has presented an opportunity to supplement income and cover the costs of landownership. However, the broader economic growth attributed to the cannabis industry is not always viewed favorably, and a majority of respondents agreed that Humboldt County would be better off in the future without cannabis. Some respondents claimed that the industry has increased the cost of labor and that, in many cases, it can be difficult to find laborers at all because the work force has been absorbed by higher-paying cannabis operations. Likewise, many respondents agreed that land values have increased because of cannabis. But for landowners whose property has been passed down through generations, and who have little intention of selling, increased land values translate into increased taxes and difficulty in expanding operations, both of which can be limiting for families who are often land-rich but cash-poor. One respondent wrote, “Yes, the price of land has gone up… but this is a negative. It increases the inheritance tax burden, and it has become so expensive that my own adult children cannot afford to live here.” In Humboldt County’s unique economic climate, it’s difficult for most landowners to decide whether the opportunities the cannabis industry provides are worth the toll that they believe the industry takes on their culture and community — it’s not a simple story. As one respondent noted, “If I had taken this survey 40 years ago, my response would have been very different. With Humboldt County’s poor economy, everyone is relying on the cannabis industry in one way or another.” Our survey provides an important baseline from which such changing attitudes can be measured. Our results should be seen in the context of larger trends involving population and agricultural land in Humboldt County. 
At the time we were preparing our survey, property records indicated that slightly more than 200 landowners in the county owned at least 500 acres; these individuals made up our survey population. Past research, however, has documented that cannabis was likely grown on over 5,000 distinct parcels in Humboldt County in 2016.