
The vertical entry model builds on the large and growing empirical literature on static market entry.

The section also discusses the possible motives for, and effects of, vertical integration in this industry. Section 3.4 presents the econometric specification, followed by a description of the estimation strategy in Section 3.5. After describing the data in Section 3.6, I present the estimation results, including the findings from policy simulation, in Section 3.7. A concluding section follows.

Vertical integration has three main types of efficiency effects. The first is the elimination of double marginalization. Double marginalization occurs when oligopolistic markups are charged in both the upstream and downstream segments. By eliminating the markup in the upstream segment, a vertically integrated firm enjoys a cost advantage over its unintegrated downstream rivals. Chipty finds evidence from the cable TV industry that is consistent with the elimination of double marginalization by vertically integrated firms. The second type of efficiency effect arises from the ability of vertically integrated firms to carry out higher levels of non-contractible relation-specific investments. There is an abundance of empirical research – recent examples of which include Woodruff and Ciliberto – indicating the existence of such investment-facilitation effects. The third type of efficiency effect relates to the ability of vertically integrated firms to secure the supply of an intermediate good or, more generally, to improve coordination in logistics. Theoretical models that explore this aspect of vertical integration – namely, Carlton and Bolton and Whinston – find that the overall effect on market outcomes is indeterminate. Meanwhile, Hortaçsu and Syverson’s empirical analysis of the cement and ready-mixed concrete industries finds that vertical integration motivated by logistical concerns has a price-lowering effect.
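To fix ideas, the elimination of double marginalization can be illustrated with a standard linear-demand example. The numbers below are hypothetical and are not drawn from the data used in this chapter; the sketch simply contrasts the final price set by an integrated monopolist with the price that emerges when upstream and downstream monopolists each add their own markup.

```python
# Double marginalization under linear demand P(q) = a - b*q with upstream
# marginal cost c. Hypothetical textbook parameters, not estimates.

a, b, c = 10.0, 1.0, 2.0

# Vertically integrated monopolist: maximize (a - b*q - c) * q.
q_int = (a - c) / (2 * b)
p_int = a - b * q_int

# Separated chain: upstream sets wholesale price w; downstream best-responds
# with q(w) = (a - w) / (2b); upstream then maximizes (w - c) * q(w),
# which gives w = (a + c) / 2.
w = (a + c) / 2
q_sep = (a - w) / (2 * b)
p_sep = a - b * q_sep

print(p_int, p_sep)  # the separated chain charges the higher final price
```

With these illustrative parameters, the separated chain charges 8 while the integrated firm charges 6: the successive markups raise the final price, which is precisely the inefficiency that integration removes.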

One issue that has not been addressed in the literature is the possibility that the positive efficiency effects of vertical integration may spill over to other firms that are not vertically integrated. Such efficiency spillovers would reinforce the price-lowering or quality-enhancing effects of vertical integration. Foreclosure typically occurs when a vertically integrated firm restricts supply of the intermediate good with an aim to raise the final good price. The significance of such practices has been the subject of continuing debate; Riordan summarizes the notable theoretical models that have shaped the discussion. The models are roughly divided into two groups: models where vertical integration raises downstream rivals’ costs by dampening competition in the upstream market, and models where vertical integration allows upstream units to restrict the supply of the intermediate good and restore monopoly power. Most of the empirical analysis on vertical foreclosure looks directly at the effect of vertical integration on market outcomes. A general conclusion from this literature is that the effect of vertical integration varies across industries; higher final good prices due to vertical integration are found in some industries but not in others. This may be because foreclosure effects exist only in certain industries. It is also possible that in many industries, any foreclosure effects that do exist are offset by the efficiency effects of vertical integration. Useful experimental evidence on vertical foreclosure exists. Normann finds that vertically integrated players often employ strategies that raise their rivals’ costs, as in Ordover et al. Similarly, Martin et al. demonstrate that the monopoly restoration model of Rey and Tirole is partially supported by experimental data. Thus, the experimental literature provides support for vertical foreclosure theory, not least because efficiency effects are absent by design.
Two event studies – Rosengren and Meehan, and Snyder – look at the effect of vertical integration on rival profits to make inferences about vertical foreclosure.

Both papers focus on the effect of a vertical merger announcement on the stock prices of unintegrated rivals. Rosengren and Meehan do not find that vertical mergers have a significant effect on independent downstream rivals. Thus, they find no support for foreclosure theory. Christopher Snyder’s study of the British beer industry, described in Snyder, finds that an independent upstream brewery was harmed by vertical integration between rival breweries and downstream pubs. He interprets this as support for foreclosure theory. A common feature of the existing empirical work on foreclosure is that it assumes exogenous changes in market structure. A defining feature of recent studies such as Hastings and Gilbert and Suzuki has been to design dataset construction and estimation methods so that the exogeneity assumption can be made plausible. Hart and Tirole’s theoretical paper contains some analysis of the effect of vertical integration on market structure formation. Essentially, the changes in profits brought about by vertical integration may induce unintegrated firms to become integrated themselves or to exit the market. Ordover et al. also investigate the possibility that vertical integration by one firm may lead another to become vertically integrated. The possibility that vertical integration can affect the market structure formation process – in other words, that vertical integration exhibits “market structure effects” – is an area that has only recently begun to receive attention from empirical economists. The leading example is Hortaçsu and Syverson. They find that in the cement and ready-mixed concrete industries, unintegrated upstream firms had higher exit probabilities in markets where a higher proportion of entrants were vertically integrated. This was apparently caused by higher productivity levels among vertically integrated firms. In other words, the efficiency effects of vertical integration may have led unintegrated upstream firms to exit.

The vertical entry model presented in this chapter is designed to estimate the effect of rival actions, including vertical integration, on firm payoffs. For instance, the estimated parameters can be used to calculate how an unintegrated firm’s payoff changes when a rival pair consisting of one upstream firm and one downstream firm is replaced by a vertically integrated one. In this sense, the model is closest in spirit to the event studies of Rosengren and Meehan and Snyder that look at the effect of rival vertical integration on firm value. The weakness of the Rosengren and Meehan and Snyder studies – that the impact of vertical integration on market outcomes is not directly observed – is thus shared by the current model. A major concern is that foreclosure effects and efficiency effects often affect unintegrated firms’ profits in the same manner, and thus tend to be indistinguishable. For example, if an unintegrated downstream firm’s profit decreases as a result of rival vertical integration, it could be due to a foreclosure effect, an efficiency effect, or both. Therefore, even if a significant payoff impact is found, one may not be able to conclude anything about the existence of either of these effects. The advantage of my model is that different types of payoffs can be observed. For a few of the payoff functions, the direction of foreclosure effects is different from that of efficiency effects, so that one is distinguishable from the other. For example, if we find that unintegrated upstream profits increase in response to rival vertical integration, the existence of foreclosure effects is implied. This is because efficiency effects can only have a negative impact on an unintegrated upstream firm’s payoff. Similarly, if unintegrated downstream profits increase in response to vertical integration, it must be due to the positive spillover of efficiency effects, because any foreclosure effect would affect unintegrated downstream profits negatively.
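The identification argument above can be restated schematically. The mapping below is my own summary of the sign predictions, not the chapter's formal econometric model: each channel is associated with the payoff changes it can generate, and an observed sign pins down a channel only when the other channel cannot produce it.

```python
# Schematic restatement of the sign-based identification argument
# (an illustrative summary, not the chapter's formal model).
# For each unintegrated firm type, the set of payoff-change signs that
# each channel of rival vertical integration can generate:
POSSIBLE_SIGNS = {
    ("upstream", "foreclosure"): {"+", "-"},   # upstream rivals may gain or lose
    ("upstream", "efficiency"): {"-"},         # efficiency only hurts unintegrated upstream
    ("downstream", "foreclosure"): {"-"},      # foreclosure only hurts unintegrated downstream
    ("downstream", "efficiency"): {"+", "-"},  # spillovers may help; cost advantage may hurt
}

def implied_channels(payoff_type: str, observed_sign: str) -> set:
    """Channels consistent with an observed payoff change for a given firm type."""
    return {channel for (ptype, channel), signs in POSSIBLE_SIGNS.items()
            if ptype == payoff_type and observed_sign in signs}

print(implied_channels("upstream", "+"))    # {'foreclosure'}: foreclosure implied
print(implied_channels("downstream", "+"))  # {'efficiency'}: spillover implied
print(implied_channels("downstream", "-"))  # both channels: indistinguishable
```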
Another characteristic of the vertical entry model is that, unlike in existing studies such as Hastings and Gilbert and Suzuki, one need not assume that firms’ vertical integration decisions are exogenous. In fact, entire market structures, including the vertical integration status of individual firms, are modeled as endogenous outcomes. This implies that the data requirements for the current model may, in some sense, not be as demanding as those of existing methods. There is, however, a rather stringent requirement that the dataset contain observations from multiple markets where complete vertical market structures are observed. An additional strength of the vertical entry model lies in its ability to examine how vertical integration influences market structure formation. In addition to asking what happens to an unintegrated firm’s payoff when a rival pair becomes integrated, one can ask whether the payoff impact is so large that the firm’s entry decision changes. In this connection, a useful application of the model is to evaluate the effect of a policy that bans vertically integrated entry. How does such a ban affect the number of entrants in the upstream and downstream segments? While the answer is not clear a priori, the model and parameter estimates can be used to obtain one by simulation. The empirical literature on static market entry has been motivated by the technical challenge of how to handle the number of rival entrants – a variable that is clearly endogenous – as a key argument of the firm’s payoff function. The earliest studies are Bresnahan and Reiss and Berry.

Building on the pioneering work of Bjorn and Vuong, their econometric models explicitly allow market structure outcomes to be equilibria of entry games. For example, Berry’s model contains an equilibrium finding algorithm that is run at each iteration of the parameter search. These early papers focus exclusively on horizontal competition among firms that produce a homogeneous good. Coefficients on the number-of-rivals variables represent rival effects; from them, information on the degree of competitiveness in the market can be inferred. More recent papers such as Mazzeo, Seim, and Orhun expand the entry model framework to allow for product and spatial differentiation. For instance, in Mazzeo’s study of motel markets, potential entrants choose between entering the low-quality segment or the high-quality one. His results provide insight not only into the degree of competition within and across different market segments, but also into the process of market structure formation. For instance, the estimated parameters are used to predict how the product-differentiated market structure changes in response to increases in population and traffic. Another group of papers uses the entry model framework to investigate the existence of complementarities in firm actions. For example, Vitorino finds that the existence of agglomeration effects allows stores to profit from co-locating inside shopping centers. The present model examines the formation of vertical market structures in which suppliers and buyers trade and compete. As in the papers on horizontal entry, the estimates provide information on the degree of competition within each vertical segment. In addition, the complementarity between upstream entry and downstream entry can be examined. Finally, and most interestingly, the model should provide evidence on the competitive role of vertically integrated firms. Do vertically integrated firms hurt upstream rivals more than they harm downstream ones?
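The equilibrium logic of these homogeneous-good entry models can be sketched in a few lines. The payoff function below is a hypothetical stand-in, not an estimated specification: in the Bresnahan-Reiss tradition, the free-entry number of firms is the largest n at which each of n entrants still earns non-negative profit.

```python
# A minimal free-entry equilibrium sketch in the spirit of Bresnahan and
# Reiss. The payoff function is hypothetical; in estimation, its parameters
# would be inferred from observed numbers of entrants across markets.

def payoff(n: int, market_size: float, fixed_cost: float) -> float:
    # Per-firm variable profit shrinks as the number of rivals grows.
    return market_size / n**2 - fixed_cost

def equilibrium_entrants(market_size: float, fixed_cost: float, n_max: int = 50) -> int:
    """Largest n such that each of n entrants earns non-negative profit."""
    n_star = 0
    for n in range(1, n_max + 1):
        if payoff(n, market_size, fixed_cost) >= 0:
            n_star = n
        else:
            break
    return n_star

print(equilibrium_entrants(market_size=9.0, fixed_cost=1.0))  # 3
```

In such a payoff function, the dependence on the number of rivals is what carries the information about the degree of competition.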
Can some firms benefit from facing a vertically integrated competitor instead of an unintegrated pair of firms? Such questions are empirical in nature, and answering them is the subject of this chapter.

This section describes the process of vertical market structure formation in the generic pharmaceutical industry to motivate the econometric model. As described in Chapter 2, drug markets open up to generic competition when the patents and data exclusivities that cover the drug expire. In each market, generic drug manufacturers make entry decisions a few years before the market opening date. If an upstream unit decides to enter, it develops the active pharmaceutical ingredient (API) and submits a dossier, called the Drug Master File, to the Food and Drug Administration (FDA). A downstream entrant, on the other hand, procures the API – either from an outside supplier or from its own production – and develops the finished formulation. It then conducts bio-equivalence tests using the finished product and files an Abbreviated New Drug Application (ANDA) with the FDA. Two peculiar aspects of the generic entry process need to be addressed before providing a stylized description. The first is the possibility of patent challenge by generic entrants. As described at length in Chapter 2, the regulatory rules governing generic entry incentivize generic entrants to challenge the ability of originator patents to block entry, by way of a 180-day generic exclusivity awarded to the first-to-file paragraph IV ANDA applicant. The existence of such incentives pushes firms into a race to be first whenever a paragraph IV patent challenge is involved. The economics of such a race is very different from that of a conventional entry game in which firms move simultaneously. For this reason, this chapter focuses only on markets that are not subject to a paragraph IV patent challenge.

Suppose also that the patent is the only one protecting a particular drug market.

Then, the act of invalidation benefits not only the generic firm who made the investment, but also others who seek to enter the market. Because such public goods tend to be undersupplied in a competitive market, Congress created a system to reward the first generic firm to invest in a patent challenge. The reward is given out through a complex process that I summarize here. When a generic firm files an ANDA containing a paragraph IV certification with the FDA, it must directly notify the originator, as well as the other holders of the patents being challenged, about its filing. The originator must then decide within 45 days whether or not to initiate a patent infringement suit. If the originator decides not to sue, then the FDA is allowed to approve the ANDA and the generic may enter the market. If the generic firm is the first to have filed a substantially complete ANDA containing a paragraph IV certification, it is awarded a 180-day exclusivity in the generic market. This means that the FDA is not allowed to approve any other ANDA until 180 days have passed since the first generic product’s commercial launch. If the originator decides to sue the generic entrant, then the FDA is stayed from giving final approval to the ANDA until 30 months have passed or until a court decides that the patent in question is invalid or not infringed, whichever comes sooner. The FDA may review the ANDA in the meantime, but it can only issue a tentative approval. Thus, the 30-month stay functions as an automatic preliminary injunction against the paragraph IV ANDA applicant. The main possible outcomes of the patent infringement suit between the originator and the paragraph IV applicant are the following: a victory for the generic entrant, a loss for the generic entrant, or a settlement between the two parties.

If the generic applicant wins the patent infringement suit, its ANDA receives final approval from the FDA once the other patents listed in the Orange Book expire. If the generic firm is the first to have filed a substantially complete paragraph IV ANDA, it obtains the right to 180-day exclusivity. The exclusivity period starts when the first-to-file generic begins commercial marketing or when a court decides that the patent in question is invalid or not infringed, whichever is earlier. If the generic firm loses the infringement suit for every challenged patent, then its ANDA is not approved until expiration of those patents or until the end of the 30-month stay. Even if the firm is the first-to-file paragraph IV applicant, it is not awarded the 180-day exclusivity, because the right to exclusivity disappears with the expiration of the challenged patents. If the generic and originator firms decide to settle the patent infringement suit, the generic firm’s ANDA is approved only after the 30-month stay. If the generic firm is the first-to-file paragraph IV applicant, it becomes eligible for 180-day exclusivity, which is triggered by the generic product’s commercial launch. The right to 180-day exclusivity is given only to the first-to-file paragraph IV applicant. If the first-to-file applicant loses in patent infringement litigation or otherwise forfeits its right to 180-day exclusivity, the right disappears; it is not rolled over to the next-in-line applicant. If multiple firms file ANDAs with paragraph IV certifications on the same day, and no prior ANDA has been filed, the right to generic exclusivity is shared between those firms. The large profits available from 180-day exclusivities have made generic firms more aggressive in their patent challenges.
As Grabowski and Higgins and Graham note, the number of ANDAs containing paragraph IV certifications increased rapidly after the regulatory change: the average number of paragraph IV ANDA filings per year rose from 13 during 1992-2000 to 94 during 2001-2008.

While this increase partly reflects the greater number of blockbuster drugs going generic in the latter period, observers agree that the regulatory change played a significant role. Table 2.1 presents the share of generic markets that were the subject of one or more paragraph IV ANDA filings in a sample of 128 markets that opened up during 1993-2005. As described more fully in Section 2.5, drug markets were selected for inclusion using the following criteria: the drug product contains only one API; of the set of finished formulations containing the same API, the product is the first to experience generic entry; and there is at least one generic entrant in the market. The propensity of paragraph IV challenges jumps suddenly for markets that experienced first generic entry in 1999. This reflects expectations among generic firms that the FDA would give out more 180-day exclusivities following the 1998 court decisions. The share of generic markets with paragraph IV certifications remains high – at around one-half – in the subsequent years. Grabowski comments that the granting of more 180-day exclusivities has, in some cases, turned the generic entry process into a race to be first. Higgins and Graham note that as a result of more aggressive efforts by generic entrants, ANDA filings have come to take place earlier in a drug’s lifecycle. Indeed, there have been many markets where multiple generic firms filed their paragraph IV ANDAs exactly four years after the approval of the originator’s NDA – that is, on the earliest date allowed by the FDA. Also, Grabowski and Kyle show that drug markets with higher revenue tend to experience generic entry sooner, partly because they tend to be more heavily targeted for paragraph IV challenges. Interestingly, while ANDA filings are being made increasingly early, Grabowski and Kyle find no evidence that generic product launches are occurring earlier in the drug’s life cycle in markets that opened up more recently.

This may be because the Hatch-Waxman system has had an unintended side effect. As reported by the Federal Trade Commission and Bulow, the system has been used by some originators, somewhat paradoxically, to delay generic entry through the use of so-called “pay-to-delay” settlements. Given that the existence of a patent challenge turns the generic entry process into a race to be first, econometric analysis of generic firm behavior would ideally be based on a model that takes the timing of entry into account. Unfortunately, the data that I use do not contain accurate information on the timing of entry by each generic firm. Also, I do not observe whether or not each ANDA filing contains a paragraph IV certification because this information is not disclosed by the FDA. On the other hand, the FDA publishes a list of drug markets that were the subject of one or more ANDAs containing a paragraph IV certification. Therefore, it is possible to distinguish between paragraph IV markets and non-paragraph IV markets, and to see if firm behavior differs across the two groups. Our interest in this study is in seeing if paragraph IV patent challenges are associated with generic firms’ vertical integration decisions. How might such an association arise? As I argue in Section 2.3, when generic entry involves a race to be first, investments made by upstream API manufacturers tend to become specific to a particular downstream buyer. If contracts between unintegrated upstream suppliers and downstream buyers are incomplete and payoffs are determined through ex post bargaining, this increase in relationship specificity could enhance the role of vertical integration as a way to facilitate investments.
In the empirical analysis, I examine whether the occurrence of paragraph IV certification at the market level is associated with higher incidence of vertical integration at the firm level.

Before turning to the formal analysis, let us examine the pattern of vertical integration in the generics industry. Figure 2.1 shows how the prevalence of vertical integration at the market level has changed over time. It is based on the sample of 128 markets that opened up between 1993 and 2005. It can be seen that the average number of downstream entrants per market has remained stable at around five. On the other hand, the share of those downstream entrants that are vertically integrated has increased over time. For markets that opened up in the 1993-2000 period, the average share of vertically integrated entrants, as a percentage of the number of downstream entrants, was 8.1 percent. In 2001-2005, the figure rose to 24.1 percent, and the difference between the sub-periods is highly significant. The incidence of vertical integration has similarly risen over time. In each year from 1993 to 2000, an average of 24.0 percent of the sample markets opening up had one or more vertically integrated entrants. For the years 2001-2005, the average share of markets having any vertically integrated entry was 64.6 percent. An interesting fact about the US generic pharmaceutical industry is that it started off as being vertically separated. When the industry began its growth in the 1980s, finished formulation manufacturers procured most of their API requirements from outside suppliers located in Italy, Israel, and other foreign countries.

This was mainly due to differences in patent protection across countries: while strong patent protection in the US made it difficult for domestic companies to develop APIs before the expiration of originator patents, the weak patent regimes in Italy and other countries at the time allowed firms located there to develop generic APIs early. In addition to these historical origins, the nature of the generics business also made vertical separation a natural outcome. Different downstream manufacturers of generic drugs produce near-identical products because, by definition, they are all bio-equivalent to the original product. Therefore, the APIs manufactured by different upstream firms are also expected to be homogeneous. This implies that, in general, investments in API development by an upstream manufacturer are not specific to a particular downstream user. In other words, the investment-facilitation effects of vertical integration are unlikely to be important in this industry under normal circumstances. This is analogous to Hortaçsu and Syverson’s observation that the efficiency benefits of vertical integration were unlikely to have been strong in the cement and ready-mixed concrete industries during the 1960s when the vertical merger wave took place. Nevertheless, as Figure 2.1 demonstrates, vertical integration has become more prevalent over time in the generics industry. Several possible reasons for this can be found in industry reports. One is that early development and procurement of APIs has become more important to the profitability of downstream manufacturers in recent years, particularly in markets characterized by paragraph IV patent challenges. For example, the annual report of Teva, the industry’s largest firm, describes the motive for vertical integration as follows: “to provide us with early access to high quality active pharmaceutical ingredients and improve our profitability, in addition to further enhancing our R&D capabilities.”
Karwal mentions that “having access to a secure source of API can make a significant difference, particularly relating to difficult-to-develop API, when pursuing a potential Paragraph IV opportunity, and to secure sufficient quantities for development”. Similarly, Burck notes that “Access to API and control of the development and manufacturing process to support patent challenges has often been cited as a reason for backward integration”. These comments suggest that vertical integration allows downstream manufacturers to obtain APIs sooner than they otherwise would, and that this aids them in attaining first-to-file status in paragraph IV markets. This would partly explain why the increased prevalence of vertical integration appears to have followed closely behind the increase in paragraph IV patent challenges. A second possible cause of increased vertical integration pertains to bandwagon effects. A former purchasing executive at Sandoz, one of the industry’s largest firms, mentions that firms vertically integrate to “avoid sourcing API from a competitor”. Karwal points out that “Many key API suppliers, especially from India, China and Eastern Europe, are moving up the value chain and decreasing their supply activities, becoming direct competitors in finished form generics”. He suggests that this is one of the factors behind increased backward integration by established downstream manufacturers. In the mid-2000s, traditionally unintegrated US firms in the downstream segment began acquiring API manufacturing assets. Examples include the acquisitions of Indian API manufacturers by Mylan and Watson, both large US finished formulation companies. Notably, these actions by two of the industry’s main players took place after vertically integrated entry became common.

Agricultural irrigation tail water from flood and furrow irrigation constituted the main water source for all wetlands.

In addition to concerns about food safety, microbial pathogens are considered to be among the leading causes of water quality impairment in California agricultural watersheds. Within a watershed, pathogenic bacteria and protozoa from humans, livestock, wildlife, and pets can be found in runoff and can contaminate surface water bodies. Non-point sources of pollution have become the main sources of microbial pollution in waterways, with agricultural activities, including manure application to fields, confined animal operations, pastures, and rangeland grazing, being the largest contributors. Constructed and restored wetlands have been among the few water management options proposed as being available to growers to filter and improve the quality of water in agricultural runoff that contains a wide range of contaminants. Specifically, constructed wetlands have been shown to be highly effective at removing pathogens from water. However, wetlands may also provide habitat for animals, including birds, livestock, deer, pigs, rodents, and amphibians, which may in turn vector pathogens that cause human disease. These animals deposit feces and urine within the wetland, an effect that has the potential to negate any benefit from pathogen removal caused by wetland filtering. After past outbreaks of foodborne illness caused by E. coli O157:H7 borne on lettuce and spinach grown in California, some food safety guidelines have encouraged growers to reduce the presence of wildlife by minimizing non-crop vegetation, including wetlands, that could otherwise attract wildlife to farm fields growing fresh produce. In this situation, food safety guidelines may be at odds with water quality improvement measures. Many constructed and restored wetlands in California have been built with support from the USDA-NRCS through the Environmental Quality Incentives Program and the Wetland Reserve Program.
Under these programs, most wetland systems were initially developed to mitigate the loss of wetlands and improve wildlife habitat. A key element of the design of these systems is that they receive agricultural runoff as input flows intended to maintain the wetland’s saturated conditions.

In addition to increasing wildlife habitat, the observed water quality improvements linked with these types of wetlands have made them an attractive “best management practice” for irrigated agriculture. Our purpose in writing this publication is to show how wetlands may be used to improve water quality in agricultural settings where pathogens are a matter of concern. In addition, we will discuss wetland design and management considerations that have the potential to maximize pathogen removal and minimize microbial contamination. The following case study highlights the effectiveness of wetlands as a tool to improve water quality and demonstrates the importance of specific design characteristics. A water quality assessment of seven constructed or restored surface flow-through wetlands was conducted across the Central Valley of California. Wetlands differed in such parameters as size, age, catchment area, vegetation type and coverage, and hydrologic residence time. W-1 through W-4, located in the San Joaquin Valley and discharging into the San Joaquin River, were continuous flow wetlands. W-5 through W-7, situated in the Sacramento Valley and discharging into the Sacramento River, were flood-pulse wetlands with a water management regime consisting of flood pulses every 2 to 3 weeks, followed by drainage for 3 to 4 days prior to the next flood pulse. W-2 and W-3 shared the same input water source, and the same was the case for W-5, W-6, and W-7. Several water quality parameters were measured at input and output locations during the growing season to evaluate the systems’ ability to improve water quality. Both concentration and load are important considerations when assessing water quality constituents. Concentration represents the mass, weight, or volume of a constituent relative to the total volume of water. Load represents the cumulative mass, weight, or volume of a constituent delivered to some location.
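The difference between the two measures can be made concrete with a short calculation using purely illustrative numbers: two water samples with the same concentration can carry very different loads depending on flow volume.

```python
# Illustrative only: converting an E. coli concentration (cfu per 100 ml)
# and a flow volume (liters) into a total load (cfu delivered).

def load_cfu(concentration_cfu_per_100ml: float, volume_liters: float) -> float:
    """Load = concentration x volume, converting liters to 100-ml units."""
    return concentration_cfu_per_100ml * (volume_liters * 1000.0 / 100.0)

# Same concentration, very different loads:
print(load_cfu(126.0, 1_000.0))    # 1,260,000 cfu
print(load_cfu(126.0, 100_000.0))  # 126,000,000 cfu
```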

The flow-through wetlands were most effective at reducing total nitrogen, total suspended solids, and E. coli loads, and were moderately effective at reducing total phosphorus loads. In many instances, the flood-pulse wetlands were actually a source of contaminants, as indicated in table 2 by the negative numbers they show for removal efficiency. E. coli loads in outflows were significantly lower than inflow loads at all flow-through wetlands (decreases of 80 to 95 percent), while the flood-pulse wetlands showed significant increases in total E. coli loads. The differences in contaminant removal for flow-through versus flood-pulse wetlands can be attributed to two factors. First, the input water for the flood-pulse systems was very clean, so any introduced contaminants were readily detectable. The average E. coli concentration for input water was 62 cfu 100 ml−1 in the flood-pulse wetlands, compared to over 200 cfu 100 ml−1 in the flow-through wetlands. Second, the overly long hydrologic residence times of flood-pulse systems can allow contaminants to become more concentrated through the processes of water evaporation, leaching of nutrients from soils and organic matter, and introduction of nutrients and contaminants from feces and urine deposited by wildlife that inhabit the wetlands. Enterococci and E. coli are standard federal- and state-regulated constituents used as indicators of fecal contamination in water. In the flow-through wetlands, approximately 47 percent of water samples collected from irrigation return flows exceeded the EPA recreational contact water standard for E. coli of 126 cfu 100 ml−1. In contrast, E. coli concentrations in wetland outflows ranged from 0 to 300 cfu 100 ml−1. Following wetland treatment, 93 percent of wetland outflows met the California water quality standard for E. coli concentration.
For enterococci, 100 percent of the input water samples exceeded the water quality standard of 33 cfu 100 ml−1.
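Removal efficiencies like those reported here are conventionally computed from inflow and outflow loads. The sketch below illustrates that convention with hypothetical loads; negative values correspond to wetlands that acted as net contaminant sources, as table 2 shows for the flood-pulse systems:

```python
def removal_efficiency(inflow_load, outflow_load):
    """Percent removal of a constituent between wetland inflow and
    outflow. Negative values mean the wetland acted as a source of
    the constituent rather than a sink."""
    return (inflow_load - outflow_load) / inflow_load * 100.0

# Hypothetical loads (cfu): a flow-through wetland removing 90% of
# inflow E. coli, and a flood-pulse wetland exporting more than it received.
flow_through = removal_efficiency(1000.0, 100.0)   # positive: net removal
flood_pulse  = removal_efficiency(100.0, 150.0)    # negative: net source
```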

Despite exceeding the water quality standard, the bacteria levels found here are very low when compared to other contaminated water sources, such as wastewater. Although enterococci removal efficiencies ranged from 86 percent to 94 percent, only 30 percent of the outflow enterococci concentrations met water quality standards. Results from this study indicate that by passing irrigation tail water through wetlands, a grower can significantly reduce the water’s pathogen concentration and load, as well as other water quality contaminants common to agricultural settings. Some water quality standards may never be met with wetland filtering alone, especially where the standards require extremely low values, as is the case for enterococci in irrigation water used on farms that grow produce that is intended to be consumed raw. Wetland design and management need to be considered prior to construction and throughout the life of the system. In many cases, the natural mechanisms that promote contaminant removal or retention can be manipulated through careful design, management of hydrology, and maintenance of appropriate vegetation. Natural mechanisms for reducing bacterial pathogens are not fully understood and have received only limited study in irrigated agriculture. Wetlands are known to act as bio-filters through a combination of physical, chemical, and biological factors, all of which contribute to the reduction of bacteria numbers. Where input water has a relatively low concentration, wetland background levels are so low that water passing through the wetland may actually end up with increased pathogen concentrations. As high-energy input flows disperse across the wetland, the water’s velocity decreases, and particles that had been suspended in the water settle to the bottom.
The energy needed to support suspended particles in the water flow dissipates as the cross-sectional area of the wetland flow path increases, and vegetation reduces the water’s turbulence and velocity. The rate of sedimentation is governed by particle size, particle density, water velocity and turbulence, salinity, temperature, and wetland depth. Larger pathogens tend to settle more quickly than smaller ones. The actual removal of pathogens by means of sedimentation depends on whether the pathogens are free-floating or are attached to particles. Pathogens can be attached to suspended particles such as sand, silt, clay, or organic particulates. Microbial contaminants associated with particles, especially dense, inorganic soil particles, settle out in wetlands sooner than those in the free-floating form. Studies have shown that the rate of pathogen removal is greater in wetlands where the input waters have a high sediment load. Some wetland designs are more prone to encourage wave activity, which prevents sedimentation and encourages re-suspension of settled particulates. High wind velocities promote wave activity. Large, open-water designs are more prone to water turbulence because wind velocity increases over a large, smooth surface. Wetland vegetation can help minimize water turbulence and particle re-suspension. For example, trees planted as wind barriers surrounding the wetland decrease the amount of wind on the wetland. Emergent vegetation within the wetland can anchor sediment with its roots and can dampen the velocity of wind moving across the water surface. Dendritic wetland designs, which consist of a sinuous network of water-filled channels and small, vegetated uplands, can help reduce water turbulence associated with high winds. Vegetative cover has been shown to decrease sediment re-suspension.
For example, Braskerud found that an increase in vegetative cover from less than 20 percent up to 50 percent reduced the rate of sediment re-suspension from 40 percent down to near zero. Wetland depth may also have an indirect effect on sediment retention.
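The dependence of settling on particle size and density described above is often approximated with Stokes' law for small spheres. The sketch below is an illustration only; the particle sizes and densities are assumed for the example, not taken from these studies:

```python
def stokes_settling_velocity(diameter_m, particle_density,
                             fluid_density=1000.0, viscosity=1.0e-3, g=9.81):
    """Stokes' law settling velocity (m/s) for a small sphere in water.
    Illustrates why larger, denser particles (and the pathogens attached
    to them) settle out sooner than small, free-floating cells."""
    return g * diameter_m**2 * (particle_density - fluid_density) / (18.0 * viscosity)

# Hypothetical particles: a 20-micron silt grain vs. a ~1-micron free bacterium.
silt = stokes_settling_velocity(20e-6, 2650.0)
cell = stokes_settling_velocity(1e-6, 1100.0)
```

The several-thousand-fold difference in settling velocity between these two cases is consistent with the observation that particle-associated pathogens are removed by sedimentation much more readily than free-floating ones.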

The water should be deep enough to mitigate the effect of wind velocity on the underlying soil surface, but if the water is too deep, vegetation will not be able to establish and a significant increase in re-suspension of sediment will result. Water depths between 10 and 20 inches optimize conditions for plant establishment, decreased water velocity, well-anchored soil, and a short distance for particles to fall before they can settle. An excess of vegetation can significantly reduce a wetland’s capacity to retain E. coli. Maximum removal of E. coli occurs under high solar radiation and high temperature conditions, and vegetation provides shading that can greatly reduce both UV radiation and water temperatures. While vegetation can provide favorable attachment sites for E. coli, a dense foliage canopy can hinder the free exchange of oxygen between the wetland and the atmosphere. This vegetation-induced barrier to free exchange of oxygen limits dissolved oxygen levels, and that in turn reduces predaceous zooplankton, further decreasing removal of microbial pathogens from the wetland environment. The plants’ uptake of pollutants, including metals and nutrients, is an important mechanism, but is not really considered a removal mechanism unless the vegetation is harvested and physically removed from the wetland. Wetland vegetation also increases the surface area of the substrate for microbial attachment and the biofilm communities that are responsible for many contaminant transformation processes. Shading from vegetation also helps reduce algae growth. However, certain types of vegetation can attract wildlife such as migrating waterfowl, which may then become a source of additional pathogens. Vegetation that serves as a food source or as roosting or nesting habitat for waterfowl may need to be reduced in some settings. Among other important considerations for vegetation coverage in wetlands, one must include total biomass and depth features.
Vegetation should provide enough biomass for nutrient uptake and adsorptive surface area purposes, but must also be managed to allow sufficient light penetration to enable natural photodegradative processes and prevent accumulation of excessive plant residues, which would prevent the export of dissolved organic carbon. One way to promote this balance is to create areas of deeper water intermixed with the shallower areas. In an agricultural setting, it may be hard to establish plantings of native species within wetlands due to the large seed bank of exotic species that may be present in input waters. You can also manage the type and amount of vegetation by manipulating the timing and duration of periods of standing water in the system. In extreme instances, you can actually harvest excess biomass. In addition to managing vegetation and water depth to maximize sedimentation and pathogen photodegradation, growers can also manipulate hydrology to maximize the removal of microbial pollutants in wetlands. The importance of hydrologic residence time is apparent when you recognize that a longer HRT increases the exposure of bacteria to any removal processes such as sedimentation, adsorption, predation, impact of toxins from microorganisms or plants, and degradation by UV radiation. E. coli concentrations have been shown to increase in runoff from irrigated pastureland when the volume of runoff is increased.
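The effect of a longer HRT can be illustrated with a simple first-order decay approximation, a common idealization rather than the model used in the studies cited here; the decay constant below is hypothetical:

```python
import math

def hydraulic_residence_time(volume_m3, flow_m3_per_day):
    """HRT (days) = wetland water volume / flow rate through the wetland."""
    return volume_m3 / flow_m3_per_day

def outflow_concentration(c_in, k_per_day, hrt_days):
    """First-order decay: C_out = C_in * exp(-k * HRT). The rate
    constant k lumps together sedimentation, UV die-off, predation,
    and the other removal processes named above."""
    return c_in * math.exp(-k_per_day * hrt_days)

hrt = hydraulic_residence_time(5000.0, 1000.0)   # 5-day residence time
c_out = outflow_concentration(200.0, 0.5, hrt)   # hypothetical k = 0.5 per day
```

Under this idealization, doubling the residence time squares the surviving fraction, which is why slowing flow through a wetland can pay off disproportionately, up to the point where long residence times let wildlife inputs and evaporation re-concentrate contaminants, as observed in the flood-pulse systems.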

One of the most critical factors affecting crop growth rate is the air flow velocity over plants

Stavrakakis et al. investigated the capability of three Reynolds-Averaged Navier-Stokes models to simulate natural ventilation in buildings. Papakonstantinou et al. presented a mathematical model for turbulent flow and accordingly developed a 3-D numerical code to compute velocity and temperature fields in buildings. A novel gas-liquid mass transfer CFD model was developed by Li et al. to simulate the absorption of CO2 in a microporous microchannel reactor. Yuan et al. visualized the air paths and thermal leakages near a complex geometry using a transient thermal model with buoyancy-driven convection, conduction and thermal radiation heat transfer and flow field near a vehicle structure. In the context of agriculture, researchers have extensively employed CFD analysis to study ventilation, air flow, and microclimate in indoor systems. Zhang et al. developed a CFD simulation to assess single-phase turbulent air streams in an indoor plant factory system and achieved the highest level of flow uniformity with two perforated tubes. Karadimou and Markatos developed a transient two-phase model to study particle distribution in the indoor environment using the Large Eddy Simulation method. Baek et al. used CFD analysis to study various combinations of air conditioners and fans to improve growth rate in a plant factory. More recently, Niam et al. performed a numerical investigation and determined that the optimum position of air conditioners in a small vertical plant factory is at the top. In addition, a variety of mathematical techniques have been proposed to provide sub-models for investigating photosynthesis. Using a CFD model to study the water vapor, temperature, and CO2 distribution in a Venlo-type semi-closed glass greenhouse, Boulard et al. found that tall canopies can induce stronger cooling of the interior air.
Although photosynthesis plays an integral role in the distribution of species and uniformity along cultivation trays, this issue has not been well addressed. While numerous studies have investigated turbulent flow in enclosures and buildings, this study is the first to numerically investigate the transport phenomena in an IVFS with CFD simulations that account for product generation and reactant consumption through photosynthesis and plant transpiration.

Furthermore, a newly proposed objective uniformity parameter is defined to quantify velocity uniformity for individual cultivation trays. Moreover, numerical simulations are performed to simulate and optimize fluid flow and heat transfer in an IVFS for eight distinct placements of flow inlets and outlets in this study. Accordingly, the effects of each case on uniformity, relative humidity, temperature, and carbon dioxide concentration are discussed in detail. Finally, an overall efficiency parameter is defined to provide a holistic comparison of all parameters and their uniformity for each case. In this study, three-dimensional modeling of conjugated fluid flow and heat transfer is performed to simulate the turbulent flow inside a culture room having four towers for hydroponic lettuce growth. Assuming that the four towers are symmetric, a quarter of the room with four cultivation trays is selected as the computational domain, as illustrated in Fig. 1a. Symmetry boundaries are set at the middle of the length and width of the room. The effect of LED lights on heat transfer is considered through constant heat flux boundary conditions at the bottom surface of each tray, as shown in Fig. 1b. Lastly, species transfer due to photosynthesis occurs only in the exchange zone, which is illustrated in Fig. 1c. To study the impact of air inlet/exit locations on characteristics of air flow, four square areas, denoted as A, B, C, and D in Fig. 1a, are considered to be inlet, exit, or wall. To perform a systematic study, Table 1 presents the location of inlet and exit for all eight cases studied. With the aim of comparing all of the proposed designs, case AB is selected as the baseline. A fluid stream with horizontal speed ranging from 0.3 to 0.5 m s−1 can enhance the species exchange between the flow and plant leaves, resulting in enhancement of photosynthesis.
In indoor farming systems, the flow velocity can be controlled well using ventilation fans for more efficient plant growth.

However, heterogeneous distribution of feeding air over plant trays can cause undesirable non-uniformity in crop production, which should be avoided. Therefore, it is important to study the effect of inlet-outlet location and flow rate on the flow patterns throughout the culture room. Herein, the most favorable condition is defined as the condition at which the flow velocity above all trays is equal to the optimum speed Uo, which is set to be 0.4 m s−1. The objective uniformity, OU, defined in Eq. is used to assess the overall flow conditions. The OU for all eight cases as a function of mass flow rate is summarized in Fig. 5. Since the inlet/exit area and air density remain the same, the mass flow rate is directly proportional to flow velocity. In addition, the target flow velocity over the plants is set to be 0.4 m s−1. Therefore, OU follows a general trend of first increasing and then decreasing as the overall mass flow rate increases. Depending on the design, the peak of OU occurs at a different mass flow rate for each case. Another general trend is that the peak of OU occurs at a lower mass flow rate if the inlet is located at the top, due to buoyancy force. This can be clearly demonstrated by cases AB and BA, or AD and DA. Therefore, there exists a different optimal inlet/exit design for each mass flow rate condition. As can be seen from Fig. 5, the maximum OU at flow rates of 0.2, 0.3, 0.4 and 0.5 kg s−1 is observed for configurations AD, BC, BA, and DA, respectively. Therefore, this simulation model can identify the optimal flow configuration at a specific mass flow rate condition. Since OU quantifies the deviation of the average velocity of each tray from the designed velocity, a higher OU value indicates that the crops will have better and more uniform photosynthesis. It can be observed from Fig. 5 that the maximum OU across all conditions is obtained by case BC at a flow rate of 0.3 kg s−1.
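The exact equation for OU is not reproduced in this excerpt. One plausible form consistent with the description, the deviation of each tray's average velocity from the design speed Uo, is sketched below as an illustration only, not as the paper's actual definition:

```python
def objective_uniformity(tray_velocities, u_target=0.4):
    """Hypothetical uniformity metric: 1 minus the mean relative
    deviation of each tray's average velocity (m/s) from the target
    speed. Equals 1.0 when every tray sits exactly at u_target and
    decreases as trays deviate. A plausible reconstruction, since the
    paper's Eq. is not shown in this excerpt."""
    deviations = [abs(u - u_target) / u_target for u in tray_velocities]
    return 1.0 - sum(deviations) / len(deviations)

perfect = objective_uniformity([0.4, 0.4, 0.4, 0.4])   # every tray at target
uneven  = objective_uniformity([0.2, 0.3, 0.5, 0.6])   # large tray-to-tray spread
```

Any metric of this shape reproduces the qualitative behavior reported: OU peaks when the mean tray velocity matches Uo, and falls off both when flow is too weak to reach the trays and when it is strong enough to overshoot.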
To develop a better understanding, the two-dimensional velocity and vorticity distributions in the x-y plane along the middle of the z-direction for all eight cases at a mass flow rate of 0.3 kg s−1 are plotted in Figs. 6 and 7.

As can be observed from Figs. 6 and 7, the OU is highest for case BC due to its uniform velocity and vorticity distributions between trays. This can be attributed to the position of the inlet/exit location with respect to the tray orientation. For case BC, the inlet flow is parallel to the longitudinal direction of the tray and the exit is along the transverse direction. This design allows the flow to travel through the long side of the tray uninterrupted and then form a helical flow orientation near the end of the tray. This spiral formation of flow induces a more uniform and regular flow in the room. This also explains why case AD has very high OU. Similar spiral formation can also be observed when the inlet flow is parallel to the transverse direction of the tray and the exit is along the longitudinal direction, as in case DA. However, since the inlet flow is along the short side of the tray, the benefit is not as great and requires a much higher inlet mass flow rate. On the other hand, for cases where the inlet and exit are located on the same wall, such as AB or CD, the air flow only has a strong mixing effect along the inlet/exit direction which, in turn, reduces the overall flow uniformity. Besides the velocity distribution, temperature is also a critical parameter for determining convective flow. Fig. 8 shows the two-dimensional temperature distributions in the x-y plane along the middle of the z-direction for all eight cases at a mass flow rate of 0.3 kg s−1. In our analysis, the temperature of the inlet flow is lower than that of the exit flow due to the heat generated from the LED light. For case BC, the inlet is located near the bottom and the exit is near the top. Due to the density difference, the warm exit stream tends to flow up. This allows the flow to reach the topmost tray more easily and, therefore, achieves a more uniform temperature distribution among all trays.
Combining the inlet flow along the long side of the tray, the helical flow effect, and the buoyancy, case BC is able to reach the maximum OU of 91.7%. Fig. 9 summarizes the velocity and temperature contours for case BC at an inlet mass flow rate of 0.3 kg s−1. The velocity profiles in Fig. 9a clearly show the spiral effect above each cultivation tray, and the local velocity is close to the optimal speed of 0.4 m s−1. In addition, the temperature shows an increasing trend from bottom to top as the flow helically passes through the crops and moves towards the outlet. The distributions of temperature and gas species, such as water vapor and CO2, play an integral role in photosynthesis which, in turn, influences the quality of the plant and its growth. Therefore, maintaining these critical parameters in a reasonable range to ensure reliable and efficient production is essential to environmental control of an IVFS. Evaluating the distribution of these parameters can also indicate the effectiveness of the inlet/exit location. It should be noted that the parameter OU provides an overall assessment of the air flow velocity over planting trays. An optimal design is one that achieves the desired local temperature and species distribution while maintaining high OU values in an IVFS. In the following discussion, the four cases with the highest values of OU at their corresponding mass flow rates are studied and compared to the baseline case AB. Since CO2 is a reactant of photosynthesis, increasing CO2 concentration usually leads to enhancement of crop production. Reports show that increasing the CO2 concentration from the atmospheric average of 400 ppm to 1500 ppm can increase the yield by as much as 30%. In this IVFS analysis, the CO2 level of the inlet mass flow rate is increased by a CO2 generator to be 1000 ppm. Since the consumption rate of CO2 through the exchange zones is fixed, a higher overall average CO2 concentration through the system is desirable. Fig.
10 shows the comparison of the average CO2 concentration between the highest OU cases and the baseline case AB at different inlet mass flow rates. A few general trends of CO2 concentration can be observed from Fig. 10. First, the CO2 concentration increases with inlet flow rate due to the increasing supply of CO2 molecules. In addition, tray 1 has the highest CO2 concentration because most of the cold fresh inlet air dwells near the bottom of the IVFS due to the buoyancy effect. In contrast, tray 3 has the lowest CO2 concentration because the fresh inlet air has the highest flow resistance to reach tray 3, due to the combination of sharp turns and the buoyancy effect. This is particularly true at low inlet flow rates and when the inlet is located on the top, which lead to low flow circulation as cold inlet air flows downward directly. As a result, BC, BA, and DA at 0.3, 0.4, and 0.5 kg s−1, respectively, have relatively high CO2 concentrations. Even though the baseline case AB at 0.5 kg s−1 has the highest CO2 concentration, its OU is too low to be considered a good design. Temperature is also a critical parameter to control and monitor because it directly affects both relative humidity and plant growth. The temperature distribution in the system depends on the inlet/exit location, inlet mass flow rate, and amount of heat. Since the inlet temperature and heat flux conditions are fixed, the exit temperature increases with decreasing inlet mass flow rate. Fig. 11 shows a comparison of the average temperatures of the higher OU cases and the baseline case AB at different inlet mass flow rates. Fig. 12 compares the average RH over each tray between the best OU cases and the baseline case at each inlet mass flow rate.
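The trend that CO2 concentration rises with inlet flow rate, given a fixed photosynthetic consumption rate, follows from a simple steady-state mass balance. The sketch below uses hypothetical numbers and expresses ppm by mass for arithmetic simplicity (concentrations in HVAC practice are usually quoted by volume):

```python
def steady_state_co2_ppm(inlet_ppm, consumption_kg_s, air_flow_kg_s):
    """Steady-state outlet CO2 concentration for a fixed photosynthetic
    consumption rate: the fixed mass of CO2 consumed per second is
    spread over more air at higher flow, so the concentration drop
    shrinks as flow rate grows."""
    drop_ppm = consumption_kg_s / air_flow_kg_s * 1e6
    return inlet_ppm - drop_ppm

# Hypothetical: 1000 ppm inlet (matching the CO2-generator setpoint
# above), 5e-5 kg/s consumed by the crops in total.
low_flow  = steady_state_co2_ppm(1000.0, 5e-5, 0.2)  # larger drop at low flow
high_flow = steady_state_co2_ppm(1000.0, 5e-5, 0.5)  # smaller drop at high flow
```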

Object/mouse side placement was counterbalanced between trials

Sparse metal bars allowed for paw access to the smooth acrylic floor, whereas dense-wire mesh did not. For high-fat diet CPP, animals were given one pellet of standard chow and an isocaloric amount of high-fat food. As high-fat pellets have a different color and consistency, they were also given to home cages the day before pre-conditioning to prevent neophobia. Statistical analyses. Results are expressed as means ± SEM. Significance was determined using two-tailed Student’s t-test, one-way or two-way analysis of variance with Tukey’s post-hoc test, and differences were considered significant if P < 0.05. Analyses were conducted using GraphPad Prism. The use of marijuana is reinforced through activation of the mesolimbic reward circuit. In a related but distinct modulatory process, the neurotransmitter system mediating the effects of marijuana in the brain – the endocannabinoid system – also facilitates the reward of other stimuli, such as food or drugs of abuse. The endocannabinoid system has three main components: two lipid-derived local messengers – 2-arachidonoyl-sn-glycerol and anandamide – enzymes and transporters that mediate their formation and elimination, and receptors that are activated by endocannabinoids and regulate neuronal activity. Genetic and pharmacological studies have unveiled key roles of the CB1 receptor in the modulation of reward signaling. Less is known about the functions served by individual endocannabinoid messengers. In particular, an emerging question is whether endocannabinoids might also regulate the reward of social interactions. We recently demonstrated that anandamide regulates social reward via cooperative signaling with oxytocin, a neuropeptide that is crucial for social bonding. The role of 2-AG remains unknown, however. One way to assess the specific contribution of individual endocannabinoids is to manipulate the enzymes responsible for their formation and deactivation.

For example, pharmacological inhibition or genetic deletion of the enzyme that hydrolyzes anandamide, fatty acid amide hydrolase, markedly increases anandamide activity at CB1 cannabinoid receptors. Analogous strategies exist for 2-AG: in MGL-/- mice, the 2-AG-hydrolyzing enzyme monoacylglycerol lipase is deleted and thus 2-AG levels are elevated, while in DGL-α-/- mice, the 2-AG-synthesizing enzyme diacylglycerol lipase is deleted and thus 2-AG levels are very low. However, the effects of these radical modifications are often difficult to interpret because of the emergence of profound compensatory changes in the brain, such as desensitization of CB1 receptors and elevation in anandamide and arachidonic acid levels. We have recently generated a novel transgenic mouse model – MGL-Tg mice – which selectively overexpress MGL in forebrain neurons under the control of the CaMKIIα promoter. These mutant mice display a forebrain-selective accrual in MGL hydrolyzing activity and a 50-75% decrement in 2-AG content. This reduction in 2-AG is not accompanied by overt changes in levels of other endocannabinoid-related lipids, cannabinoid receptors, or other endocannabinoid-related proteins. To investigate the role of 2-AG in reward-related behaviors, we tested MGL-Tg mice in conditioned place preference paradigms for high-fat food, social, or cocaine stimuli. Based on a rich theoretical framework, CPP assesses the rewarding value of test stimuli by pairing them with neutral environments. Because less is known about endocannabinoid signaling and social behavior, we also investigated the effects of social interaction on 2-AG signaling in reward-related regions of the brain. We hypothesized that MGL-Tg mice are deficient in reward signaling and that rewarding social stimuli drive 2-AG signaling in normal mice. Socially conditioned place preference. Procedures were previously described. Briefly, mice were placed in a two-chambered acrylic box.

A 30-min preconditioning session was used to establish baseline neutral preference to two types of autoclaved, novel bedding. These differed in texture and shade. Individual mice with strong baseline preference for either type of bedding were excluded – typically, those that spent more than 1.5x time on one bedding over the other. The next day, animals were randomly assigned to a social cage with cage-mates to be conditioned to one type of novel bedding for 24 h, then moved to an isolated cage with the other type of bedding for 24 h. On the next day, animals were tested alone for 30 min in the two-chambered box to determine post-conditioning preference for either type of bedding. Bedding volumes were 300 mL in each side of the two-chambered box and 550 mL in the home-cage. Familiar animals from the same cage were tested concurrently in four adjacent, opaque CPP boxes. Between trials, boxes were thoroughly cleaned with SCOE 10X odor eliminator. Scoring was automated using a validated image analysis script in ImageJ. Cocaine and high-fat diet CPP. Procedures were previously described. Briefly, these paradigms were similar to social CPP, including unbiased and counterbalanced design, cleaning and habituation, exclusion criteria, and scoring, except for the following main differences, which followed reported methods. Mice were conditioned and tested in a two-chambered acrylic box. Pre- and post-conditioning tests allowed free access to both chambers and had durations of 15 min and 20 min, respectively. For conditioning, animals underwent 30-min sessions alternating each day between saline/cocaine or standard chow pellet/high-fat pellet. The two chambers offered conditioning environments that differed in floor texture and wall pattern – sparse metal bars on the floor and solid black walls vs. dense-wire-mesh floors and striped walls. Sparse metal bars allowed for paw access to the smooth acrylic floor, whereas dense-wire mesh did not.
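The baseline-preference exclusion rule described above (more than 1.5x time on one bedding over the other) can be expressed as a simple check. The function below is an illustrative sketch with hypothetical times, not the scoring script used in the study:

```python
def exclude_for_baseline_bias(time_bedding_a_s, time_bedding_b_s, max_ratio=1.5):
    """Flag a mouse for exclusion if it spent more than max_ratio times
    as long on one bedding as on the other during pre-conditioning,
    indicating a strong baseline side/bedding bias."""
    longer = max(time_bedding_a_s, time_bedding_b_s)
    shorter = min(time_bedding_a_s, time_bedding_b_s)
    return longer > max_ratio * shorter

keep    = exclude_for_baseline_bias(900.0, 800.0)    # mild bias: kept
exclude = exclude_for_baseline_bias(1300.0, 500.0)   # strong bias: excluded
```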
For high-fat diet CPP, animals were given one pellet of standard chow and an isocaloric amount of high-fat food. As high-fat pellets have a different color and consistency, they were also given to home cages the day before pre-conditioning to prevent neophobia. Intake of high-fat pellets was recorded in free-feeding mice using an automated monitoring system, as described previously. Food intake was measured for two days, and the average intake was normalized to the body weight at the start of feeding. The test was conducted according to established methods.

To mimic the conditions of the social CPP task, mice were first isolated for 30 min and tested in dim-light conditions. Pairs of mice were tested in an open field arena for 5 min. Scoring for social interaction time included behaviors such as sniffing, following, grooming, mounting and crawling over or under. Passive interaction, in which mice were in close proximity but without these interactions, was not included in the scoring. The procedure was previously described, based on an established protocol. Briefly, test mice were first habituated to an empty three-chambered acrylic box, including to the center chamber for 10 min, and then to all chambers for 10 additional min. Mice were then tested for 10 min. Subjects were offered a choice between a novel object and a novel mouse in opposing side chambers. The novel object was an empty inverted pencil cup and the novel social stimulus mouse was a sex-, age- and weight-matched 129/SvImJ mouse. These mice were used because they are relatively inert. They were trained to prevent erratic or aggressive behaviors, such as biting the cup. Weighted cups were placed on top of the pencil cups to prevent climbing. Low lighting was used. The apparatus was thoroughly cleaned with SCOE 10X odor eliminator between trials to preclude olfactory confounders. Chamber time scoring was automated using image analysis. Sniffing time was scored by trained assistants who were unaware of treatment conditions. Outliers in inactivity or side preference were excluded. The procedure was previously described. Briefly, whole brains were collected and flash-frozen in isopentane at -50 to -60 °C. Frozen brains were transferred to -20 °C in a cryostat and kept for 1 h to attain local temperature. The brain was then cut to the desired coronal depth and micropunches from bilateral regions of interest were collected using a 1×1.5-mm puncher. The micropunches weighed approximately 1.75 mg.
A reference micropunch was taken to normalize each punch to the brain’s weight. Bilateral punches were combined for lipid analyses. Procedures were previously described. Briefly, tissue samples were homogenized in methanol containing internal standards for H2-anandamide, H2-oleoylethanolamide and 2H8-2-arachidonoyl-sn-glycerol. Lipids were separated by a modified Folch-Pi method using chloroform/methanol/water and open-bed silica column chromatography. For LC/MS analyses, we used an 1100 liquid chromatography system coupled to a 1946D mass spectrometer detector equipped with an electrospray ionization interface. The column was a ZORBAX Eclipse XDB-C18. We used a gradient elution method as follows: solvent A consisted of water with 0.1% formic acid, and solvent B consisted of acetonitrile with 0.1% formic acid. The separation method used a flow rate of 0.3 mL/min. The gradient was 65% B for 15 min, then increased to 100% B in 1 min and kept at 100% B for 14 min. The column temperature was 15 °C. Under these conditions, Na+ adducts of anandamide/H2-anandamide had retention times of 6.9/6.8 min and m/z of 348/352, OEA/H2-OEA had Rt 12.7/12.6 min and m/z 326/330, and 2-AG/2H8-2-AG had Rt 12.4/12.0 min and m/z 401/409. An isotope-dilution method was used for quantification. MGL-Tg mice eat less chow than do their wild-type littermates. The food intake phenotype, however, does not dissociate the effects of 2-AG signaling on metabolic and reinforcement processes.
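Isotope-dilution quantification generally scales the known internal-standard amount by the analyte-to-standard peak-area ratio, relying on the co-eluting deuterated standard responding like the analyte. The sketch below illustrates the idea with hypothetical values; it is not the laboratory's actual calculation:

```python
def isotope_dilution_amount(analyte_peak_area, istd_peak_area, istd_amount_pmol):
    """Isotope-dilution quantification: analyte amount equals the
    internal-standard amount scaled by the analyte/standard peak-area
    ratio (assumes equal ionization response for the deuterated
    standard and the analyte, the usual premise of the method)."""
    return analyte_peak_area / istd_peak_area * istd_amount_pmol

# Hypothetical: a 2-AG peak with twice the area of the 2H8-2-AG standard
# spiked at 100 pmol implies ~200 pmol of 2-AG in the sample.
amount_pmol = isotope_dilution_amount(2.0e6, 1.0e6, 100.0)
```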

Furthermore, altered feeding can be interpreted as either decreased or increased reward to the food stimulus. To isolate the effects of reduced 2-AG signaling on reward, we tested MGL-Tg mice and their wild-type littermates in a CPP task for high-fat food. In a standard CPP box, mice were conditioned for 30-min sessions to either standard chow or isocaloric high-fat food for 6 sessions each, alternating over 12 days total. In WT mice, we found that this conditioning protocol was sufficient to elicit a preference for the high-fat-paired chamber during post-conditioning testing. Animals spent 137 seconds more in the high-fat chamber compared to the standard-chow chamber. In contrast, MGL-Tg mice did not develop a preference for either chamber. This result suggests that 2-AG signaling is involved in conditioned reward processes of high-fat food. We then asked whether this role for 2-AG signaling could translate to the reward produced by social interaction. We conditioned mice for 24 h with cage-mates in their home-cage to one type of bedding, then we conditioned them for 24 h isolated to another bedding. In the post-conditioning test in a standard CPP box, we found that this conditioning was sufficient to elicit a preference in WT mice for the social bedding. In contrast, MGL-Tg mice did not develop a preference for either bedding. Together with the high-fat-food CPP results, these results suggest that 2-AG signaling may underlie aspects of reward processes common to both natural stimuli. The lack of CPP can be attributed to impairments in the generation and processing of the reward, the consolidation of the memory for the reward, or a combination of these processes. To evaluate whether high-fat food stimuli are generated and processed properly, we measured initial intake of high-fat pellets over 2 days. MGL-Tg mice show a 16% reduction in normalized intake compared to WT littermates over this period.
The combined phenotype of MGL-Tg mice showing a lack of CPP and decreased intake suggests that 2-AG plays a role in the generation and processing of high-fat food reward. Strict interpretation of these results, however, may be complicated by the role of 2-AG in energy metabolism. For this reason, we also examined the direct social activity and the social approach interest of MGL-Tg mice using the social interaction test and the three-chambered social approach test, respectively. These tests differ in two key ways: the social interaction test evaluates interactions that are reciprocal and direct, whereas the social approach test measures approach activity to a stimulus mouse that is sequestered in an inverted wire cup; and the social interaction test uses familiar cage-mates, whereas the social approach test uses a novel mouse as a stimulus. In the social interaction test, we observed that MGL-Tg mice trend toward less interaction time, but this result was not significant. In the social approach test, we found that both MGL-Tg and WT mice preferred the social chamber over the object chamber and sniffed the stimulus mouse more than the object. MGL-Tg mice were similar to WT mice in the amount of time spent in the social chamber and sniffing the stimulus mouse.

Federal and state regulations for anticoagulant rodenticide usage are specific for both generations

In addition, there are stark differences in acute LD50 doses among genera: minute amounts of brodifacoum bait caused death in domestic canids, but domestic felids required doses 5 to 40 times higher. The same variability seen in both mustelids and other carnivores suggests that predicting clinical thresholds for fishers would be premature. Furthermore, AR-exposed fishers had an average of 1.6 AR types within their systems, and possible interaction effects from a combination of 2 or more AR compounds within a fisher and other species are entirely unknown.

Spatial analyses did not reveal any obvious point sources of AR exposure. Instead, these analyses suggested that exposure is widespread across the landscape. Previous studies expected that exposure to AR compounds would be clustered near areas of human activity or habitation and that exposure would not be common outside of these areas. Data from this study refuted this hypothesis, making the finding all the more significant. Furthermore, these exposures occurred within a species that is not closely affiliated with the urban, peri-urban or agricultural settings in which second-generation ARs are typically used. Before the June 2011 Environmental Protection Agency regulations, second-generation ARs could be purchased at local retailers, with recommendations for placement in weather- and tamper-resistant bait containers no more than 50 feet from any building. Since June 2011, however, second-generation ARs have not been available to consumers at retail, but only at agricultural stores with additional form and weight restrictions.

These newly passed regulations are aimed at further restricting irresponsible and illegal use of ARs. However, we would have expected that, under either pre- or post-June 2011 regulations, second-generation AR-exposed fishers would have overlapped with urban, peri-urban or agricultural environments. This pattern is acknowledged in several studies, such as Riley et al., where bobcat and mountain lion total quantification levels of AR exposure were associated with human-developed areas. Numerous studies have documented that secondary poisoning cases are closely associated with recent agricultural or urban pest eradication efforts. The majority of habitat that fishers in California and throughout the DPS currently and historically occupied is not within or near agricultural or urban settings. Several exposed fishers had been monitored their entire lives and inhabited public or community lands where human structures are rare or non-existent. Therefore, exposure from first- or second-generation AR use at or within 50 feet of residential or agricultural structures was considered unlikely, given fisher habitat requirements and the species' general lack of association with humans. This suggests that widespread non-regulated use of second-generation ARs is occurring within the range of fishers in California, especially on public lands. A likely source of AR exposure for fishers is the emerging spread of illegal marijuana cultivation on California public and private lands. In 2008, in California alone, over 3.6 million outdoor marijuana plants were removed from federal and state public lands, including state and national parks, with thousands of pounds of both pesticides and insecticides found at grow sites.
In 2011, a three-week eradication operation removed over 630,000 marijuana plants and 23,316 kg of trash, including 68 kg of pesticides, within the Mendocino National Forest in the northern California fisher population's range. Anticoagulant rodenticides and pesticides are typically dispersed around young marijuana plants to deter herbivory, but significant amounts of AR compounds are also placed along the plastic irrigation lines used to draw water, in order to deter rodent chewing.

A recent example, in which over 2,000 marijuana plants were removed less than 12 km from one of the project areas, revealed large amounts of second-generation AR placed around plants on the peripheral edges as well as along nearby irrigation lines. Finally, within a single eradication effort, multiple kilometers of irrigation line were removed from National Parks and Forests in California. Placement of ARs at grow sites and along irrigation lines, which extend great distances from the grow site itself, may explain why there are no defined clusters of AR exposure. It is noteworthy that the AR fisher mortalities we documented occurred in different areas of their California range but within a relatively short seasonal period, between mid-April and mid-May. We cannot specify the exact explanation or source contributing to all AR mortalities that occurred within this short temporal period. This period is when females are providing for offspring and males are searching for mates; preliminary spatial data for fishers in California document that females have more confined home ranges during this period, while males have slightly larger home ranges. Additionally, several books available to the general public identify mid to late spring as the optimal time for planting marijuana outdoors, when seedlings are especially vulnerable to rodent pests. Of additional concern is that April to May is the denning period for female fishers, a time when fisher kits are entirely dependent on their mothers. The documented mortality of a lactating female attributed to AR toxicosis during this period suggests that kits would most likely be abandoned and die when females are killed at this time.
In conclusion, this study has demonstrated that fishers in the western DPS, which are of conservation concern and a candidate for protection under the Endangered Species Act, are not only being exposed to ARs; ARs are also a cause of both direct and indirect mortality in both of California's isolated populations. Consequently, these toxicants may not only pose a mortality risk to fishers but could also pose significant indirect risks by depleting the rodent prey populations upon which fishers depend.

The lack of spatial clustering of exposed individuals suggests that AR contamination is widespread within this species' range and that illegal or irresponsible use of ARs continues despite recent regulatory changes. Because we do not know the long-term ecological ramifications of toxicants left on site long after marijuana grows are dismantled, heightened efforts should be focused on removing these toxicants from these and adjacent areas at the time of dismantling. Further regulation restricting the use of ARs to pest management professionals, as well as continued public outreach through statewide Integrated Pest Management programs, may be warranted. In addition, promotion of compounds that do not possess the propensity for secondary poisoning should be considered in non-professional use settings. Furthermore, ARs in these habitats may pose equally grave risks to other rare and isolated California carnivores, such as the Sierra Nevada red fox, American marten, wolverine and gray wolf, and to raptors such as northern spotted owls, California spotted owls and great gray owls. Future research should be directed at potential risks to prey populations as well as other sympatric species, to allow a better understanding of the potential AR sources contributing to these exposure and mortality rates.

Edge detection is the first step of human visual perception and is fundamentally important in the human visual system. Edge detection significantly reduces the amount of data to be processed, since it extracts meaningful information and preserves important geometric features. To detect the edges of an object, the object information is processed by either digital computation or analog computation.
In practice, as an optical analog computing element, a spatial differentiator enables massively parallel edge detection over an entire image, which offers advantages over digital computation: it can handle real-time and continuous image processing at high speed and is power-saving in specialized computational tasks. During the past few years, optical metamaterials and metasurfaces have been suggested for performing analog spatial differentiation for edge detection; they show superior integration capability compared with traditional bulky systems comprising lenses and spatial filters. A suitably designed metamaterial structure was theoretically proposed to perform desired mathematical operations, including edge detection, as light propagates through it. A deliberately designed layered structure was also suggested for spatial differentiation of an incident beam reflected from it. Plasmonic dark-field microscopy utilizes near-field surface plasmon waves to excite the object and can also be treated as an efficient approach for edge detection. However, to the best of our knowledge, free-space broadband edge detection has not been reported yet, because either the system can only be applied to surface imaging or the fabrication involved is too complicated. Here, we propose a mechanism to implement an optical spatial differentiator consisting of a designed Pancharatnam–Berry (PB) phase metasurface inserted between two orthogonally aligned linear polarizers. Unlike other spatial differentiator approaches, our method does not depend on complex layered structures or a critical plasmonic coupling condition, but instead is based on spin-to-orbit interactions. Experiments confirm that broadband optical analog computing enables edge detection of an object and achieves tunable resolution at the resultant edges. Furthermore, metasurface orientation-dependent edge detection is also demonstrated experimentally. As shown in Fig.
2A, the sample is made of form-birefringent nanostructured glass slabs. The diameter of the glass substrate is 2.5 cm, its thickness is 3 mm, and the patterned area of the sample is 8 mm by 8 mm. The metasurface pattern is fabricated by a femtosecond pulsed laser inside the glass, 50 μm beneath the surface. Under intense laser irradiation, a plasma of high free-electron density is generated by a multi-photon ionization process. The interference between the plasma and the incident light beam leads to the stripe-like nanostructure as reported. By carefully controlling the polarization of the incident beam, the desired orientation of the nanostructure, which is perpendicular to this polarization, can be obtained. More fabrication details can be found in our previous work. We utilized the polariscopy method to demonstrate the local optical slow-axis orientation of this birefringent structure.

As shown in Fig. 2B, crossed-linear-polarizer imaging under 80× magnification emphasizes the transverse gradient pattern of the optical axis, corresponding to the dotted square area in Fig. 2A. We can clearly see the local orientation of the microscopic structures, i.e., the slow-axis distribution φ(x, y) of the laser-induced form birefringence. The red bars indicate the orientation of the slow axis over one period of the sample. The nanostructures are on the order of 30–100 nm, as indicated by the scanning electron microscope image in Fig. 2B, Inset. The structure dimension is much smaller than the working wavelength, so we can treat it as a birefringent medium with a spatially variant optical slow axis. When a light beam passes through this inhomogeneous birefringent medium, with locally varying optical axis orientations and homogeneous retardation, it acquires a spatially varying PB phase.

The first lens yields the Fourier transform of the object at its back focal plane, which is exactly the position of the metasurface. In turn, the second lens performs another Fourier transform, delivering a duplicate of the object. When the light passes through the 4f system, we obtain two vertically shifted LCP and RCP images, with the overlapping area linearly polarized, as shown in Fig. 3 A–C. The shift between the two images is difficult to see because of the small phase gradient of the metasurface. To block the overlapping area while preserving the circularly polarized edges, we place an analyzer after the metasurface so that only the edges pass through, as displayed in Fig. 3 D–F. The wavelengths are chosen as 430, 500, and 670 nm, which not only confirms the proposed concept of edge detection but also demonstrates its broadband capability. The broadband property of our metasurface originates from the geometric phase of the nanostructure orientation, which is intrinsically independent of wavelength.
Additionally, the transfer function of the whole edge-detection system was experimentally measured and is provided in SI Appendix, Fig. S3; it shows a typical response for the edge-detection function. We also demonstrate the tunable resolution of the edge images corresponding to different PB phase gradient periods Λ. For this experiment, we chose the UCSD Triton insignia as our object. Fig. 4 A–D shows photos of four metasurfaces with Λ equal to 500, 750, 1,000, and 8,000 μm. Fig. 4 E–H shows the corresponding polariscope optical images of the metasurfaces in the first row, with different numbers of periods within the same field of view.
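The mechanism described above, in which the crossed analyzer transmits only the difference of the two oppositely shifted circular-polarization images, so that E_out(x, y) ∝ E(x, y+Δ) − E(x, y−Δ) ≈ 2Δ ∂E/∂y, can be sketched numerically. The snippet below is an illustrative simulation, not the authors' code; a one-pixel array roll stands in for the physical shift Δ set by the phase gradient period Λ.

```python
import numpy as np

def pb_edge_detect(field, shift_px=1):
    """Sketch of the PB-metasurface differentiator: the crossed analyzer
    passes only the difference of the two oppositely shifted LCP/RCP
    images, E_out ~ E(x, y+D) - E(x, y-D), i.e. a y-derivative for small D."""
    lcp = np.roll(field, -shift_px, axis=0)  # LCP image shifted one way
    rcp = np.roll(field, +shift_px, axis=0)  # RCP image shifted the other way
    return 0.5 * (lcp - rcp)                 # overlapping (linear) area cancels

# Uniform square object: flat interior, sharp boundary
obj = np.zeros((64, 64))
obj[16:48, 16:48] = 1.0

intensity = np.abs(pb_edge_detect(obj)) ** 2
# The flat interior cancels; only edges with a component along the shift
# direction survive, consistent with orientation-dependent edge detection.
```

In this picture, a larger Λ means a smaller shift Δ and hence a narrower detected edge, which is one way to understand the tunable resolution of the edge images.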

Absentee owners were also more likely to be concerned that growers were taking over public land

We provided 36 statements corresponding to four themes: community; the environment; changes over time in property values, community safety, community demographics and so on; and grower demographics. Respondents were asked to agree or disagree with the statements using a 5-point Likert scale and were able to provide comments after each subsection. The third section of the survey solicited background information about each respondent. Respondents were asked whether they earned income from timber, ranching or dairying, how long their families had owned the land they worked and whether they were absentees. In addition, we asked landowners if they had been approached about selling their land for cannabis cultivation and if they had next-generation succession plans for the family ranch or timber business. We also asked if landowners knew of nearby cannabis growing.

As indicated previously, all respondents included in our survey owned at least 500 acres of land. Twenty-two percent owned between 500 and 1,000 acres, 51% owned between 1,000 and 5,000 acres and 28% owned more than 5,000 acres. Of the 69 landowners whose responses were included in our results, 63 managed timberland and 56 managed ranchland, meaning that most respondents managed both land types; only one respondent was involved in dairy farming. Forty-six percent of respondents lived on their properties full time, while 20% lived on their properties part time. Thirty-three percent of respondents were absentee landowners. In general, the land represented in the survey had been in respondents' families for a long time: more than 50 years in 81% of cases, 25 to 50 years in another 10%, less than 25 years in 6% and less than 5 years in only 3%.

Fifty percent of respondents reported that their primary income was from traditional forms of agriculture or timber production; no respondents reported cannabis as their primary income source.

Seventy-one percent of landowners reported that they did not grow cannabis on their property, while 18% reported that they did. These percentages, however, are derived only from the 34 of 69 respondents who agreed or disagreed with the statement that they had used their property to grow cannabis. The remaining respondents, half the total, chose not to indicate whether they had grown cannabis, potentially indicating landowners' reluctance to associate themselves with the cannabis industry. About 40% of respondents had indirectly profited from cannabis through off-farm work such as heavy equipment work, trucking and so on. Fifty-seven percent of all respondents agreed or strongly agreed with the statement that “the cannabis industry has negatively affected my livestock operations,” while 27% disagreed with this statement. Over 60% of respondents agreed that cannabis had increased the cost of labor. One comment offered on the cost of labor read: “Property values are inflated by the cannabis industry, hence costing us more for leases and ownership.”

Seventy-five percent of respondents agreed or strongly agreed with the statement that “shared roads have been degraded by cannabis growers,” and 65% agreed that noise pollution has increased due to cannabis growing. Fifty-five percent of respondents agreed that growers increase light pollution, and 71% reported having experienced illegal garbage dumping by cannabis growers on or near their property. Forty percent of landowners disagreed or strongly disagreed with the statement that “I know growers who have values that align with my own.” At the same time, 34% of respondents agreed or strongly agreed with that statement. One respondent added that “[M]onetary impact is obvious.

Cultural and moral impacts are terrible.” Fifty-six percent of respondents agreed or strongly agreed that water sources have been impacted by cannabis growers, while 25% disagreed with this statement. Fifty-six percent also agreed that water had been stolen from their property. Seventy-two percent of respondents had experienced trespassing, while 20% had not. Forty percent of respondents reported that their fencing or infrastructure had been destroyed by cannabis growers, though a similar percentage had not. Fifty percent of landowners reported that neighboring growers had failed to assist with fence maintenance, and 75% of landowners reported having discovered trespass grows on their property. One respondent added that “[Growers’] dogs killed our cattle. My brother confronted a grower in fatigues carrying an assault rifle on our property. [Our] fences have been wrecked, roads damaged, and stream water theft.” Another respondent wrote, “Yes, this is true in the past, but with the pot market collapsing I don’t think this will be a problem in the future.”

Roughly 55% of landowners reported having been threatened by cannabis growers’ dogs, while 24% had not. Forty-six percent of landowners reported that their safety had been threatened by growers. Equal proportions of landowners reported, and did not report, having felt unsafe due to interactions with growers on public lands. Finally, 50% of landowners agreed that growers had committed crimes against them or their property.

Perceptions of cannabis growers were relatively unified among survey respondents. A majority of respondents did not perceive growers as having values similar to their own. The majority of landowners felt that growers had changed how it feels to live in their community, and 77% of landowners expressed concern about the changes that growers are bringing to their community.
More than 80% of respondents were concerned about growers taking over working lands in their communities, and the same percentage were concerned that growers reduce the influence of timber managers and ranchers in the community.

One respondent wrote that “The bottom line is that our family would accept the negative economic impact of eliminating ‘pot’ in return for the elimination of all the negative impacts of the grower culture.” More than 90% of respondents agreed that growers from urban locations do not understand rural land management. Most landowners disagreed that growers are reinvigorating their rural communities or that growers are the only thing keeping their communities going. Eighty-three percent of respondents disagreed with the statement that growers do a good job of policing themselves. Most landowners have not changed their views on cannabis with medical or recreational legalization.

The clear majority of respondents did not think cannabis growers manage timberlands sustainably, and a similar percentage felt the same about ranchlands. Eighty-five percent of respondents regarded cannabis growing as negatively affecting wildlife, and 87% regarded it as negatively affecting stream flow. Eighty-four percent thought cannabis growing leads to soil erosion, and 70% thought it increases fire hazard. Seventy-eight percent believed that cannabis production in ranchlands and timberlands leads to habitat fragmentation, and the same percentage suggested that the economic value of cannabis incentivizes the subdivision of large parcels.

Fifty percent of landowners felt that their property value had increased due to cannabis production, while 40% were neutral on that question. Eighty-three percent of respondents thought that Humboldt County was a safer place before cannabis, and 76% of respondents perceived new cannabis growers as less responsible than cannabis growers who have been in the county for years. About half of respondents believed that increased cannabis legalization will be good for Humboldt County. Fifty-seven percent of respondents were not yet willing to accept that cannabis is a leading industry and that people should support it.
Fifty-four percent of respondents believed that Humboldt County would be better off in the future without cannabis.

Most landowners included in the survey reported having observed changes in grower demographics in the last decade. Most felt that the number of small cannabis growers is decreasing. Sixty-one percent felt that the number of growers connected to organized crime is increasing, and respondents perceived an increasing number of green rush growers in their communities. Most respondents were concerned about organized crime, while only 48% were concerned with green rush growers and 18% with small growers.

Overall, resident and absentee owners expressed similar views on most issues. Of the survey’s 59 statements on experiences and perceptions, statistically significant differences between the two groups appeared for only eight statements. Absentee owners were more likely to report that their surface water resources had been impacted by growers; that their fences or infrastructure had been destroyed by growers; that their safety had been threatened by growers; and that they had been threatened by growers on public land. They were less likely to agree that growers manage timberland sustainably and that cannabis production decreases their property values.
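For group comparisons of this kind, a rank-based two-sample test such as the Mann-Whitney U is a common choice for ordinal Likert responses. The survey article does not specify which test was used, so the sketch below, with invented response data for a single hypothetical statement, is illustrative only.

```python
import math

def mann_whitney_u(a, b):
    """Mann-Whitney U with a normal-approximation two-sided p-value
    (no tie correction); suitable for comparing 5-point Likert responses
    between two independent groups of respondents."""
    combined = list(a) + list(b)
    # Assign average ranks, handling ties
    order = sorted(range(len(combined)), key=lambda i: combined[i])
    rank = [0.0] * len(combined)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and combined[order[j + 1]] == combined[order[i]]:
            j += 1
        for k in range(i, j + 1):
            rank[order[k]] = (i + j) / 2 + 1
        i = j + 1
    n_a, n_b = len(a), len(b)
    u = sum(rank[:n_a]) - n_a * (n_a + 1) / 2
    mu = n_a * n_b / 2
    sigma = math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12)
    z = (u - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, p

# Invented illustrative responses (1 = strongly disagree ... 5 = strongly agree)
resident = [2, 3, 2, 4, 3, 2, 1, 3, 2, 4, 3, 2]
absentee = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
u, p = mann_whitney_u(resident, absentee)
```

With ordinal data and two independent groups, a rank-based test avoids treating Likert categories as interval measurements; a small p-value here would flag a statement on which the two groups differ.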

With this study, we aimed to better understand the experiences and perceptions of traditional agricultural producers: the families who, in most cases for several generations, have made a living off their land, all the while watching changes occur in the social, economic and environmental dynamics that surround cannabis. This survey’s documentation of social tensions may not come as a surprise to those who have lived in Humboldt County. Even after many decades of cannabis cultivation, traditional agricultural producers have not warmed to the people or practices involved in the cannabis industry. Indeed, changes in the social fabric of the cannabis industry have only perpetuated and intensified existing tensions. As this survey shows, concerns about “small growers” are minimal now; those growers have become part of the community, and one-third of respondents agreed that they know growers whose values align with their own. What was novel 40 years ago is now a cultural norm. Today’s concerns center instead on the challenges of current cannabis culture: environmental degradation and the threat of major social and economic change. Respondents mostly agreed that growers today are less reasonable than those who have been in the county for many years. As one respondent wrote, “Growers are a cancer on Humboldt County.” This distrust highlights the challenges that, in rural areas, can often hinder the community-building and mutual assistance mechanisms that isolated communities need. The economic influence of cannabis can be seen throughout the county. As the survey shows, approximately 40% of respondents have been impacted indirectly by the cannabis industry, and some respondents have directly profited through cannabis production themselves.
Interestingly, just over half the respondents chose not to say whether they grow cannabis, hinting at the possibility that, even for traditional agricultural producers, cannabis has presented an opportunity to supplement income and cover the costs of landownership. However, the broader economic growth attributed to the cannabis industry is not always viewed favorably, and a majority of respondents agreed that Humboldt County would be better off in the future without cannabis. Some respondents claimed that the industry has increased the cost of labor and that, in many cases, it can be difficult to find laborers at all because the work force has been absorbed by higher-paying cannabis operations. Likewise, many respondents agreed that land values have increased because of cannabis. But for landowners whose property has been passed down through generations, and who have little intention of selling, increased land values translate into increased taxes and difficulty in expanding operations, both of which can be limiting for families who are often land-rich but cash-poor. One respondent wrote, “Yes, the price of land has gone up… but this is a negative. It increases the inheritance tax burden, and it has become so expensive that my own adult children cannot afford to live here.” In Humboldt County’s unique economic climate, it’s difficult for most landowners to decide whether the opportunities the cannabis industry provides are worth the toll that they believe the industry takes on their culture and community — it’s not a simple story. As one respondent noted, “If I had taken this survey 40 years ago, my response would have been very different. With Humboldt County’s poor economy, everyone is relying on the cannabis industry in one way or another.” Our survey provides an important baseline from which such changing attitudes can be measured. Our results should be seen in the context of larger trends involving population and agricultural land in Humboldt County. 
At the time we were preparing our survey, property records indicated that slightly more than 200 landowners in the county owned at least 500 acres; these individuals made up our survey population. Past research, however, has documented that cannabis was likely grown on over 5,000 distinct parcels in Humboldt County in 2016 .

The iceberg was not a single event but a series of events, so the southerly flow was effectively blocked for a decade

We believe that this occurred in the early 2000s, although it could have started after 1989, when we last visited the structures. We lack information on the most obvious and interesting observations: the mode of reproduction, the settlement biology, and the growth of this interesting sponge. The obvious questions relate to the explanation of the event. We have no knowledge of the actual propagules or the settlement, only of recruitment to a size that can be seen and identified. There are no published descriptions of dispersal propagules of A. joubini, their settlement preferences, or their growth rates. We have seen very small buds that we assume are asexually produced by another hexactinellid, R. antarctica, and we have collected them in the water column in strong currents. Thus, we know that asexually produced buds can move through the water column, where they could in principle be entrained and lifted by strong tidal currents; however, we have not seen R. antarctica or any other hexactinellid besides A. joubini on any of our settling surfaces. To our knowledge, there is no evidence of any Antarctic hexactinellid sponge demonstrating sexual reproduction, although it has been seen elsewhere. In our case, A. joubini propagules must have been abundant, at least around the gangplank on Ross Island and at Explorers Cove, where there was massive recruitment high in the water column. Given the heavy recruitment observed on artificial surfaces well above the seafloor, we suggest that swimming larvae are released episodically. Why is A. joubini recruitment predominantly on artificial surfaces? We have no data to address this interesting question, but we hypothesize that there are more predators on natural substrata and that these predators serve as a strong filter on the survivorship of the propagules, as discussed by Thorson.

Oliver and Slattery offer strong evidence of the efficiency of a microcanopy of carnivorous invertebrates near the gangplank, and Suhr et al. demonstrated that three of the most common foraminifera, especially Astrammina rara, consume metazoa, including planktonic invertebrates, in Explorers Cove. Given this, it is reasonable to speculate that benthic predation filters settling larvae, as discussed by Thorson. Another obvious question relates to the fact that we saw no measurable growth of many naturally occurring A. joubini between 1967 and 1989, yet beginning sometime between then and 2004 they exhibited tremendous growth. With the exception of two small sponges, none of the structures had any A. joubini in 1989. However, in 2004 these structures were photographed with very large sponges that presumably had settled after 1998, and certainly no earlier than 1990, and by 2010 sponges had attained diameters ranging from 7 to 72 cm. Further, the estimated mass of a sponge observed on an artificial substrate at Cape Armitage in 2010 increased about 30% when it was re-photographed in 2012. Clearly, rapid growth rates are possible for A. joubini. What environmental factors were responsible for this sudden growth? The most likely correlate with the growth, if not the settlement, was a probable shift in plankton composition. Typically, the transport of abundant primary production from the north results in a seasonal plankton bloom composed of relatively large phytoplankton. However, in the 2000s a series of large icebergs were grounded, blocking this transport and preventing the annual ice from breaking up and going out until 2011. The icebergs and thick sea ice probably interfered with the advection and growth of the large phytoplankters that usually dominate the water column. Thrush and Cummings and Conlan et al. summarized many populations that were negatively impacted by the lack of advected primary production over this decade.

The dynamics of A. joubini were also correlated with this phenomenon, and we suggest that changes in the plankton may have resulted in a shift from large phytoplankters to tiny dinoflagellates and bacteria. Margalef postulated that such a relationship in water columns results from reduced resources. Sea-ice thickness and transparency affect benthic productivity and ecosystem function. Montes-Hugo et al. described such regional changes in the Western Antarctic Peninsula, suggesting a strong relationship between ice cover and the size of the phytoplankton. Orejas et al. and Thurber discuss the strong relationship between microplankton and Antarctic sponges. Reiswig and Yahel et al., working on other hexactinellid sponges, demonstrated that they retain only very small particles of bacteria and protists. As hexactinellids in general seem restricted to feeding on tiny particles, the shift in plankters may have offered a strong pulse of appropriate food for A. joubini, triggering rapid growth that was previously not observed in this species. Moreover, our observations of relatively fast growth following a shift in the food supply are supported by Kahn et al., who report relatively fast temporal changes in the density of two deepwater hexactinellid sponge species at 4,000 m depth off Monterey, California, USA. These density shifts occur with a lag of 1–2 years following shifts in the supply of the micro-particles the sponges consume. Although A. joubini growing on the gangplank had a broader weight distribution than the same species growing on the floaters in Explorers Cove, we are hesitant to attribute these differences to site location. It is very likely that the individual sponges that fell off the racks and floaters in Explorers Cove were larger than the sponges that remained on these substrata. Therefore, the measurements from these two substrata at Explorers Cove could be skewed toward smaller-sized individuals. We also have preliminary but convincing evidence of A.
joubini mortality.

Although we were not able to relocate all transects in 2010 and therefore may have missed some surviving sponges, at least 67 large A. joubini died in the 40 years of this program with no known survivors. We have no reason to question earlier observations that some mortality results from predation by A. conspicuus and the amphipod S. antarctica. Additionally, Cerrano et al. report patches of diatoms inside A. joubini and speculate that the diatoms had invaded and are detrimental to the sponges. We agree and have seen the amphipod S. antarctica eating patches of the sponge that subsequently are colonized by diatoms. In 2012 we photographed considerable evidence of incipient amphipod infestation on A. joubini at the gangplank; however, the actual mortality sources within this study are not known, and some may reflect ice formation on the sponge that kills the tissue in a patchy manner, later becoming infected with S. antarctica. We emphasize that many of these large A. joubini surely do live longer, and we are only considering sponges in our localized study sites, but this is still a very high mortality rate for a species of sponge thought to be long-lived. Summarizing the A. joubini observations of massive recruitment, growth, and rapid mortality, we suggest that this sponge has a much more dynamic life history than previously suspected. What of the other Hexactinellida in our study sites? We know that R. antarctica grows relatively fast, as this was studied in the 1970s. We observed surprisingly fast growth and asexual reproduction of mature individuals, and we also observed some 40 very small R. antarctica buds increase their volume by as much as two orders of magnitude. This species is by far the dominant sponge in the 25–50 m depth range at McMurdo Station, but it is so inconspicuous that it is extremely difficult to evaluate the population patterns.
Obviously it has the potential to multiply relatively quickly, yet we have no evidence of sufficient mortality to balance the reproduction and growth rates observed. The other common Antarctic hexactinellid is R. nuda/racovitzae. This knobby, volcano-shaped sponge is smaller than A. joubini and remains an enigma with regard to its population dynamics and growth rate. Prior to the removal of the cages in 1977, seven R. racovitzae survived inside cages, while two died inside their cages. Those survivors did not show significant growth during that time period. The mortalities may have resulted from sea star predation or infestation by S. antarctica. Our extensive surveys in 2010 may have come across a few young R. nuda/racovitzae, although they were not collected and we are not sure of their identification. It is interesting to note that Fallon et al. report that a relatively small, 15-cm-diameter specimen from the Ross Sea was approximately 440 years old. Many of the R. racovitzae in our area were at least a meter tall, so this species might attain great age. Rossella fibulata is a rare sponge in the McMurdo Sound area; however, two individuals settled on a rack at Explorers Cove and on a cage at Cape Armitage. It appears to grow rapidly, but otherwise little is known of its biology. In any case, the four hexactinellid species in this shallow habitat certainly have different life history patterns, with the fast turnover of A. joubini being the most surprising. Our observations complement those of Teixidó et al., who report high frequencies of asexual reproductive strategies in three deep-water Hexactinellida, in which 35% of the observed R. nuda were actively budding. In addition, many R. racovitzae exhibited reproduction by fragmentation, while R. vanhoeffeni reproduced by bipartition. Thus, it appears that each of the Antarctic Hexactinellida species exhibits a different life history biology.

In summary, these observations allow us to test and reject the prevailing notion of slow rate processes for both recruitment and growth of A. joubini. The population dynamics imply that A. joubini responds quickly to an environmental shift, but the population increase may be relatively short-lived, and we need to re-evaluate ideas of slow processes and stability over century time scales. These surprising results are set in a time of climate- and fishing-related environmental changes. Certainly these results demonstrate the great importance of comprehensive, long-term data sets designed to better understand such processes. Voucher specimens collected in the 1960s were sent to the Smithsonian Oceanographic Sorting Center and appear to have been lost; however, a collection of specimens is available at the Scripps Invertebrate Collections.

Adolescence is a critical period of development marked by the formation of self-concept and identity, independence from parental guidance, and growth in cognitive and socioemotional skills such as empathy, resilience, and creativity. However, some adolescents also begin to engage in risky behaviors, such as use of tobacco, cannabis, alcohol, and other substances. These behaviors are significant, as they can negatively influence this important developmental period and contribute to a vicious cycle whereby risky behaviors interfere with school engagement and academic performance and vice versa. This negative feedback loop is suggested by Richard Jessor's Theory of Problem Behavior, which proposes that school climate, including the social environment of peers, contributes to adverse adolescent behaviors and outcomes including school disengagement, risky behaviors, and academic failure. These adolescent behaviors in turn influence the school climate, as when groups of students normalize delinquent behaviors, undermining academic engagement more broadly.
This vicious cycle in adolescence can have significant downstream effects in adulthood, potentially affecting educational and socioeconomic opportunities as well as overall health outcomes. While Jessor's theory suggests reciprocal effects between a negative school climate and adolescent risky behaviors, it may also suggest that a positive school climate could create a virtuous cycle of improved academic success, greater school engagement, academically and prosocially supportive peers, and better academic and behavioral outcomes among teens. This is supported by prior literature, which has shown that positive school climate is linked to better academic performance, student well-being, and school engagement, and to lower rates of problem behaviors such as disruptive, antisocial, violent, bullying, or delinquent behavior. Although there is no standardized measure of school climate, several domains have been used to characterize school climate and show predictive potential, among them the institutional environment, student-teacher relationships, and disciplinary styles. However, prior studies have primarily examined only a limited set of school climate variables and adolescent risky behaviors, and most have been limited to cross-sectional designs. As a result, it is still unknown which aspects of school climate might be targeted to improve specific academic or health outcomes. The present study sought to identify and compare associations between school climate measures across multiple domains and multiple downstream health and academic outcomes longitudinally.

The thermal pathways appeared more efficient under the temperature conditions tested

In other words, the e-liquid will be entirely VG well before the e-liquid reservoir is depleted. The predicted percent of e-liquid remaining at full VG enrichment in the model is fairly insensitive to starting volume of the e-liquid but is sensitive to starting PG:VG ratio and temperature, as expected. Thus, a user may be inhaling high relative concentrations of acrolein and other predominant VG products in the aerosol for a significant fraction of the e-liquid cartridge or reservoir lifespan.

The vaping process for e-cigarettes is complex and dynamic, possibly more so than currently appreciated. Coil temperature, puff duration, and PG:VG ratio all significantly affect both the aerosol production and its composition. Most of the mass that was lost from the e-liquid could be accounted for as PG and VG. Furthermore, volatile/semivolatile compounds dominated the total aerosol. Caution should be exercised when collecting particles with dense filter material or with overloaded filters for studying the particle phase, as the semivolatiles can be trapped and interpreted as particulates. In general, the chemical mechanisms for forming carbonyls appear to be well understood and consistent with the numerous insights gained from interpreting the carbonyl mass yield as normalized by aerosol mass. Some exceptions include acetone, for which there may be a radical pathway from VG not currently accounted for, and acetaldehyde, for which there may be a thermal pathway from PG. Importantly, the user's exposure to toxic carbonyls such as acrolein may change during the vaping process, and the user may be exposed to a high relative content of VG and its degradation products as the e-liquid is depleted.
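The progressive VG enrichment described above can be illustrated with a toy preferential-evaporation model. The puff mass, relative-volatility value, and starting masses below are hypothetical placeholders rather than fitted parameters from this work; temperature dependence would enter through the volatility ratio:

```python
def simulate_vg_enrichment(pg0: float, vg0: float, puff_mass: float = 5e-3,
                           alpha: float = 10.0, threshold: float = 0.99):
    """Toy model of preferential PG evaporation during vaping.

    pg0, vg0   -- initial masses (g) of PG and VG (hypothetical values)
    puff_mass  -- total e-liquid mass vaporized per puff (g)
    alpha      -- relative volatility of PG vs. VG (>1 means PG evaporates
                  preferentially; illustrative value, not measured here)
    threshold  -- VG mass fraction at which the residue is called "all VG"

    Returns (puffs_to_full_vg, fraction_of_liquid_remaining_at_that_point).
    """
    pg, vg = pg0, vg0
    total0 = pg0 + vg0
    puffs = 0
    while pg + vg > puff_mass:
        # Vapor composition is weighted toward PG by the volatility ratio.
        x_pg = alpha * pg / (alpha * pg + vg)
        pg = max(pg - puff_mass * x_pg, 0.0)
        vg = max(vg - puff_mass * (1.0 - x_pg), 0.0)
        puffs += 1
        if vg / (pg + vg) >= threshold:
            return puffs, (pg + vg) / total0
    return puffs, 0.0
```

Consistent with the text, the fraction of liquid remaining when the residue becomes essentially pure VG depends strongly on the starting PG:VG ratio but not on the absolute starting volume: scaling `pg0`, `vg0`, and `puff_mass` together leaves the result unchanged.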

These findings support the need for further research into aerosol composition and toxicology as a function of the e-cigarette puffing life cycle, in addition to e-liquid composition, puffing regimen, and vaping device operational conditions.

The unexpected outbreak of e-cigarette or vaping-associated lung injury (EVALI) was reported nationwide starting in September 2019, causing more than 2800 hospitalizations and 60 deaths. The specific biological mechanisms of EVALI, as well as the chemical causes, are still under investigation. Emerging evidence shows that EVALI is associated with vaping tetrahydrocannabinol (THC)-containing e-liquid cartridges that were obtained on the black market. Although adverse health effects of vaping THC cartridges have been found to include abdominal pain, nausea, chest pain, shortness of breath, and acute respiratory distress, they have not to date been fatal. The sudden deaths and hospitalizations from EVALI are, instead, strongly linked to a compound called vitamin E acetate (VEA), the chemically stable esterified form of vitamin E. VEA is thought to be used as a cutting agent in THC cartridges because it has a viscosity similar to THC oil, so that the adulteration is not visually evident. FDA labs confirmed that VEA was present in 81% of THC-containing vaping cartridges confiscated from 93 EVALI patients. VEA was also found in the bronchoalveolar fluid samples from 48 of 51 patients, but not in samples from the healthy comparison control group. The VEA fraction in vaping cartridges confiscated from EVALI patients ranged from 23% to 88%. The interaction of aerosolized VEA with lung surfactant, the toxicity of VEA thermal degradation products, or other components in the vaping aerosol of extracted THC oil have been hypothesized to explain the association of VEA with EVALI.
It should be noted that there is currently not sufficient evidence to rule out the contribution of other diluents, flavoring additives, pesticide residues, or other ingredients found in THC cartridges.

It is also not known whether VEA has a synergistic effect with THC oil components that may lead to EVALI. A limited number of recent research publications has focused on either the physical and chemical properties or the biological effects of the vaping aerosol from VEA. DiPasquale et al. observed that VEA was capable of reducing the elastic properties of pulmonary surfactant and thus causing lung dysfunction by alveolar collapse or atelectasis. Lanzarotta et al. found evidence for hydrogen bonding between VEA and THC in both vaping aerosol and unvaped e-liquid, suggesting they may synergistically cause EVALI. Wu et al. showed that the toxic gas ketene, as well as carcinogenic alkenes and benzene, are generated from the thermal degradation of VEA. Riordan-Short et al. found that pure VEA starts to decompose at an incubation temperature of 240 °C and identified over 40 thermal degradation products at an incubation temperature of 300 °C, 30 of which are carbonyls and acids. However, the experiments of Riordan-Short et al. were done under heated headspace sampling as a surrogate vaping environment, instead of a real vaping environment in an e-cigarette tank with a metal coil, where temperature gradients exist due to localized coil heating. Furthermore, different coil materials and surface areas will have different effects on thermal degradation chemistry. Jiang et al. reported a total of 35 toxic byproducts during the vaping of commonly used diluents including VEA; over 25 of them are carbonyl compounds. Compared to VEA, there is less research available on the vaping chemistry of THC oil extracts and other cannabinoids due to DEA regulations, even though the metabolism of THC has been well studied. Meehan-Atrash et al. hypothesized that THC emits thermal degradation products similar to those of terpenes, given their shared terpenoid backbone; however, terpenes are also found in cannabis plants and can be used as additives in e-liquids, such that the degradation products may be difficult to distinguish from those of THC. It was also found that vaping and dabbing cannabis oil including terpenes may cause exposure to concerning degradants such as methacrolein, benzene, and methyl vinyl ketone.

Adding terpenes to THC oil led to higher levels of gas-phase products compared to vaping THC alone. Since vaping is a complex and dynamic process, a systematic understanding of the chemistry occurring during the vaping process is needed to assess potential factors that may contribute to EVALI, as well as other potential adverse health effects. In this work, a temperature-controlled vaping device with accurate coil temperature measurement was used to vape e-liquids of VEA, extracted THC oil, and their mixture under typical vaping conditions consistent with the CORESTA standard. Gravimetric analysis was used to evaluate the aerosolization efficiency, while high-performance liquid chromatography coupled with high-resolution mass spectrometry was used to characterize thermal degradation products including carbonyl compounds, acids, and cannabinoids using the methods developed by Li et al. Comprehensive thermal degradation mechanisms for THC and VEA are proposed, which could be useful for regulation and further research.

A temperature-controlled third-generation Evolv DNA 75 modular e-cigarette device with a refillable e-liquid tank and single mesh stainless steel coils was used for aerosol generation. The mod enabled variable output voltages with a coil resistance of ~0.12 ohm. Evolv Escribe software was used to customize the power output in order to achieve the desired coil temperature. The coil temperatures were measured by a flexible Kapton-insulated K-type thermocouple in contact with the center of the coil surface and output to a digital readout. The temperature set by the device is not truly representative of the measured coil temperature, as the device flow rate, e-liquid viscosity, and changes in coil resistance will often alter the relationship between applied power and the output coil temperature that drives the chemistry.
The puff duration is 3 s with a flow rate of 1.20 ± 0.05 L/min, quantified by a primary flow calibrator, corresponding to a puff volume of 60 ± 2.5 mL. The puff volume and puff duration selected in this work are consistent with e-cigarette test protocols applied to propylene glycol (PG)/vegetable glycerin (VG) based e-cigarettes. The e-liquids used for vaping in this work are: pure VEA that was used as purchased, extracted THC oil that was commercially obtained from Bio-pharmaceutical Research Company (BRC), and the mixture of the two ingredients. All THC experiments were performed at the BRC facility under an active DEA Schedule 1 license. The composition analysis by gas chromatography of unvaped extracted THC oil showed that the most abundant cannabinoids are Δ9-tetrahydrocannabinol, Δ9-tetrahydrocannabinolic acid, and cannabigerolic acid, while other cannabinoids were identified below 3% of the total peak area. Δ8-THC, which can be observed 0.3 minutes after the Δ9 isomer, was not detected in the mixture. A total of over 50% of the mass in unvaped extracted THC oil remains uncharacterized, but presumably contains terpenoids and potentially other alkanes and alkenes. Three temperatures were chosen for the particle generation, with a temperature measurement deviation of 10 °F. The quantification of carbonyls is only reported at 455 °F. During the sample collection, a total of 10 puffs of aerosol at a frequency of 2 puffs/min were collected for each sample. Carbonyls, acids, and cannabinoids in vaping aerosols, which represent a large portion of expected products, were collected onto 2,4-dinitrophenylhydrazine (DNPH) cartridges for HPLC-HRMS analysis. The consecutive sampling with three DNPH cartridges shows a collection efficiency >98.4% for carbonyl-DNPH adducts in the first cartridge. Excess DNPH is conserved in the cartridge after the collection to maximize collection efficiency.
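The stated puff volume follows directly from the puff duration and flow rate; a quick check of the arithmetic (all values from the text):

```python
def puff_volume_ml(duration_s: float, flow_l_per_min: float) -> float:
    """Puff volume in mL from puff duration (s) and flow rate (L/min)."""
    return flow_l_per_min * 1000.0 * duration_s / 60.0

volume = puff_volume_ml(3.0, 1.20)    # 3 s at 1.20 L/min -> 60 mL per puff
spread = puff_volume_ml(3.0, 0.05)    # +/-0.05 L/min maps to +/-2.5 mL
sample_air_ml = 10 * volume           # 10 puffs per sample -> 600 mL of puffed air
```

The flow-rate uncertainty propagates linearly through the same expression, reproducing the quoted 60 ± 2.5 mL.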

DNPH cartridges were extracted with 2 mL of acetonitrile into autosampler vials and analyzed by HPLC-HRMS. Consecutive extractions of DNPH cartridges for samples confirmed that >97% of both DNPH and its hydrazones were extracted after the first 2 mL volume of acetonitrile. The collection efficiency for cannabinoids is unknown, since only a limited amount of THC oil was available for experiments and not for quality control characterizations. The high-resolution mass data of cannabinoids are only used for identification in this work. Details on the collection method are described elsewhere. Moreover, glass fiber filters were used to collect the particles, as has been done in other e-cigarette studies. The particle mass collected on filters was determined gravimetrically on a microbalance by weighing the filter mass immediately before and after puffing at different experimental conditions. The standard deviation of the gravimetric analysis after triplicate measurements was determined to be ∼20%, mainly due to variations in puffing. The sample collection and analysis were performed in triplicate.

Carbonyl compounds and acids from the thermal degradation of VEA and THC were derivatized by 2,4-DNPH to form carbonyl-DNPH compounds during the collection process. The detailed mechanism and method of identification for each carbonyl were described in previous work. Besides DNPH adducts, HRMS has been proven to be an effective tool for the detection of cannabinoids and their oxidative products, as the phenolic hydroxyl group in cannabinoids can be ionized in both electrospray ionization (ESI) positive and negative modes, while the high mass precision enables the analysis of elemental composition. Negative mode was applied for the detection in this work, as both carbonyl-DNPH adducts and cannabinoids can form negative ions by deprotonation.
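The consecutive-cartridge collection check (>98.4% in the first cartridge) and the consecutive-extraction check (>97% in the first 2 mL) reduce to the same calculation; a minimal sketch using hypothetical stage masses in arbitrary units, not the study's actual data:

```python
def first_stage_fraction(stage_masses):
    """Fraction of the total recovered analyte captured in the first of
    several consecutive collection or extraction stages; serves as a
    lower bound on the efficiency of a single stage."""
    return stage_masses[0] / sum(stage_masses)

# Hypothetical three-stage recoveries (arbitrary units):
cartridge_eff = first_stage_fraction([98.4, 1.1, 0.5])   # ~0.984
extraction_eff = first_stage_fraction([97.5, 2.0, 0.5])  # ~0.975
```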
An external mass calibration was performed using the carbonyl-DNPH standard solution immediately prior to the MS analysis, such that the mass accuracy was adjusted to approximately 1 ppm for standard compounds; the mass calibration was then applied to molecular formula assignment for unknown compounds. All molecular assignments were analyzed by the MIDAS v.3.21 molecular formula calculator. Carbonyl-DNPH adducts and cannabinoids in extract solutions were separated and analyzed using an Agilent 1100 HPLC with a Poroshell EC-C18 column coupled to a linear-trap-quadrupole Orbitrap mass spectrometer with an ESI source at a mass resolving power of ∼60,000 m/Δm at m/z 400. Mobile phases of LC-MS grade water with 0.1% formic acid and acetonitrile were used in the chromatography method. The analytes were eluted over the course of 45 min at 0.27 mL/min with the following gradient program: 40% B, 50% B, 60% B, 80% B, and 40% B. After separation by chromatography, single-ion chromatograms (SICs) of each compound were extracted for the quantification of specific carbonyl compounds based on their calibrated m/z. Formaldehyde, acetaldehyde, acetone, butyraldehyde, valeraldehyde, and hexanal were quantified using the analytical carbonyl-DNPH standards. The SIC peak separation between the isomers butyraldehyde/isobutyraldehyde, valeraldehyde/isovaleraldehyde, and hexanal/4-methylpentanal cannot be achieved, so the concentrations of all isomers were calculated as a total amount. The concentrations of glyoxal, methylglyoxal, and diacetyl were calculated by an estimated ESI sensitivity as described by Li et al.

The thermal degradation of both VEA and THC was observed at the measured coil temperature of 455 ± 10 °F, which is close to the temperature at which VEA started to degrade in the work of Riordan-Short et al.
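The ~1 ppm mass accuracy quoted above is the standard relative mass-error metric; a short sketch (the m/z pair below is illustrative, not a measured value from this work):

```python
def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
    """Mass accuracy expressed in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Illustrative: a 0.0002 Th offset near m/z 209 is just under 1 ppm.
err = ppm_error(209.0318, 209.0316)
```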

All analytes are baseline-separated in the chromatogram using accurate-mass single-ion chromatography.

The mass concentrations of different carbonyls/acids in air were calculated as the total mass of the specific carbonyl/acid determined in the HPLC-HRMS analysis divided by the total volume of air that flowed through the DNPH cartridge during the vaping collection process.

The method reported in this work offers unambiguous identification and a large quantification range for functionalized carbonyl compounds and organic acids. This is useful for studying e-cigarette thermal degradation chemistry, as well as other environmental chemistry topics. A total of nineteen DNPH hydrazones in the e-cigarette aerosol sample were observed: five simple carbonyls, six hydroxycarbonyls, four dicarbonyls, three acids, and one phenolic carbonyl. Hydroxycarbonyls comprised 3 of the top 6 most abundant compounds. Uchiyama et al. recently found that some compounds are emitted purely as gas-phase species, some as purely particulates, and some as both. Both the concentration and phase information are useful for estimation of exposure risk. Much of the chemical identification for DNPH hydrazones can be directly derived from the exact mass of the detected [M-H]- ions alone. As the formation of DNPH hydrazones replaces only one atom, it is straightforward to deduce the original molecular formula of the carbonyl or acid from the hydrazone formula. The chemical structures were confirmed as described in Section 2.3.1. Figure 2.6a shows the total ion chromatogram (TIC) and SICs of selected carbonyl-DNPH compounds, and Figure 2.6b shows the corresponding integrated mass spectrum of the TIC and each SIC. From the TIC, it is clear that e-cigarette aerosol is a complex system which contains a large number of carbonyls/acids.
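The formula bookkeeping for recovering a parent carbonyl or acid from its hydrazone can be sketched as follows. The net derivatization adds DNPH (C6H6N4O4) and eliminates water, so the hydrazone formula exceeds the parent's by C6H4N4O3; this is a sketch under that stoichiometric relationship:

```python
# Net DNPH derivatization: parent + C6H6N4O4 (DNPH) -> hydrazone + H2O,
# so hydrazone formula = parent formula + C6H4N4O3.
DNPH_MINUS_WATER = {"C": 6, "H": 4, "N": 4, "O": 3}

def parent_formula(hydrazone: dict) -> dict:
    """Recover the parent carbonyl/acid formula from its DNPH hydrazone."""
    elements = set(hydrazone) | set(DNPH_MINUS_WATER)
    return {el: hydrazone.get(el, 0) - DNPH_MINUS_WATER.get(el, 0)
            for el in elements}

# Acetaldehyde-DNPH (C8H8N4O4) maps back to acetaldehyde (C2H4O):
acetaldehyde = parent_formula({"C": 8, "H": 8, "N": 4, "O": 4})
```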

Co-elution is common in the TIC; however, the SIC isolates the chromatographic peaks of the desired m/z, avoiding co-elution and misidentification. We also found that acetone-DNPH co-eluted with vanillin-DNPH in the chromatography. This would lead to an overestimation of the abundance of acetone using a chromatography method without HRMS, as vanillin-DNPH is not commercially available. Beyond molecular formulas, it is advantageous to confirm the exact bonding sites of carbonyls and other moieties to give insight into chemical mechanisms and to aid in theoretical calculations of reaction energies, as these calculations are sensitive to structures. The chemical structure of DNPH adducts was identified by their neutral and radical losses in tandem multistage mass spectrometry (MSn) using collision-induced dissociation, which often helps to elucidate the exact carbon location of the moiety-of-interest for small molecules. For example, alcohols adjacent to a beta carbon with an abstractable hydrogen can lose H2O by H-shift rearrangement, while those bonded to aromatic or other non-abstractable sites do not show this loss in the negative ion mode. For nitroaromatics such as DNPH, the electron-withdrawing NO2 groups exert a strong stabilizing effect on anion radicals and facilitate NO2-mediated rearrangements. For small ions like acetaldehyde-DNPH, there is no other reasonable carbonyl structure for the molecular formula, and MSn confirms this structure with the expected fragmentation of CH3NO and CH3CHO. However, there are some ambiguous formulas such as C3H6O3, which may belong to the structural isomers dihydroxyacetone and glyceraldehyde. Both of these hydroxycarbonyls are proposed to exist in e-cigarette aerosol after NMR analysis, but are impossible to distinguish with chromatography as they have the same UV absorption and m/z. With MSn fragmentation, we found that dihydroxyacetone is the main product.

Even though several fragmentation pathways for these isomers are similar (e.g., 269.05 → 239.04), the H2O loss and C2H4O2 loss expected for glyceraldehyde-DNPH were observed to be negligible in the mass spectrum. The preferred formation of dihydroxyacetone over glyceraldehyde supports the radical-mediated oxidation pathways suggested by Diaz et al., as radical abstraction of the H in VG should lead preferentially to a secondary alkyl radical rather than the primary radical. The initiating radicals are suggested to be reactive oxygen species such as the hydroxyl radical, and as such, the degradation products can be described by processes that occur in atmospheric chemistry. Some of the products identified here can be expected from the thermal degradation of PG and VG, which is in agreement with the proposed mechanism, while others are likely to be flavoring additives. A shared product ion after fragmentation of the DNPH hydrazones is C6H3N4O3-, which is the modified DNPH after the O-rearrangement loss of the original carbonyl/acid. Other similar loss pathways are those of the DNPH itself, including loss of HONO, NO2, and NO. There are also distinctive fragmentation pathways for each ion, which are summarized in Table 2.2.

While the process of ionization in ESI is complex, it has been demonstrated that there are key factors influencing the ionization efficiency of different compounds. For example, for the same family of compounds, there is a relationship between the negative-ion electrospray ionization response and the pKa of the dissociation equilibrium HA ⇌ A- + H+, which is directly related to basicity. We calculate the basicity in terms of ΔG of deprotonation (ΔGd), because the deprotonated [M-H]- ion is usually detected in the ESI negative mode.
Our calculations of the electrostatic potential maps of carbonyl-DNPH hydrazones show that they have a primary acidic proton; thus, they are excellent candidates for which gas-phase basicity can be used to parameterize ionization efficiency in the ESI negative mode.

We emphasize that the theoretical chemistry results in this work only provide a relative indication of sensitivity, not absolute calibration factors, and only for the same family of compounds that are protonated or deprotonated. The relative theoretical sensitivities are then anchored by absolute ESI calibrations for the carbonyl-DNPH compounds for which standards are commercially available.

The trend between ΔGd and ESI sensitivity arises from the intrinsic relationship between deprotonation efficiency and the ability of the aromatic product ion to stabilize the negative charge initially formed on the N atom. Acrolein is the most sensitive compound in ESI negative mode because it has conjugated double bonds, i.e., additional pi orbitals over which the negative charge can be delocalized. Also, ketones have lower sensitivities than aldehydes because the electron-donating groups on both sides of the C=N bond slightly destabilize the negative ions. A limitation of this model occurs for compounds that have similar ΔGd. In this situation, other factors like molecular volume and polarity may also play an important role. Despite the limitations, this method is applicable to the compounds found in e-cigarette aerosol and enables the first estimation of concentrations for complex carbonyls that have not yet been quantified with acceptable uncertainty. Furthermore, this computational technique offers an advantage over the time expenditure, costs, and chemical usage of synthesizing standards. The calculated concentrations of e-cigarette constituents characterized in this work are shown in Table 2.2 as mass per volume or mass per ten puffs analyzed. The most abundant compounds in the blu e-cigarette aerosol for our study conditions are hydroxyacetone, formaldehyde, acetaldehyde, lactaldehyde, acrolein, and dihydroxyacetone.
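The anchoring step described above, scaling theory-derived relative sensitivities against one authentic standard, can be sketched as follows; the compound list and all numbers are illustrative placeholders, not values from Table 2.2:

```python
def anchor_sensitivities(relative: dict, anchor: str, anchor_abs: float) -> dict:
    """Scale relative (theory-derived) ESI sensitivities to absolute values
    using one compound for which an authentic calibration standard exists."""
    scale = anchor_abs / relative[anchor]
    return {compound: sens * scale for compound, sens in relative.items()}

# Illustrative relative sensitivities following the qualitative ΔGd trend
# (acrolein highest, ketones below aldehydes); not measured values:
relative = {"acetaldehyde": 1.0, "acrolein": 1.8, "acetone": 0.7}
absolute = anchor_sensitivities(relative, "acetaldehyde", 2.5e5)
```

A concentration estimate for an uncalibrated compound would then divide its measured response by its anchored sensitivity, with the stated caveat that this holds only within the same compound family.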
While, within uncertainty, the exact order of abundance is not definitive, it is clear that hydroxycarbonyls are just as important as simple carbonyls to the composition of the e-cigarette aerosol. Hydroxyacetone has been found to be a major, sometimes dominant, emission in other e-cigarette brands and e-liquids, as quantified by gas chromatography. The agreement on the high abundance of hydroxyacetone lends support to the theoretical approach in this work, which enables all carbonyls and acids to be quantified by the same method. The high abundance of hydroxyacetone may be due to its multiple formation pathways in Scheme 2.2 and its possible role as an impurity in e-liquid; e.g., Sleiman et al. found hydroxyacetone in concentrations of <1% of the sum of PG and VG in the e-liquids they used. We were not able to test the e-liquid in this work due to the cartridge design and thus are unable to comment on the extent of hydroxyacetone impurity in the e-liquid, if present. Dihydroxyacetone and lactaldehyde, in contrast, had not been regarded as major e-cigarette emissions until their unambiguous identification in this work. Their formation pathways from PG and VG are highly feasible, so their higher abundance is not unexpected. It is not clear why these compounds have not been reported earlier; we suspect analytical challenges may be a reason. As we discussed previously, lactaldehyde-DNPH co-eluted with formaldehyde-DNPH in the TIC. Thus, HPLC-UV, one of the most frequently used instruments for studying carbonyl compounds in e-cigarette aerosol, will not be able to identify and quantify lactaldehyde. However, the HPLC-HRMS method overcomes co-elution challenges by distinguishing compounds based on their exact mass from the SIC and mass fragmentation patterns.
Dihydroxyacetone-DNPH appeared to be baseline-separated in HPLC-UV, with a retention time slightly shorter than that of DNPH itself; however, its unambiguous identification is not possible without HRMS and/or authentic standards. Furthermore, both of these compounds are quite polar and thus not conventionally compatible with gas chromatography.

A comparison of the absolute emission concentrations of thermal degradation products between studies is not straightforward, even for the same brand of e-cigarettes, as the puffing regimens and apparatus of reported works are all different and individual puffing parameters have non-linear effects on the thermal degradation chemistry. Klager et al. also reported high variability of carbonyl concentrations for the same brand, puffing regimen, and flavor, suggesting that the factors driving the thermal degradation chemistry are not yet fully understood. Our work should be primarily viewed as a demonstration of a new method for the chemical characterization of our specific e-cigarette model at the stated puffing conditions, with noted insights into the thermal degradation mechanism. Formaldehyde, acetaldehyde, and acrolein are known to produce pathological and physiological effects on the respiratory tract. They are known to cause sensory irritation, inflammation, and changes in pulmonary function; formaldehyde is also carcinogenic. The average daily dose of aldehydes can be calculated as the amount of aldehydes per puff multiplied by the average number of puffs a user inhales per day. For example, the median number of puffs per day for e-cigarette users can be assumed to be 250, so the average daily exposure dose of formaldehyde is 37.5 µg/day for this e-cigarette device, e-liquid, and operating conditions. The California Office of Environmental Health Hazard Assessment (OEHHA) chronic Reference Exposure Level for formaldehyde is 9 µg/m3, which can be translated to an acceptable daily dose of 180 µg/day and is higher than the e-cigarette aerosol exposure for formaldehyde in this work. In addition, OEHHA has a No Significant Risk Level (NSRL) recommendation of 40 µg/day, which is intended to protect against cancer; this NSRL is close to the exposure dose of formaldehyde in this work.
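The daily-dose arithmetic in this paragraph is straightforward to reproduce; the 1.5 µg/10 puffs figure below is back-calculated from the reported 37.5 µg/day at 250 puffs/day, not an independently measured value:

```python
def daily_dose_ug(emission_ug_per_10_puffs: float,
                  puffs_per_day: float = 250) -> float:
    """Average daily inhaled dose (ug/day) from a per-10-puff emission."""
    return emission_ug_per_10_puffs / 10.0 * puffs_per_day

# Formaldehyde (back-calculated ~1.5 ug per 10 puffs) at 250 puffs/day:
formaldehyde_dose = daily_dose_ug(1.5)   # -> 37.5 ug/day
OEHHA_NSRL_UG_PER_DAY = 40.0             # cancer-based No Significant Risk Level
below_nsrl = formaldehyde_dose < OEHHA_NSRL_UG_PER_DAY
```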
The average exposure dose of acrolein for e-cigarettes is 15.2 µg/day according to Table 2.2, which is higher than the OEHHA chREL value. Logue et al. used a similar approach to estimate health impacts and found that both formaldehyde and acrolein can exceed maximum daily doses derived from occupational health guidelines. Differences in results are likely due to the different devices, e-liquids, and puffing regimens used.

While the reported emissions in this work may not generalize to all e-cigarettes and use scenarios, it is informative to compare the aldehyde emissions normalized by nicotine, since e-cigarette users transitioning from traditional tobacco products will self-titrate nicotine intake when using e-cigarette products. In this work, the nicotine yield is 10.4 ± 1.9 µg/10 puffs. We did not observe evidence of nicotine oxidation under the puffing conditions of this work; such oxidation would otherwise affect this ratio. The formaldehyde/nicotine ratio is 144 ± 32 µg/mg nicotine, which is 4 times higher than the formaldehyde/nicotine ratio in combustible cigarettes. The acrolein/nicotine ratio measured in this work is close to that of tobacco products, while the acetaldehyde/nicotine and propionaldehyde/nicotine ratios are lower than those in combustible cigarettes. Logue et al. observed similar trends using different e-cigarette products; however, their results were not normalized by nicotine, so a direct comparison is not possible. Thus, we find that e-cigarettes do not necessarily emit carbonyl compounds at lower levels than tobacco products, although the comparison may change with the specific e-cigarette or tobacco product and with different puffing/smoking regimens.

Although hydroxycarbonyls are abundant in e-cigarette aerosol, a general lack of toxicological data precludes health risk assessment. Smith et al. found that exogenous exposure to dihydroxyacetone is cytotoxic and causes cell death by apoptosis. Glycolaldehyde is also suspected to have biological toxicity.
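The nicotine normalization above can be checked with a short calculation. This is a sketch of the arithmetic only; the formaldehyde yield of 1.5 µg/10 puffs is an assumed value back-calculated from the reported 144 µg/mg ratio and the measured nicotine yield, not a number stated directly in the text.

```python
# Nicotine-normalized formaldehyde emission ratio.
nicotine_ug_per_10puffs = 10.4       # measured nicotine yield (this work)
formaldehyde_ug_per_10puffs = 1.5    # back-calculated from the 144 ug/mg ratio (assumption)

# Convert nicotine to mg so the ratio comes out in ug formaldehyde per mg nicotine.
ratio = formaldehyde_ug_per_10puffs / (nicotine_ug_per_10puffs / 1000.0)
print(f"{ratio:.0f} ug formaldehyde / mg nicotine")  # 144
```

Normalizing by nicotine in this way is what makes cross-product comparisons meaningful, since users are expected to titrate to a target nicotine intake rather than a fixed puff count.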
For hydroxyacetone and lactaldehyde, toxicology data are currently unavailable in major toxicology databases such as the Hazardous Substances Data Bank, the European Chemicals Agency database, and the Research Institute of Fragrance Materials database.