
The possible significance of this process for hippocampal network activity is discussed in sections IV and VC.

There are two possible routes of 2-AG biosynthesis in neurons, which are illustrated in Figure 3. Phospholipase C (PLC)-mediated hydrolysis of membrane phospholipids may produce DAG, which may be subsequently converted to 2-AG by diacylglycerol lipase (DGL) activity. Alternatively, phospholipase A1 may generate a lysophospholipid, which may be hydrolyzed to 2-AG by a lyso-PLC activity. In the intestine, where 2-AG was originally identified, this compound accumulates during the digestion of dietary triglycerides and phospholipids, catalyzed by pancreatic lipases. The fact that various, structurally distinct inhibitors of PLC and DGL activities prevent 2-AG formation in cultures of cortical neurons indicates that the PLC/DGL pathway may play a primary role in this process. The molecular identity of the enzymes involved remains undefined, although the purification of rat brain DGL has been reported. As first suggested by experiments with acutely dissected hippocampal slices, neural activity may evoke 2-AG biosynthesis in neurons by elevating intracellular Ca2+ levels. In the hippocampal slice preparation, electrical stimulation of the Schaffer collaterals produces a fourfold increase in 2-AG formation, which is prevented by the Na+ channel blocker tetrodotoxin or by removing Ca2+ from the medium. Notably, the local concentrations reached by 2-AG after stimulation are in the low micromolar range, which should be sufficient to activate the dense population of CB1 receptors present on axon terminals of hippocampal GABAergic interneurons. In addition to neural activity, certain neurotransmitter receptors also may be linked to 2-AG formation. For example, in primary cultures of cortical neurons, glutamate stimulates 2-AG synthesis by allowing the entry of Ca2+ through activated N-methyl-D-aspartate (NMDA) receptor channels.

Interestingly, this response is strongly enhanced by the cholinergic agonist carbachol, which has no effect on 2-AG formation when applied alone. The molecular basis of the synergistic interaction between NMDA and carbachol is unclear at present but deserves further investigation in light of the potential roles of 2-AG in hippocampal retrograde signaling. The anandamide precursor N-arachidonoyl PE belongs to a family of N-acylated PE derivatives, which contain different saturated or unsaturated fatty acids linked to their ethanolamine moieties and give rise to the corresponding fatty acid ethanolamides (FAEs). These compounds generally lack CB1 receptor-binding activity but display a number of remarkable effects and possible biological functions. In this regard, two FAEs have been studied in some detail, palmitoylethanolamide (PEA) and oleoylethanolamide (OEA). PEA exerts profound analgesic and anti-inflammatory effects in vivo, which have been attributed to its ability to interact with a putative receptor site sensitive to the CB2-preferring antagonist SR144528. The molecular identity of this site is unknown, although it is probably distinct from the CB2 receptor whose gene has been cloned. PEA is present at high levels in skin and other tissues where, together with locally produced anandamide, it may participate in the peripheral control of pain initiation. Despite its chemical similarity with PEA, OEA shows weak analgesic properties but exerts potent appetite-suppressing effects in the rat. Because these effects are prevented by sensory deafferentation, and intestinal OEA biosynthesis is linked to the feeding state, it has been suggested that OEA may be involved in the peripheral regulation of feeding. A series of close structural analogs of anandamide with activity at cannabinoid receptors have been isolated from brain tissue.
These compounds, which include eicosatrienoylethanolamide and docosatetraenoylethanolamide , may be generated through the same enzymatic route as anandamide, albeit in smaller quantities.

Distinct from these polyunsaturated ethanolamides, as well as from 2-AG, are two recently discovered brain lipids: 2-arachidonoyl glyceryl ether (noladin ether) and O-arachidonoyl ethanolamine (virodhamine). Noladin ether was isolated from porcine brain and identified by using a combination of mass spectrometry, nuclear magnetic resonance, and chemical synthesis. The compound binds to CB1 receptors with high affinity in vitro (dissociation constant 21 nM) and produces cannabinoid-like effects in the mouse in vivo, including sedation, immobility, hypothermia, and antinociception. Virodhamine was identified in rat brain by mass spectrometry and chemical synthesis and shown to weakly activate CB1 receptors in a guanosine 5′-O-(3-[35S]thiotriphosphate) ([35S]GTPγS) binding assay, in which the compound also displayed partial agonist activity. Moreover, virodhamine decreases body temperature in the mouse, although less effectively than anandamide, and inhibits anandamide transport in RBL-2H3 cells. A possible confounding factor in these studies is, however, the chemical instability of virodhamine, which in an aqueous environment is rapidly converted to anandamide. The formation and inactivation of these molecules, as well as their physiological significance, are the subject of ongoing investigations. Both anandamide and 2-AG may be generated by and released from neurons through a mechanism that does not require vesicular secretion. However, unlike classical or peptide neurotransmitters, which readily diffuse across the synaptic cleft, anandamide and 2-AG are hydrophobic molecules and, as such, are constrained in their movements through the aqueous environment surrounding cells. How may these compounds reach their receptors on neighboring neurons? Experiments with bacterial PLD suggest that, in cortical neurons, 40% of the anandamide precursor N-arachidonoyl PE is localized to the cell surface, which also contains 2-AG precursors such as phosphatidylinositol phosphate and bisphosphate.
This suggests that both endocannabinoids may be generated in the plasmalemma, where they are ideally poised to access the external medium. As with other lipid compounds, the actual release step may be mediated by passive diffusion and/or facilitated by the presence of lipid-binding proteins such as the lipocalins.

The existence of different routes for the synthesis of anandamide and 2-AG suggests that these two endocannabinoids could in principle operate independently of each other. This idea is supported by three main findings. First, electrical stimulation of hippocampal slices increases the levels of 2-AG, but not those of anandamide. Second, activation of dopamine D2 receptors in the striatum enhances the release of anandamide, but not that of 2-AG. Third, activation of NMDA receptors in cortical neurons in culture increases 2-AG levels but has no effect on anandamide formation, which requires instead the simultaneous activation of NMDA and α7 nicotinic receptors. It is unclear at present whether these differences reflect regional segregation of the PLC/DGL and PLD/NAT pathways, the existence of receptor-activated mechanisms linked to the generation of specific endocannabinoids, or both. Carrier-mediated uptake into nerve endings and glia, probably the most frequent mechanism of neurotransmitter inactivation, is also involved in the clearance of lipid messengers. This idea may appear at first counterintuitive: why should a lipid molecule need a carrier protein to cross plasma membranes when it could do so by passive diffusion? A large body of evidence indicates, however, that even very simple lipids such as fatty acids are transported into cells by protein carriers, several families of which have now been molecularly characterized. Indeed, carrier-mediated transport may provide a rapid and selective means of delivering lipid molecules to specific cellular compartments. Thus it is not surprising that neural cells might adopt the same strategy to interrupt lipid-mediated signaling. Anandamide transport differs from that of amine and amino acid transmitters in that it does not require cellular energy or external Na+, implying that it may be mediated through facilitated diffusion.
Because anandamide is rapidly hydrolyzed within cells, it is reasonable to hypothesize that intracellular breakdown contributes to the rate of anandamide transport. Accordingly, HeLa cells that overexpress the anandamide-hydrolyzing enzyme FAAH also display higher than normal rates of [3H]anandamide accumulation. However, in primary cultures of rat neurons and astrocytes or in adult rat brain slices, FAAH inhibitors have no effect on [3H]anandamide transport at concentrations that completely abrogate [3H]anandamide hydrolysis. From these results it is reasonable to conclude that anandamide transport in the CNS is largely independent of intracellular hydrolysis.

Whether persistent disruption of FAAH activity may eventually change the distribution of anandamide between intracellular and extracellular pools is an interesting question that warrants examination. The substrate selectivity of anandamide transport has been investigated in rat cortical neurons and astrocytes and, more systematically, in human astrocytoma cells. In the latter model, [3H]anandamide uptake is not affected by a variety of lipids that bear close structural resemblance to anandamide, including arachidonic acid, PEA, ceramide, prostaglandins, leukotrienes, hydroxyeicosatetraenoic acids, and epoxyeicosatetraenoic acids. Furthermore, [3H]anandamide accumulation in these cells is insensitive to substrates or inhibitors of fatty acid transport, organic anion transport, and P-glycoproteins. However, [3H]anandamide uptake is competitively blocked by nonradioactive anandamide and by the anandamide analog AM404 (N-(4-hydroxyphenyl)arachidonamide). A similar sensitivity to AM404 has been reported for rat cortical and cerebellar neurons, rat cortical astrocytes, and rat brain slices. Inhibitory effects of AM404 on anandamide accumulation also have been observed in a number of nonneural cells, although the concentrations of AM404 needed to produce such effects are generally higher than in neurons. Together, these data are consistent with the view that anandamide is internalized by neurons and astrocytes through a selective process of facilitated diffusion. The molecular identity of the protein responsible for this process is, however, unknown. Anandamide and 2-AG share three common structural features: 1) a highly hydrophobic fatty acid chain, 2) an amide or an ester moiety, and 3) a polar head group. Systematic modifications in the hydrophobic carbon chain indicate that the structural requisites for substrate recognition by the putative anandamide transporter may be different from those of substrate translocation.
Substrate recognition may require the presence of at least one cis double bond in the middle of the fatty acid chain, indicating a preference for substrates whose fatty acid chain can adopt a bent U-shaped conformation. In contrast, a minimum of four cis nonconjugated double bonds may be required for translocation, suggesting that a closed “hairpin” conformation is required in order for substrates to be moved across the membrane. Molecular modeling studies show that transport substrates have both extended and hairpin low-energy conformers. In contrast, extended but not hairpin conformations may be thermodynamically favored in pseudo-substrates such as oleoylethanolamide, which displace [3H]anandamide from transport without being internalized. The effects of head group modifications on anandamide transport have also been investigated. The results suggest that ligand recognition may be maintained when the head group is removed or replaced with substantially bulkier moieties, and when an ester bond substitutes for the amide bond. Notably, ligand recognition appears to be favored by replacing the ethanolamine group with a substituted hydroxyphenyl group (as in AM404) or a furan group. Biochemical experiments have demonstrated the existence of anandamide transport in primary cultures of rat cortical neurons and astrocytes, as well as rat cerebellar granule cells. But which brain regions express the transporter is still unclear, primarily because of the lack of molecular understanding of the transporter involved in this process. In one study, the CNS distribution of anandamide transport was investigated by exposing metabolically active rat brain slices to [14C]anandamide and measuring the distribution of radioactivity by autoradiography.
The CB1 antagonist SR141716A was included in the incubation medium to prevent binding of [14C]anandamide to CB1 receptors, and AM404 was used to differentiate transport-mediated [14C]anandamide accumulation from nonspecific association with cell membranes and cell debris. These experiments suggest that the somatosensory, motor, and limbic areas of the cortex, as well as the striatum, contain substantial levels of AM404-sensitive [14C]anandamide uptake. Other brain regions showing detectable transport include the hippocampus, the amygdala, the septum, the thalamus, the substantia nigra, and the hypothalamus. Although a variety of compounds have been shown to inhibit anandamide transport, the anandamide analog AM404 remains a standard of reference, mainly because of its relatively high potency and its ability to block anandamide transport both in vitro and in vivo. AM404 inhibits [3H]anandamide uptake in rat brain neurons and astrocytes, human astrocytoma cells, rat brain slices, and a variety of nonneural cell types. The inhibitor also enhances several CB1 receptor-mediated effects of anandamide, without directly activating cannabinoid receptors. For example, AM404 increases anandamide-evoked inhibition of adenylyl cyclase activity in cortical neurons, augments the presynaptic inhibition of GABA release produced by anandamide in the midbrain periaqueductal gray, and mimics the effects of cannabinoid agonists on hippocampal depolarization-induced suppression of inhibition.

There are several alternative psychometric measurement models that can operationalize a latent construct.

These manuals provide an evolving checklist of possible indicators of drug abuse and/or drug dependence, some subset of which will trigger a distinct categorical diagnosis. Because these diagnoses are seen as consequential for clinicians, clients, treatment facilities, third-party payers, and for the development of addiction science, the years preceding each revision always see a lively and vigorous debate among experts about which indicators of substance abuse and dependence – e.g., withdrawal, tolerance, cravings, legal problems – do or don’t belong in the checklist. To an outside observer, the process can appear chaotic and as political as it is scientific. But somehow, the resulting checklist seems to have a noteworthy psychometric property. Using popular psychometric methods, it has been argued that the DSM diagnostic criteria for substance dependence or a substance use disorder form a unidimensional scale – implying that they are tapping a single, coherent latent construct, either “substance abuse,” “substance dependence,” or in the newest iteration, the combined construct “substance use disorder” [e.g., ]. But there is something odd about this. If indeed the DSM criteria form a unidimensional construct, then there should be little reason to spend years debating specific items to include in the construct. Under the measurement model that characterizes most psychometric analyses of DSM data, these indicators should be roughly interchangeable, in the same way that different items on an attitude scale, vocabulary test, or personality trait inventory tap different manifestations of the same underlying construct. And the corollary observation is that if the criteria that get debated – withdrawal, tolerance, craving, and the like – are indeed conceptually and empirically distinct, then the evidence for the unidimensionality of the DSM criteria is perhaps puzzling or even troubling, rather than reassuring.

This essay does not contend that the DSM diagnostic criteria are foolish or meaningless, or that adopting them was a serious mistake by some criterion of harm to patients. Rather, I argue that there is confusion about the underlying structure of the DSM substance-related diagnostic criteria, and that greater clarity might promote the development of better science, better practice, and better inputs to management and policy making. These are analytic issues that deserve attention in the coming decade, in anticipation of the eventual next iteration, DSM-6. What would it mean for a list of such criteria to constitute a unidimensional latent construct? The candidate measurement models quite literally imply different metaphysical assumptions – ontologically, what construct exists, and epistemologically, how do we identify it? – but also different mathematical definitions. The discussion that follows gets slightly technical and requires a few simple equations, but to keep things simple I assume there is only one latent construct and that the terms in the model have unit weights. Traditional factor-analytic models are usually specified mathematically as a set of structural equations of the form Xi = F + ei, where each Xi is one of i observed or “manifest” variables, and F is the underlying latent construct thought to cause each Xi to take on its observed values [e.g., ]. Importantly, the ei terms reflect any idiosyncratic variance associated with the observed variables but not caused by the underlying latent construct of interest. This has an important implication: if any two observed variables share a common latent factor, it is assumed that these variables share nothing systematic in common other than that factor – they are “conditionally independent” unless the default assumption of uncorrelated error terms is explicitly overridden. Any model with these features is now commonly referred to as a “reflective model”.
The reflective model is a method of constructing unidimensional composite scales and justifying their interpretation as such.
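To make the conditional-independence assumption concrete, the following sketch (illustrative Python, not part of any published DSM analysis; all values are arbitrary) simulates a reflective model with unit weights. Because each indicator is the common factor plus independent noise, every pair of indicators ends up correlating at roughly the same value, which is what makes the items look interchangeable:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 5                       # simulated respondents, observed indicators

# Reflective model with unit weights: Xi = F + ei, errors independent.
F = rng.normal(0.0, 1.0, n)            # latent construct
E = rng.normal(0.0, 1.0, (k, n))       # idiosyncratic error terms
X = F + E                              # row i holds indicator Xi

# Conditional independence implies every pair of indicators correlates at
# about var(F) / (var(F) + var(e)) = 0.5 here: the items are interchangeable.
R = np.corrcoef(X)
off_diag = R[~np.eye(k, dtype=bool)]
print(off_diag.round(2))
```

Dropping any one indicator and substituting another drawn the same way leaves the correlation structure essentially unchanged, which is the domain sampling intuition discussed next.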

The most common theoretical justification for this interpretation is the domain sampling assumption: the observed variables we retain as indicators of the latent construct are essentially interchangeable exemplars sampled arbitrarily from a much larger domain of possible expressions of the construct. “The model of domain sampling conceives of a trait as being a group of behaviors all of which have some property in common. . . . If the sample [of indicators] we draw from the domain is representative, then its statistical characteristics are the same as those of the total domain” [, p. 211–212]. Specifically, in expectation any sufficiently large random sample of indicators from the domain should yield the same average value, and the same correlations among indicators. This notion of sampling is of course hypothetical, not literal, and that creates an important conceptual twist: “Instead of specifying a population of some set of entities and then drawing a sample randomly from it, . . . we have a sample in hand that in turn implies a population . . . having the same characteristics as the sample” [, p. 214]. Most of the published psychometric analyses of DSM criteria that I have examined adopt the reflective model of factor analysis, without explicit justification. But this creates an unacknowledged conceptual puzzle: according to that model, any differences between two criteria – say, withdrawal symptoms vs. interference with important activities – are simply part of the error structure of the model, rather than part of the construct itself or its composite score. In other words, the distinctive features of each criterion that form the basis for expert debates about DSM construction are actually irrelevant to the model. Under the domain sampling assumption, there should be relatively little to argue about; we can inductively generate large sets of candidate criteria and simply cull out the ones that don’t “load” on the common factor.
I very much doubt this is how most DSM experts view the diagnostic criterion list, yet this is how the analyses treat it. There is a less familiar alternative way of specifying a latent factor model – the “formative model” [Figure 1B; see Ref. ]. This model is superficially similar – it consists of the same observed indicator variables, plus one or more latent factors, and an error term.

But the assumptions are quite different. In a formative model, the latent factor does not cause the observed variables; rather, they cause – or more accurately, “constitute” – the latent factor. Mathematically, the model would be represented by an equation of the form F = ΣXi + e, but F is now the dependent variable, and there is a single error term for the factor rather than error terms for each observed variable. That means that anything distinctive or idiosyncratic that distinguishes two observed variables – say, withdrawal symptoms vs. interference with important activities – is part of the construct and its measurement. As a result, formative models are not assumed to be “unidimensional,” and indeed, some heterogeneity among the criteria is seen as desirable. Formative models are not inductive, at least not in the sense that a latent construct emerges from the observed variance of a reflective model. Rather, formative models are a form of “measurement by fiat.” The analyst, or some other authority, decrees that certain observable criteria will collectively constitute what the latent construct actually means. An example is professional accreditation. A formative model seems to better capture the way many psychiatrists actually debate the DSM criteria, and it also better characterizes the actual decision process – organizational fiat – that determines which criteria are included vs. excluded. But a growing number of simulation studies show that when data actually have a formative structure, fitting them using reflective models can lead to significantly biased and misleading estimates of model fit and factor scores [e.g., ]. Whether this helps to explain the puzzle I noted earlier – high unidimensionality despite what are surely conceptually distinct DSM criteria – would probably require focused re-analysis of major DSM data sets.
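The contrast with the reflective case can be sketched in the same hypothetical terms (illustrative Python; all values arbitrary). Here the five simulated indicators are deliberately uncorrelated, standing in for conceptually distinct criteria such as withdrawal vs. role interference, yet the construct F = ΣXi + e is perfectly well defined:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 10_000, 5                       # simulated respondents, indicators

# Formative model: conceptually distinct indicators, simulated here as
# mutually uncorrelated variables that jointly constitute the construct.
X = rng.normal(0.0, 1.0, (k, n))
e = rng.normal(0.0, 0.1, n)            # a single error term for the factor

F = X.sum(axis=0) + e                  # F = sum_i Xi + e, with unit weights

# The construct exists by fiat even though the indicators share almost no
# common variance: unidimensionality is neither assumed nor needed.
R = np.corrcoef(X)
max_pairwise_r = np.abs(R[~np.eye(k, dtype=bool)]).max()
print(round(max_pairwise_r, 3))
```

Fitting a one-factor reflective model to data generated this way is exactly the mismatch the simulation studies cited above warn about.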
There is, however, an alternative psychometric model that might produce unidimensionality despite conceptually distinct measures – a Guttman scale [or a stochastic variant, the Mokken scale; see Ref. ]. As suggested by Figure 1C, the variables in a Guttman scale have a cumulative structure. An example might be a diagnosis of AIDS; anyone who has the disease AIDS is infected with HIV, and anyone who is infected with HIV was exposed to HIV at some earlier date when they were still HIV-negative. Thus if we determine that someone is HIV-positive, we can conclude that they were exposed to HIV, but we cannot conclude that they have AIDS or will necessarily have AIDS in the future.

Like formative models, Guttman scales can emerge by fiat: with rare exceptions, we decree that those with a Ph.D. must have a Bachelor's degree, and that those with a Bachelor's degree must have completed high school. Alternatively, Guttman-scaled phenomena can emerge through a chain of causal processes that occur in a consistent order. If the DSM criteria formed a clear Guttman scale, this might provide a tidy resolution to the puzzle noted at the outset – the fact that psychiatrists argue over the distinct features of DSM criteria and yet claim that the DSM provides a unidimensional diagnosis of substance use disorder. But the empirical literature is not encouraging. I have only been able to locate two studies that test whether the DSM substance-use criteria form a Guttman scale. Kosten et al. computed Guttman scale scores using DSM-III-R criteria for each of seven substance classes for 83 psychiatric patients. Carroll et al. followed the same procedures using DSM-IV criteria for six substance classes for 521 people drawn from a variety of different clinical and general population sources. Across four substance classes, the Guttman reproducibility coefficients averaged 0.89 for the DSM-III-R study and 0.80 for the DSM-IV study. Common benchmarks for this coefficient are 0.85 or 0.90; diagnoses met the lower standard in both studies for alcohol, cocaine, and the opiates, but not for sedatives, stimulants, or marijuana. More troublingly, if we limit the focus to four criteria that are roughly the same in both versions of the DSM – withdrawal, tolerance, “giving up activities,” and “use despite problems” – their relative rankings within a given substance category are inconsistent across the two studies, with correlations ranging from −0.57 to 0.11. Granted, some differences are expected due to differences in year and sample, but it is difficult to see anything like a coherent Guttman measurement model either within or across substance categories.
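The reproducibility coefficient mentioned above can be computed from a binary person-by-item matrix. The sketch below (illustrative Python; the error-counting convention shown, scoring deviations from the ideal cumulative pattern, is one common variant and may differ in detail from the procedures used by Kosten et al. or Carroll et al.):

```python
import numpy as np

def guttman_reproducibility(X):
    """Coefficient of reproducibility for a binary item matrix X
    (rows = respondents, cols = items): CR = 1 - errors / responses.
    Errors are deviations from the ideal cumulative pattern in which a
    respondent with total score s endorses the s most prevalent items."""
    X = np.asarray(X, dtype=int)
    n, k = X.shape
    order = np.argsort(-X.sum(axis=0))             # items, most to least prevalent
    Xs = X[:, order]
    scores = Xs.sum(axis=1)
    ideal = (np.arange(k) < scores[:, None]).astype(int)
    errors = int((Xs != ideal).sum())
    return 1.0 - errors / (n * k)

# A perfectly cumulative ("staircase") response pattern reproduces exactly:
perfect = np.array([[1, 1, 1],
                    [1, 1, 0],
                    [1, 0, 0],
                    [0, 0, 0]])
print(guttman_reproducibility(perfect))   # 1.0
```

Against the 0.85 or 0.90 benchmarks cited above, a matrix of real diagnostic criteria would be scored the same way, with the item ordering estimated from the observed prevalences.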
Another source of evidence comes from comparisons of the prevalence of each criterion, by substance, in different studies. If the criteria come close to forming a Guttman scale, then different studies should find a similar ordering, with the prevalence of some criteria being consistently higher than that of others. I compared prevalence estimates from three samples. The average correlations of the criterion ranks across studies were only 0.54, 0.32, and 0.25 for cannabis, opiates, and cocaine, respectively. One reason why items have Guttman scale properties is that they form a simple causal chain, but the evidence against a Guttman scale interpretation, reviewed above, also casts doubt on any simple causal model. Figure 1E shows a causal model that is more complex than a simple causal chain. Even a cursory examination of the DSM substance-use criteria suggests that they might have this kind of complex internal causal structure. First, many of the criteria require the clinician or the patient to make causal attributions: e.g., “Recurrent use resulting in a failure to fulfill important role obligations” or “continued use despite recurrent social problems associated with use.” Second, many of the criteria are likely to have causal linkages to each other. For example, tolerance implies that the user will seek larger doses, which might well increase the time taken to obtain the drug. Withdrawal symptoms and craving have long been implicated in income-generating crime, needle sharing, prostitution, and other forms of physically hazardous and socially dysfunctional behavior [e.g., ]. Third, in addition to any psychopharmacological mechanisms, most of the criteria are causally influenced by the social, cultural, economic, and legal context in which substance use takes place [see Ref. ].
A striking illustration comes from clinical trials of heroin maintenance in Europe; when registered addicts are allowed easy access to high-quality heroin, their criminality drops, their health improves, and they are increasingly likely to hold a job.

That value is the negative slope of the best-fit line.

The estimated concentration at this time would be the best estimate of the concentration if perfect instantaneous mixing had occurred. This estimated peak concentration was then multiplied by the room volume to provide an estimate of the source strength. Decay rates were determined by regressing the logarithms of the background-corrected concentrations over time. These rates reflect the effects of all particle dynamics on a given day, which include the deposition rates on room surfaces, the air exchange rate, condensation/evaporation of volatile substances, and coagulation. These mechanisms are also affected by environmental conditions, including air flow rates in the room, temperature, and humidity. Removal rates were determined by subtracting the air exchange rates from the measured decay rates. Decays were followed for multiple hours. Over an 8-day period, a study of 24-h exposure was carried out using the PurpleAir monitors. Each day, a single puff of marijuana fluid was exhaled into this 30 m3 room following Protocol II. The door was closed during the 6 h following the puff, and open after that. Air exchange rates were measured by releasing a 10% mixture of CO into the room just prior to vaping and calculating the decline of the background- and temperature-corrected CO concentrations. Prior to the study and at intervals during it, two of the main monitors were zeroed, their impactors were cleaned and regreased, and their flow rates checked. Since at least 2 monitors of each type were collocated, the agreement within each type could be determined, as well as the relative bias and precision.
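The decay-rate and removal-rate calculations described above amount to a log-linear regression. A minimal sketch follows (illustrative Python with synthetic concentrations; the 0.52 h-1 decay and 0.128 h-1 air exchange rate are only chosen to echo values reported elsewhere in the text):

```python
import numpy as np

# Hypothetical background-corrected PM2.5 readings (ug/m3) at 0.5-h
# intervals after a puff, decaying exponentially in a well-mixed room.
t = np.arange(0, 5.5, 0.5)                 # hours since the puff
true_decay = 0.52                          # h^-1, illustrative value
conc = 220.0 * np.exp(-true_decay * t)

# Decay rate = negative slope of the regression of ln(concentration) on time.
slope, intercept = np.polyfit(t, np.log(conc), 1)
decay_rate = -slope

# Removal rate = decay rate minus the measured air exchange rate, leaving
# deposition plus evaporation/condensation/coagulation effects.
air_exchange = 0.128                       # h^-1, e.g., from tracer-gas decay
removal_rate = decay_rate - air_exchange
print(decay_rate, removal_rate)
```

With real monitor data the log-concentrations are noisy rather than exactly linear, and the regression R2 indicates how well a single first-order decay describes the period.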

For the two PurpleAir monitors at each location, there were two independent lasers within each monitor, so each monitor could be checked for internal agreement, and they could also be checked against each other. As part of that grant, the Stanford Institutional Review Board gave approval to the authors to carry out human experimentation. Since no human subjects were recruited for the experiments presented in this paper, telephone contact was made with a member of the IRB to obtain his opinion on whether IRB coverage of the authors was needed. His advice was that IRB review is not required if the researchers doing the study are the only human subjects. In addition, the research is not medical, since its focus is on evaluating measurement methods and applying mathematical approaches to a class of indoor air pollutants, not on health impacts for humans. Finally, the emissions of every experiment were produced by a subset of the authors, who were experienced in inhaling both nicotine and marijuana smoke, and no persons were present in the room during the air pollutant decay periods. No other individuals participated in the smoking or vaping activities, nor were any persons other than the authors exposed to the aerosols produced. The source strengths for the high-heat Protocol II are about 3 times those for the low-heat Protocol I. These values of 3–9 mg/puff may be compared to measured values on the order of 1.4 mg/puff for a tobacco cigarette. The 3 instruments show fairly similar means, medians, and ranges, with no instrument significantly different from any other. The low-cost PurpleAir instrument also shows an equally good coefficient of variation as the two higher-cost instruments. Table 1 suggests that both the research-grade and low-cost optical air monitors can reasonably be used to determine the source strengths of marijuana vaping when suitable calibration factors are applied to each monitor.
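Collocation agreement of the kind described above can be summarized with a relative bias and a precision statistic. A minimal sketch (illustrative Python; the readings are hypothetical, and the paired-difference convention shown is one of several in common use):

```python
import numpy as np

# Hypothetical simultaneous PM2.5 readings (ug/m3) from two collocated
# monitors of the same type; names and values are illustrative only.
a = np.array([60.1, 55.3, 50.9, 47.2, 43.6, 40.2])
b = np.array([62.0, 56.9, 52.3, 48.1, 44.9, 41.5])

overall_mean = np.concatenate([a, b]).mean()

# Relative bias: mean paired difference as a fraction of the overall level.
bias = (b - a).mean() / overall_mean

# Precision: spread of the paired differences relative to the overall level
# (one common collocation summary among several).
precision = (b - a).std(ddof=1) / overall_mean
print(round(bias, 4), round(precision, 4))
```

A small bias with a small precision statistic indicates that a single calibration factor can reasonably be applied to both units of a given monitor type.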

The calculated source strengths for the PurpleAir and SidePak instruments agreed well, with a slope of nearly 1 and an R2 value of 99%. Decay rates were measured for all experiments. The measured decay rates for the SidePak and PurpleAir monitors include the air exchange rate a and the deposition rate k, as well as other possible losses or gains due to evaporation, condensation, and coagulation: decay rate = a + k + other. If we subtract the observed air exchange rate from the observed decay rate, we are left with a term we call the “removal rate”, which is the sum of the deposition rate k and all other gain/loss mechanisms. The measured “decay rates” for the Piezobalance are actually the rates of mass accumulation on the crystal, and are affected by evaporation from the crystal as well as losses of the airborne fraction. The SidePak and PurpleAir decay curves maintained a constant slope over the entire time following a single puff of the heated marijuana liquid—a time that extended from 1 to 8 h. The R2 values for these regressions were very high, averaging 98%. The “decay rates” for the Piezobalance, however, were typically constant only briefly after each puff. In one pair of experiments, the experimenter took a single puff from the vape pen with an 8:1 CBD/THC ratio at around 12:20 p.m. and then stepped out of the room. For the next 5 h, the aerosol was allowed to decay undisturbed. At around 5:40 p.m., the experiment was repeated. As can be seen, following an initial “false peak” due to unmixed air conditions, the SidePak decay rate settles down to a single value of about 0.52 h-1 for the 5 h. The line can be extended backward in time to the time of the puff. The intersection of that line and the time of the puff is the “true peak” that would have occurred under instantaneous perfect mixing. From the graph, the peak was about exp(5.40), or 221 μg/m3. In the second experiment, the decay rates were virtually identical. The “true peaks” were also similar, exp(5.40) and exp(5.50), or 221 and 245 μg/m3.
The Piezobalance also shows similar peaks for the two experiments, and similar decay rates following each puff. However, after the first hour, the decay rates increase and eventually the “concentration” goes to zero. Therefore, for the Piezobalance throughout all experiments, the decay rates were only calculated for the first 20 min following a puff.
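The decay-rate arithmetic described above (decay rate = a + k + other; removal rate = decay rate minus the air exchange rate; "true peak" from back-extrapolating the log-linear fit to the puff time) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the function name and the synthetic data are hypothetical, and the fit should be applied only to the well-mixed portion of the curve (after the initial "false peak").

```python
import numpy as np

def decay_fit(t_hours, conc, air_exchange_rate):
    """Fit ln(C) = ln(C0) - r*t to the well-mixed portion of a decay curve.

    Returns (decay_rate, removal_rate, true_peak):
      decay_rate   r = a + k + other                      [h^-1]
      removal_rate r - a (deposition plus other losses)   [h^-1]
      true_peak    C0, the concentration back-extrapolated to the puff time.
    """
    t = np.asarray(t_hours, dtype=float)
    y = np.log(np.asarray(conc, dtype=float))
    slope, intercept = np.polyfit(t, y, 1)   # least-squares line through ln(C)
    decay_rate = -slope
    removal_rate = decay_rate - air_exchange_rate
    true_peak = np.exp(intercept)            # C at t = 0 (time of the puff)
    return decay_rate, removal_rate, true_peak

# Synthetic example using figures quoted in the text: a 0.52 h^-1 decay,
# a 221 ug/m3 true peak, and a 0.128 h^-1 air exchange rate.
t = [0.5, 1.0, 2.0, 3.0, 4.0, 5.0]
conc = [221.0 * np.exp(-0.52 * x) for x in t]
r, removal, peak = decay_fit(t, conc, 0.128)
```

With noiseless synthetic data the fit recovers the decay rate and peak exactly; with real monitor data the R2 of the log-linear regression (98% on average in the study) indicates how well a single exponential describes the decay.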

This behavior is expected for volatile particles collected on the crystal undergoing evaporation, and can result in overestimating the actual aerosol decay rates. Decay rates are strongly affected by air movement. The use of a table fan approximately doubled the decay rates for both the SidePak and the Piezobalance. The use of two fans led to a further increase that was not, however, statistically significant. For building models of indoor exposure, it is important to measure a and k separately. The mean air exchange rate was 0.128 h−1. Data from the 8-day exposure experiment were examined to identify the times of elevated concentrations. Concentrations were considered “elevated” if they were associated with the time of vaping and were higher than typical concentrations during non-vaping periods. The elevated periods had a mean concentration of 62.9 μg/m3 and lasted about 9 h each day. The background concentration for PM2.5 was 4.5 μg/m3. The 24-h average PM2.5 concentration was 26.5 μg/m3 and may be compared to the 24-h average outdoor standard of 35 μg/m3. The four main monitors used in this study had complementary strengths and weaknesses. Both optical monitors were able to count particles and estimate particle volumes. However, the resulting PM mass could not be determined from these two monitors without the use of a calibration factor. The mass could be determined from both the pump-filter and Piezobalance results. However, evaporation may be an important process and could only be determined from the Piezobalance. On the other hand, because evaporation from the Piezobalance presumably started soon after aerosol collection, only the early measurements by the Piezobalance could be used to estimate mass. In terms of 24-h average indoor concentrations, only the PurpleAir monitors could be operated continuously for so long, with the other monitors needing considerable downtime for maintenance.
Two results from this study are required for building indoor air quality and exposure models: source strengths and removal rates. The source strengths shown by the SidePak and PurpleAir monitors ranged from 3 to 8.8 mg/puff. This is roughly 2–6 times that of tobacco cigarettes on a per-puff basis. Longer heating periods and correspondingly higher temperatures produced significantly higher source strengths, by about a factor of 3 going from 6 to 15 s of heating time. The example from this study showed that concentrations from a single puff could be elevated for about 9 h. Wu et al. found similar results comparing tobacco and marijuana smokers, with the marijuana smokers inhaling about 3 times the amount of tar and accumulating 30% more tar in the respiratory tract. Ott et al. used SidePaks in 60 experiments and found emission rates of 3.4–7.8 mg/puff for four sources of marijuana consumption compared to 2.2 mg/puff for a tobacco cigarette. Combination antiretroviral therapy has allowed people living with HIV/AIDS to maintain a high quality of life and achieve life expectancies near those of people without HIV. The optimal benefits of ART, however, cannot be attained unless persons living with HIV/AIDS are able to adhere to ART as prescribed. Non-adherence to ART is associated with poorer HIV suppression, decreased CD4 cell count, and an increased risk for antiretroviral drug resistance.

In addition to their impact on individual disease progression and health status, these consequences of ART non-adherence can also increase the likelihood of HIV transmission. For these reasons, accurate treatment monitoring with the goal of optimizing ART adherence is an issue of major relevance to the clinical care of PLWHA. Alcohol use is common among PLWHA, with an estimated prevalence of current heavy drinking almost twice that of the general population. A recent national cohort study found that nearly half of PLWHA have a lifetime history of an alcohol use disorder. Excessive alcohol exposure in the context of HIV disease has been linked to higher viral replication and accelerated disease progression. Although the mechanisms underlying the impact of alcohol use on HIV disease remain under investigation, ART adherence is likely an important contributory factor. For example, one study examining the relationship between alcohol use and HIV disease progression found no direct association between alcohol consumption and CD4 cell count in participants receiving ART when controlling for ART adherence. Another recent study also failed to show a direct relationship between alcohol use and viral load detectability when controlling for other factors. The most robust evidence of the association between alcohol use and ART non-adherence in the literature finds that those who report any alcohol use are significantly more likely to be non-adherent compared to those who completely abstain from drinking, regardless of the amount of alcohol used. Although there is research examining the influence of level of alcohol use on medication adherence, results are inconsistent. Some studies have found significant associations between level of alcohol use and medication adherence, while others have found no difference in medication adherence between those with different levels of alcohol use. These studies have also measured level of alcohol use in many different ways.
Some studies used continuous variables such as frequency of alcohol drinking and quantity of drinks, whereas others used categorical variables indicating the presence of an alcohol use disorder, binge drinking, or risky drinking, which is often defined by a certain quantity of drinks per day or per week, though this quantity is also inconsistent across studies. A meta-analysis, attempting to shed light on these inconsistencies among PLWHA, found that individuals who met either criteria for a probable current AUD diagnosis or the National Institute on Alcohol Abuse and Alcoholism criteria for at-risk drinking were about half as likely to be adherent to medications as those who did not. A recent systematic review also consistently found that AUD diagnosis was associated with decreased ART adherence across studies; however, AUD diagnostic criteria differed across studies. These findings still do not clearly distinguish different levels of alcohol use in association with ART adherence among those who use alcohol. The current study examined the association between at-risk alcohol use and ART adherence among PLWHA who reported drinking at least once in the last 30 days. By excluding non-drinkers, this study attempted to elucidate whether at-risk drinkers were more likely to be ART non-adherent compared to those who drank less.
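As a concrete illustration of one of the categorical definitions discussed above, the NIAAA at-risk drinking thresholds (commonly stated as more than 14 drinks per week or more than 4 drinks on any day for men, and more than 7 per week or more than 3 on any day for women) can be coded directly. This is a sketch of one common operationalization only; the function name is invented, and the studies reviewed here defined risky drinking in differing ways.

```python
def at_risk_drinker(sex, drinks_per_week, max_drinks_per_day):
    """Classify at-risk drinking using commonly cited NIAAA thresholds.

    Men:   more than 14 drinks/week OR more than 4 drinks on any single day.
    Women: more than 7 drinks/week  OR more than 3 drinks on any single day.
    Illustrative only; studies vary in both thresholds and time windows.
    """
    if sex == "male":
        return drinks_per_week > 14 or max_drinks_per_day > 4
    return drinks_per_week > 7 or max_drinks_per_day > 3

# A man averaging 10 drinks/week but with a 5-drink day is flagged at-risk.
flagged = at_risk_drinker("male", drinks_per_week=10, max_drinks_per_day=5)
```

Applying one explicit rule like this to all participants is what allows a study to compare at-risk drinkers against lighter drinkers, rather than simply drinkers against abstainers.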

Prior attempts to use field olfactometry offered no advantages over the above method.

Two major case studies from SCAQMD are informative. The first case is metallic odors in the industrialized section of the City of Paramount that began in 2013. Nickel and hexavalent chromium were detected in air samples. Three businesses with metal-related operations were identified and many community meetings were held to address both odor and air toxics concerns. Under an Order for Abatement, one company was required to take measures to reduce odors in July 2017. They improved air pollution controls in their grinding room and made other improvements, and the number of odor complaints decreased. The second case is the coastal area of Seal Beach, Huntington Beach, and Long Beach that experienced “gas/sulfur/chemical” odors. To help find the elusive source, sampling was performed between March 2017 and October 2018 in partnership with local fire departments. SUMMA canisters were analyzed for the presence of volatile organic compounds, and samples collected in Tedlar™ bags were analyzed for total reduced sulfur compounds. Crude oil was the likely source of the compounds detected, and in October 2018 a violation was issued to an oil tanker that, upon inspection, had 7 out of 10 pressure release devices leaking. Portable hydrocarbon-sensing devices and gas imaging cameras were used to detect the leaks. Monitoring for other sources using a forward-looking infrared camera and further sample collection are ongoing. To address the beach communities’ health concerns from the intermittent exposures, and to put the monitoring data into context, a “frequently asked questions” document was created.

The conclusions were that the levels of hydrogen sulfide were below the 30 ppb one-hour state standard, that the levels of specific hydrocarbons were below their acute limits, and that cancer risk was not at a level of concern due to the intermittent nature of the exposures. The Bay Area Air Quality Management District, headquartered in San Francisco, sets an odor limit of 5 dilutions-to-threshold at or beyond the facility fenceline, which is applied after at least 10 complaints are received within a 90-day period. Further investigation is required if a further 5 complaints are received within the next 90 days. Interviews with staff provided recent information and insights into investigation techniques currently used. Although still found in their local regulations, the use of odor panels to evaluate samples captured in bags ended over five years ago. The primary concern was that employees, who served as the panelists, were worried about exposures to unknown compounds and experienced negative sensations. Using air monitoring results for specific odorants and comparing the concentrations to odor detection threshold concentrations was considered inconsistent with odor being a sensory nuisance. Lacking any quantifiable sensory method, air inspectors now conduct source-by-source investigations using their own sense of smell, with moderate success. An inspection is triggered when 5 or more independent complaints are received and can result in a Violation Notice. Although their “Odor Policies and Procedures” is currently being revised, BAAQMD has several experiences worth sharing. When an odor complaint is received during office hours, it is assigned within 15 minutes to an air inspector, who then has 30 minutes to contact the complainant.
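The complaint-count trigger described above (the fenceline limit applies once 10 complaints arrive within a 90-day period) reduces to checking a rolling window over complaint dates. The sketch below is an illustrative reading of that rule, not BAAQMD's actual procedure; the function name and parameters are invented.

```python
from datetime import date, timedelta

def fenceline_limit_triggered(complaint_dates, window_days=90, threshold=10):
    """Return True if at least `threshold` complaints fall within any
    rolling window of `window_days` days.

    Illustrative reading of a BAAQMD-style trigger: the 5 dilutions-to-
    threshold fenceline limit applies once 10 complaints arrive within
    90 days. Defaults are taken from the rule as described in the text.
    """
    dates = sorted(complaint_dates)
    for i, start in enumerate(dates):
        window_end = start + timedelta(days=window_days)
        count = sum(1 for d in dates[i:] if d <= window_end)
        if count >= threshold:
            return True
    return False
```

The same function with `threshold=5` could model the follow-up rule (a further 5 complaints within the next 90 days requiring further investigation), though the text does not specify how the second window is anchored.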

They then meet, experience the smell together, and walk toward the source together. No sensory descriptors or training are used. They also explore upwind. If the odor is verified, the source is contacted and, if needed, inspected to see if it is in violation of its permit. A database developed in-house manages the case load. Regarding the odor management plans that facilities have submitted, the air inspectors found substantial difficulties. Such plans are difficult to enforce, different for each site, written by a third party, and require large amounts of staff review time. Regardless, they are commonly found at WWTPs and trash transfer stations. A prior contract with Envirosuite Inc. was not continued due to the substantial requirements for meteorological data. The project tried to use advanced backtracking technology, based on real-time fine-scale meteorological modelling, to instantly plot and visualize the trajectory of an odor complaint, thereby identifying its likely cause. A current challenge is the overlapping odors found in Milpitas, which have resulted in thousands of complaints. To fingerprint and identify which sources are contributing to the ongoing odors, BAAQMD issued a request for proposals in March 2019. A community group that meets quarterly is conducting a parallel study. Another challenge is the increase in composting operations, which are often malodorous. Finally, an emerging area of concern is cannabis cultivation, particularly within and around Santa Rosa, California. To understand the operations, BAAQMD staff toured a cannabis cultivation facility and will visit again. To deem a complaint a “verified complaint,” air inspectors from the Sacramento Metropolitan Air Quality Management District rely on the definition of nuisance being about the complainant’s perception and try to verify that. Upon meeting the complainant, the air inspector logs their own description of the odor and sees if it matches that of the complainant. A standardized smell vocabulary would be useful.

A verified odor nuisance can lead to a Notice of Violation, which in turn can lead to an Abatement Order that shuts down the facility. The owner must then sue to reopen the facility. SMAQMD is considering trying a field olfactometer like neighboring air districts have. Previous tests conducted at the source of an odor need to be translated to fenceline concentrations experienced by the neighborhood. A current challenge is a rendering facility; California law, the Right-to-Farm Act, exempts such facilities from nuisance law. Another challenge is a sweet potato drying operation that has rotting odors. Also, the Zero Waste initiatives are sending more scraps to compost facilities, which are exempt from air regulations, so solid waste programs handle the complaints. Even pleasant aromas can become nuisance odors. A blueberry smell from a food factory was intense enough to trigger migraines. Masking has been done using cherry “perfume” at a solid waste landfill. Complaints are logged into a database that contains over 13,000 records since 1996. The data were transitioned from MS Access to MS SharePoint, which sends automated e-mails and allows for logging from the field. Air inspectors from Placer County Air Quality Management District respond to all odor complainants. They drive to the complainant and try to verify the odor, even if they need to wait for an intermittent odor. If the odor is detected, they then try to identify the source. No cases have ended up in court; all were resolved by mutual settlement. Past experience with a field olfactometer was not helpful due to the transient nature of most odors. The major source of odor complaints is a landfill. Dispersion modelling was not helpful, and samples were sent for odor intensity testing by a panel. A spray coating did not stop the odor, nor did a cherry/citrus mist at the perimeter. The latter actually magnified the odor.
Another nearby source, bioenergy wood piles, was blamed for the odors by the landfill. The San Joaquin Valley Air Quality Management District covers eight counties. Once a complaint is received, the air inspector visits the complainant to determine the type of odor, the time of day, and the GPS location. By driving around, the source is identified. If the source is a permitted facility, the air inspector goes on-site to review the permit and inquire about any upsets or disruptions. If not permitted, the air inspector talks with the source and offers compliance assistance and education. The air inspector always circles back to the complainant to communicate the findings. A senior air inspector often accompanies a new inspector to provide mentoring. One source of odor complaints was residents who disagreed with urban planning decisions. Other complaints were about a WWTP that had a pond go anaerobic and a rendering plant, which is now closed. Their database could serve as a model for other air districts. It was developed in-house, which was expensive, but the functionality is impressive.

The database is on-line, smart-phone friendly, and interfaces with their app; thus, it can collect photos and videos. The system sends automated notifications to inspectors by text or e-mail. It also includes mapping features. Air inspectors find it sufficient to use their own sense of smell to verify odor complaints and try to respond to each call. Their familiarity with the local sources helps resolve most issues. Both field dilution measurements and laboratory panels did not aid investigations, nor did traditional dispersion modeling. The use of standardized odor descriptions may help inspectors, but agreeing with the complainant on the description is unnecessary. Masking odors to cover up malodors can magnify the problem. The FIDOL framework, or a variation thereof, is used by many jurisdictions. It is sometimes incorporated into guidance for complainants, too. The terms cover the main contributing factors of nuisance odors, namely frequency, intensity, duration, offensiveness, and location. FIDOL fails to capture, however, some of the more personal aspects of odor perception. The underlying mood and coping strategy of the complainant are missed, as is any connection or history with the offending source. The person’s history and susceptibility to malodor effects – such as age, sex, and health – are not included, nor is past experience with the same or different odors, which can lead to sensitization or trigger memories. These factors, which serve as confounders to odor investigations, relate to properties inherent to odor yet not present for classical air pollutants. FIDOL also neglects the number of people impacted. The main weakness may be in the variability inherent in two of the factors: intensity and offensiveness. Jurisdictions tend to use the same database software to manage odor complaints as they do for any air pollution concern. Development is typically through software vendors but may be done in-house if sufficient funds are available.
Such data are valuable for tracking trends in odor complaints. More sophisticated evaluations than the number of complaints are needed, however, to capture the actual impacts. Software vendors also support facilities so they can manage complaints in-house and track their own data. Such systems are required in certain jurisdictions, such as Colorado. In addition to time trends, a complaints database can be analyzed statistically or visually. For drinking water, six utilities were included in a study of customer complaints. The data were evaluated using several statistical methods, and the same could be done for an odor complaint database. A combination of a high frequency of complaints together with consistency of descriptors was indicative of episodic water quality problems. Another way of analyzing odor complaint data is through a “word cloud,” as was done for a chemical spill in a river. See Figure 2.3 for an example, where the size of the font indicates word frequency. The objective of this paper is to gather and review the technical approaches currently used to measure and monitor exposures to environmental odors. The technical approaches from around the world will be evaluated from a scientific standpoint as well as by applying practices from risk assessment, its framework and conventions, where appropriate. The goal is to identify best practices, identify any gaps, and suggest how such gaps could be filled. The ultimate finding would be a universal approach that can be used for any odor complaint. This requires integrating disconnected research fields, not unlike what the risk assessment of exposures to toxic chemicals has attempted. Investigations of nuisance odor complaints are the focus of this paper, rather than the predictive emission and dispersion modeling used to grant industrial facility permits. Some overlap between the methods, however, exists. Odors are complex mixtures that evoke complex responses.
There is no single parameter that completely characterizes the exposure to and impact of an odor. Unlike vision and hearing, the language of odor perception is poorly developed. Some people have a sense of smell that is orders of magnitude more sensitive than others’, and the offensiveness of a smell is highly personal and culturally based. Even the microbes living in the nasal cavity can influence a person’s sense of smell. Such variability applies equally to air inspectors as it does to the general public.
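The "word cloud" and descriptor-consistency analyses mentioned above amount to a word-frequency tally over complaint texts. A minimal sketch follows; the function name, stopword list, and sample complaints are all illustrative.

```python
import re
from collections import Counter

def descriptor_frequencies(complaints,
                           stopwords=frozenset({"the", "a", "an", "of", "and", "near"})):
    """Tally descriptor words across a set of complaint texts.

    The resulting counts can drive a word cloud (font size proportional
    to frequency) or flag episodic problems when one descriptor spikes,
    as in the drinking-water complaint study described above.
    """
    counts = Counter()
    for text in complaints:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in stopwords:
                counts[word] += 1
    return counts

# Hypothetical complaint log entries:
freqs = descriptor_frequencies([
    "rotten egg smell near the creek",
    "strong rotten sulfur odor",
    "sulfur smell again",
])
```

A spike in one descriptor ("sulfur", say) across many complainants within a short window is exactly the "high frequency plus consistent descriptors" signature that indicated episodic water quality problems in the utility study.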

The Cannabis sativa plant contains bioactive components termed cannabinoids

To rule out potential false positive results from multiple tests of different modulators, all significant modulating variables were then included in one omnibus regression model to evaluate the collective and individual residual effects of these multiple factors. We examined the effects of several potential modulating variables to determine how they independently influenced P300 amplitude and whether they contributed to the observed site differences. Each variable was added as a single additional factor to the original multi-factorial model. MMSE score [F=4.79, p<0.05] was positively associated with P300 amplitude independent of diagnosis, but had no impact on the observed site differences. Other measures of cognitive and functional status – GAF score and education – were not significant predictors of P300. Smoking, however, was found to be a robust modulator of P300 amplitude [F(1,1195)=10.34, p<0.01]. Although the interaction between smoking and diagnosis was not significant [F=2.44, p=0.12], separate within-group analyses of smokers and nonsmokers revealed a significant patient–control difference only among nonsmokers [F=29.69, p<0.00001]. Smoking differentially reduced P300 amplitude in healthy control subjects while having little effect in patients, which eliminated all diagnostic differences [F=1.09, p=0.30]. However, site differences remained robust even after controlling for smoking status. It should be noted, though, that only 70 control subjects were classified as smokers, compared to 50% of patients. The observed site differences appeared to primarily reflect racial stratification differences.

Inclusion of race as an additional predictor produced a significant race effect [F=16.29, p<0.000001], which eliminated the site effect while leaving the effects of both diagnosis [F=20.64, p<0.00001] and age [F=15.56, p<0.0001] intact. P300 amplitude was lower, overall, in the African American sample than in either the Caucasian or “Other/Mixed” racial groupings. There was a clear trend toward an interaction of race × diagnosis, but it did not reach statistical significance [F=2.85, p=0.06]. In separate analyses, significant patient–control differences were observed within each racial subgroup, although the effect size was noticeably smaller within the African American sample. The differential impact of race on the association between schizophrenia and P300 was manifested primarily as an amplitude reduction among African American controls, rather than patients. Further consideration of potential modulating variables revealed that this apparent racial difference was due, in part, to the differential impact across the racial groupings of prior substance use disorders. When the sample was restricted to subjects with no history of substance use, the interaction of race × diagnosis was not significant [F=1.56, p=0.21] and the magnitude of the patient–control difference was similar across racial categories. However, among those with a past history of substance abuse or dependence, there was a significant race × diagnosis interaction [F=6.77, p<0.001]. As illustrated, the P300 responses of otherwise healthy African American controls were indistinguishable from those of African American schizophrenia patients. Comparable attenuating effects of past substance use were not observed for the Caucasian or Other/Mixed control samples. The only significant interaction was diagnosis × substance use.
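The covariate logic described above (adding a stratification variable to the model eliminates an apparent site effect) can be illustrated with synthetic data. Nothing here reproduces the study's actual model or data; the sample size, effect size, and 0/1 coding are invented solely to show how a predictor that merely tracks a confounder loses its effect once the confounder enters the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
# Hypothetical setup: "race" is the true modulator of amplitude, and "site"
# membership tracks race imperfectly (85% overlap), mimicking the racial
# stratification of samples across sites described in the text.
race = rng.integers(0, 2, n).astype(float)
site = np.where(rng.random(n) < 0.15, 1.0 - race, race)
p300 = 5.0 - 1.5 * race + rng.normal(0.0, 0.5, n)  # amplitude, arbitrary units

def ols(y, *predictors):
    """Ordinary least squares; returns [intercept, coef1, coef2, ...]."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_site_only = ols(p300, site)    # site alone appears to predict amplitude
b_joint = ols(p300, site, race)  # adding race absorbs the site effect
```

In the joint fit the site coefficient collapses toward zero while the race coefficient recovers the true effect, which is the pattern the omnibus analysis above reports for the real data.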

The 3-way interaction of diagnosis × African American race × substance use just missed the significance threshold. The principal aim of this analysis was to determine the feasibility of acquiring comparable P300 data across multiple testing sites, both with and without specific ERP expertise, and to examine the clinical and socio-demographic factors that modulated the measurements across sites. Comparability across sites is a necessary precondition of the measure’s utility as an endophenotypic biomarker. To that end, the results are both very encouraging and somewhat cautionary. Across sites, 92% of subjects yielded technically acceptable EEG recordings with identifiable auditory evoked potential waveforms. Additional data loss, beyond this, resulted from 1) our failure to monitor subjects’ behavioral responses online in real time, and 2) our conservative strategy of rejecting any data lacking a reliable, visibly identifiable P300 component. Many studies use an automated algorithm to measure P300 amplitude regardless of waveform appearance. Such an approach would have increased our final data yield from 74% to 85%. Given this overall yield, the fact that data quality did not differ between sites with or without prior electrophysiology experience, and the fact that the schizophrenia P300 deficit was replicated at each site, this study clearly demonstrates the feasibility of implementing large-scale ERP studies across diverse settings. The overall case-control effect size that we observed, 0.62, was somewhat lower than that reported in meta-analyses. Since the patient sample was older than the control sample and age significantly affected P300, the patient–control difference was attenuated somewhat by the inclusion of age as a covariate.

The effect size was almost certainly also lowered by our conservative data strategy, which likely excluded a number of subjects – primarily patients – with negligible but real P300 responses. This moderately large effect is, nevertheless, well within the expected distribution of published studies. Although we observed a significant difference across test sites, this did not reflect differences in data quality, methodology, or experimental rigor. Rather, it reflected differences in the stratification of the samples across sites, as this relates to clinical and socio-demographic confounds or modifiers. In patients, site differences were entirely explained by differences in the level of positive symptomatology. Although the P300 deficit is traditionally thought of as being immune to changes in patients’ clinical status, it should probably be considered as more of a relatively stable deficit. It clearly does not normalize with treatment, even when symptoms dramatically improve. However, it still exhibits modulation over time in association with positive symptoms. Indeed, it is this ability to reflect increasing positive symptomatology that underlies the emerging utility of P300 as a predictive biomarker for imminent prodromal conversion to psychosis. Except for MMSE and UPSA-B, global indices of cognitive ability and real-world functional capacity, no other clinical measures were associated with P300, indicating that the association with positive symptoms is relatively specific. Since these patients were all clinically stable outpatients on stable medication regimens, differences in positive symptomatology presumably reflected relatively stable trait-like differences on this dimension of illness severity. P300 may therefore be an endophenotype that is especially informative regarding the genetic basis of positive symptoms.
The associations with MMSE and UPSA-B highlight the utility of the P300 as a sensitive physiological index of differences in brain function, even within a relatively homogeneous clinical sample. The magnitude of the P300 response has long been considered a broad indicator of “cognitive fitness” and, more specifically, of the ability to appropriately process and respond to task-salient environmental inputs – i.e., to correctly detect a signal within noise. It is thought to require intact attentional and working memory capacities, and to reflect complex neural processes of temporal and spatial integration across multiple brain regions. It is not surprising, therefore, that the P300 would correlate with other measures of cognitive and functional capacity. A similar association between P300 amplitude and MMSE has been reported previously in chronic Alzheimer’s disease patients and, acutely, in uremic patients undergoing dialysis, where the two measures showed a correlated improvement, as well, following treatment. There have been no prior studies reporting a relationship between P300 and specific measures of functional capacity, including UPSA-B, either in schizophrenia patients or other clinical samples. However, this association is entirely consistent with the relationship between P300 and cognition.

Prior studies examining the relationship between neurocognitive and functional deficits have routinely found that cognitive ability, specifically working memory, is the strongest predictor of schizophrenia patients’ real-world functional capacity. Indeed, in our own data, we observed a similarly robust correlation between MMSE and UPSA-B. These associations support the utility of P300 amplitude as a potential biomarker for predictive risk and treatment studies. However, they also emphasize the relatively nonspecific nature of the measure. This was evident, as well, in the control sample data. In these otherwise healthy subjects, P300 amplitude was affected by smoking, race and, as one mediator of the race effect, prior history of substance abuse or dependence. Previous studies have shown that nicotine reduces P300, yet – despite the well-known propensity of schizophrenia patients to smoke – there has been virtually no consideration of the effect of smoking on the auditory P300 in patients. We observed no parallel effect of nicotine in patients, presumably because their ERPs were already suppressed. This is consistent with a recent small study of healthy subjects administered intravenous ketamine. Ketamine induced schizophrenia-like symptoms and attenuated the auditory target P300 response, but this was unaffected by co-administration of nicotine vs. placebo. Similarly, reduced P300 has been associated with the use of stimulants, opioids, and cannabis. Yet, again, we saw no effect of prior substance use on P300 in the schizophrenia patients. This mirrors what was recently reported in a study examining the effects of cannabis in prodromal subjects considered to be at ultra-high risk for developing psychosis. In that sample, prodromal subjects with a history of cannabis use were indistinguishable from those without. However, among the otherwise-healthy controls, those who used cannabis had reduced P300 responses that were indistinguishable from those of the prodromal sample.
The impact of substance abuse on the African American sample may reflect differences in the specific character and/or quantity of substance use within the different racial groupings, which are not captured by a simple dichotomous categorization. Similarly, the residual effects of race, independent of past substance use, could reflect the impact of other psychosocial stressors in the different racial communities. Unfortunately, we have no objective measures of either stressful life events or physiological markers of stress to test this hypothesis. The fact that modulating factors such as nicotine and substance abuse can differentially affect controls, but not patients, raises an important cautionary note about how to interpret study results, potential false negative findings, and what constitutes the best comparison sample for genetic or biomarker studies. A commonly recommended strategy is to recruit control subjects who are similar to the clinical sample on various modulating factors and co-morbid conditions. The results of this study would seem to temper that recommendation, at least for P300. They suggest that, in matching the samples, individual and group differences may be attenuated for reasons other than psychosis. Consequently, genetic associations with the endophenotype may be obscured, and the ability of the measure to predict transition to psychosis may be weakened. This is an issue that clearly requires careful consideration in future analyses. However, the broad utility of P300 as a robust marker for large multi-site studies is confirmed, along with important associations with both positive symptoms and decreased cognitive and functional capacity.
The prevalence of type 2 diabetes mellitus (DM) is increasing, and it is projected that in the USA alone, type 2 DM will increase to 48.3 million cases by 2050. In addition to defects in pancreatic b-cell function and insulin sensitivity, systemic inflammation is thought to be involved in its pathogenesis. Marijuana is the most commonly used illicit drug in the USA and is currently used by 14.4 million Americans. The major psychoactive cannabinoid (CB) is delta-9-tetrahydrocannabinol (THC), whose effect is mediated through the CB1 and CB2 sub-types of CB receptors found in the brain and lymphoid tissues. The endocannabinoids, a group of neuromodulatory lipids, also bind to these receptors. Cannabis, THC, and other CBs have been shown to have both beneficial and detrimental effects. Marijuana users have higher caloric intake while eating less nutrient-rich foods, yet have similar or slightly lower body mass index than nonusers. We hypothesised that the prevalence of DM would be reduced in marijuana users due to the presence of one or more CBs, because of their immunomodulatory and anti-inflammatory properties. We assessed the association between DM and marijuana use among adults aged 20–59 years in a national sample of the general population. Data on marijuana use were collected by self-report. Non-marijuana users included never users and those who reported ever having used marijuana but who had not used marijuana in the past month. We classified participants who reported using marijuana in the past month by frequency of use as either light or heavy current users, as previously described. The definition of marijuana for purposes of this survey includes ‘hash,’ ‘pot’ or ‘grass’ or any other references to the Cannabis plant.

The fraction of tracer unbound to plasma proteins was determined in triplicate by ultrafiltration.

The absence of substance use was determined by self-report and confirmed by urine toxicology and breathalyzer tests at screening and on the days of MR and PET imaging. Participants were asked to abstain from food, nicotine, and caffeinated beverages after midnight on the day prior to the imaging study until after completion of the scan. Blood samples were collected at the time of tracer injection, processed immediately after collection in the laboratory, which is adjacent to the scan room, and frozen at −80 °C until analyzed, as previously described. OMAR was prepared at high specific activity. The radiotracer was infused over 1 minute through the antecubital vein. The radioactivity concentration in blood from the radial artery was measured continuously using an automated system for the first 7 min after radiotracer administration, and samples were manually drawn and counted thereafter. Discrete samples were acquired at selected times and measured on a gamma counter to determine the radioactivity concentration in whole blood and plasma. Five discrete blood samples were analyzed for the fraction of unchanged OMAR and its radiometabolites using a column-switching high-performance liquid chromatography method. List-mode emission data were collected for 120 minutes after radiotracer administration using the High Resolution Research Tomograph, a dedicated brain PET scanner with spatial resolution better than 3 mm. Head motion was measured using the Polaris Vicra optical tracking system and incorporated into PET image reconstruction with all corrections.

The PET images were registered to subject-specific T1-weighted magnetic resonance images acquired on a 3 Tesla Trio imaging system. Anatomical MR images were in turn nonlinearly registered to an MR template where regions of interest were defined. Regional time-activity curves were extracted from the dynamic PET data and analyzed using the multi-linear analysis method with metabolite-corrected arterial input functions and cutoff time t* = 30 minutes. The kinetic analysis yielded regional estimates of total volume of distribution (VT), the equilibrium ratio of radioligand concentration in tissue relative to arterial plasma, which is directly proportional to CB1 receptor availability. Seventy-two participants were recruited into the study and 60 completed the protocol. Reasons for exclusion were previous medication exposure and medical reasons that would interfere with correct interpretation of the collected data. Table 1 shows demographic, trauma, and clinical characteristics of the HC, TC, and PTSD groups whose data were used for analyses. OMAR injection parameters, age, sex, education, nature of trauma histories, and body mass index did not differ among the groups; there was a greater proportion of white individuals in the HC group than in the TC and PTSD groups. The PTSD group was significantly more likely than the HC and TC groups to currently smoke cigarettes and to have a lifetime history of mood or anxiety disorder and alcohol or drug abuse, but the groups did not differ with respect to lifetime and current alcohol use and nicotine dependence. The PTSD group scored higher on the MADRS and HAM-A relative to both control groups, and on the CAPS relative to the TC group. We found that PTSD is associated with a widespread, large-magnitude elevation in OMAR VT values, which quantitatively reflects CB1 receptor availability.

Notably, this elevation was found in an amygdala-hippocampal-cortico-striatal neural circuit implicated in PTSD, as well as in brain regions outside this circuit. These results suggest greater brain-wide CB1 receptor availability in individuals with PTSD relative to control participants with and without histories of trauma exposure. Reduced peripheral anandamide levels in PTSD complemented the brain OMAR VT results, suggesting that the elevated CB1 receptor availability in PTSD may result from a combination of both receptor up-regulation and low receptor occupancy by anandamide. The lack of displacement of CB1 radioligands by agonists, which has been attributed to a large receptor reserve, suggests that increased OMAR VT values are explained for the most part by receptor up-regulation in response to low anandamide levels rather than by low receptor occupancy by anandamide. This idea is substantiated by data showing that CB1 receptor up-regulation in response to low stress-induced synaptic anandamide availability was prevented by enhanced anandamide signaling. OEA levels were higher in HC relative to TC and PTSD participants in the current study, but the groups did not differ with respect to 2-AG and PEA levels. Taken together, these data suggest that abnormal CB1 receptor-mediated anandamide signaling is implicated in the etiology of PTSD. The sex-related results of the current study accord with animal data demonstrating sex differences in CB1 receptor regulation, with stress-related up-regulation of CB1 receptors observed predominantly in female animals. We also found abnormally low cortisol levels in trauma survivors, corroborating prior work. Another key contribution of the current study is the finding that collective consideration of all three of the biomarkers examined—OMAR VT, anandamide, and cortisol—was highly accurate in classifying PTSD, with nearly 85% of PTSD cases correctly classified and overall classification accuracy approaching 90%.
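The classification figures quoted above combine sensitivity (the share of PTSD cases correctly classified) with overall accuracy across both groups. As a minimal sketch of that arithmetic, assuming hypothetical group sizes and a hypothetical confusion matrix chosen only to reproduce figures of that magnitude:

```python
# Hedged illustration of the classification metrics quoted in the text:
# "nearly 85% of PTSD cases correctly classified" corresponds to sensitivity,
# and "overall classification accuracy approaching 90%" pools both groups.
# All counts below are hypothetical, not the study's actual data.

def classification_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)          # PTSD cases correctly classified
    specificity = tn / (tn + fp)          # controls correctly classified
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# e.g., a hypothetical split of 25 PTSD participants and 35 controls
sens, spec, acc = classification_metrics(tp=21, fn=4, tn=33, fp=2)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, accuracy={acc:.2f}")
# → sensitivity=0.84, specificity=0.94, accuracy=0.90
```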

Results of this study advance the extant literature in three important ways: they contribute to extant knowledge regarding the etiology of PTSD; they identify candidate biomarkers that may be used to support clinical decision-making regarding diagnostic classification of PTSD; and they provide a promising neurobiological rationale for developing novel, evidence-based pharmacotherapies for PTSD. Our results of reduced peripheral anandamide levels together with a compensatory up-regulation of CB1 receptors in PTSD suggest lower anandamide tone in PTSD. Notably, elevated rates of cannabis abuse/dependence among individuals with PTSD have been reported. Such findings substantiate, at least in part, emerging evidence that synthetic cannabinoid receptor agonists or plant-derived cannabinoids such as marijuana may possess some benefits in individuals with PTSD by helping relieve haunting nightmares and other symptoms of PTSD. However, such data do not allow the conclusion that self-medication with cannabis, with its primary psychoactive constituent tetrahydrocannabinol, should be recommended for the treatment of PTSD, as direct activation of CB1 receptors with plant-derived cannabinoids over an extended period of time leads to down-regulation of CB1 receptors, which may in turn result in a depression-like phenotype in certain individuals and increase the risk of addiction. Another important finding in this study is the sex differences in anandamide levels and OMAR VT values in both the HC and PTSD groups. Animal data showing higher CB1 receptor levels in male relative to female animals and receptor fluctuations during the estrous cycle, together with changes in the affinity of agonist binding, highlight the importance of careful consideration of gender and menstrual cycle phase in assessments of CB1 receptor availability in imaging studies.
In addition, we believe that a conclusive interpretation of the CB1 receptor profile in males and females requires a broad and dynamic perspective rather than a single observation in a cross-sectional study with a single time point. Our results are largely in agreement with a previous study that used the CB1 PET tracer MK-9470 to investigate the effects of age and gender. That report found a greater plasma parent fraction and higher normalized brain uptake in men, which is consistent with our findings. However, because of the nearly irreversible uptake kinetics of the radiotracer and the lack of significant gender differences in the metabolite-corrected input function in the initial cohort that underwent arterial blood sampling, the MK-9470 study used brain SUV as the final outcome metric of tracer binding. We performed kinetic analysis of OMAR data using metabolite-corrected arterial input functions in all participants. This methodology provided estimates of VT, which – in contrast to SUV, which was greater in men than women – was reduced in men compared to women. Thus, our measurements are compatible with the previously reported MK-9470 data, and the discrepant interpretations appear to be accounted for by different endpoints, centered on our use of arterial input functions in kinetic analyses rather than the simplified outcome of normalized brain concentration.
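The multi-linear analysis described above fits a late-time (t > t*) linear model relating each regional time-activity curve to the metabolite-corrected arterial input function, from which VT falls out as a ratio of fitted coefficients. A minimal numpy sketch of this style of fit, run here on synthetic one-tissue-compartment data rather than any study data (all rate constants and curves below are hypothetical):

```python
import numpy as np

# Hedged sketch of a multilinear (MA1-style) fit for total volume of
# distribution V_T from a PET time-activity curve C_T and a
# metabolite-corrected plasma input C_p. For T > t*:
#   C_T(T) = b1 * int_0^T C_p + b2 * int_0^T C_T,   V_T = -b1 / b2

def ma1_vt(t, c_p, c_t, t_star=30.0):
    # cumulative integrals via the trapezoidal rule
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (c_p[1:] + c_p[:-1]) / 2)))
    int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * (c_t[1:] + c_t[:-1]) / 2)))
    mask = t > t_star
    X = np.column_stack([int_cp[mask], int_ct[mask]])
    beta, *_ = np.linalg.lstsq(X, c_t[mask], rcond=None)
    return -beta[0] / beta[1]

# Synthetic one-tissue-compartment example: dC_T/dt = K1*C_p - k2*C_T,
# so the true V_T is K1/k2 (rate constants are illustrative).
t = np.linspace(0, 120, 1201)          # minutes
c_p = 100 * np.exp(-0.08 * t)          # simplified plasma input
K1, k2 = 0.2, 0.1                      # hypothetical rate constants
c_t = np.zeros_like(t)
dt = t[1] - t[0]
for i in range(1, len(t)):
    c_t[i] = c_t[i - 1] + dt * (K1 * c_p[i - 1] - k2 * c_t[i - 1])

print(round(ma1_vt(t, c_p, c_t), 2))   # should approach the true K1/k2 = 2.0
```

This is only a sketch of the estimation principle; a real analysis would also handle decay correction, frame weighting, and metabolite correction of the input function.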

If, as our results suggest, women show higher CB1 receptor availability than men already under basal, non-stress conditions, then they may be at increased risk for PTSD when exposed to trauma. This finding may thus provide a neurobiological explanation for why women are at greater risk than men for developing PTSD following exposure to various types of trauma, even when sexual trauma—which is more common in women—is excluded. To date, drug development in PTSD has been opportunistic, building almost entirely on empirical observations with drugs approved for other indications. The data reported herein are the first of which we are aware to demonstrate the critical role of CB1 receptors and endocannabinoids in the etiology of PTSD in humans. As such, they provide a foundation upon which to develop and validate informative biomarkers of PTSD vulnerability, as well as to guide the rational development of the next generation of evidence-based treatments for PTSD. Blocking anandamide deactivation or re-uptake, both of which would increase synaptic anandamide availability, may lead to a more circumscribed and beneficial spectrum of biological responses than those produced by direct CB1 receptor activation. This is of particular interest for the development of mechanism-based novel pharmacotherapies for PTSD, as emerging data have revealed that enhanced anandamide signaling can curb the effects of chronic stress, possibly by maintaining normal amygdala function via extinction-driven reductions in fear, resulting in improved stress-reactivity in humans. Although researchers in sociology, cultural studies, and anthropology have attempted, for the last 20 years, to re-conceptualize ethnicity within post-modernist thought and debated the usefulness of such concepts as “new ethnicities,” researchers within the field of alcohol and drug use continue to collect data on ethnic groups on an annual basis using previously determined, census-formulated categories.
Researchers use these data to track the extent to which ethnic groups consume drugs and alcohol, exhibit specific alcohol and drug using practices, and develop substance use related problems. In so doing, particular ethnic minority or immigrant groups are identified as being at high risk for developing drug and alcohol problems. In order to monitor the extent to which such risk factors contribute to substance use problems, the continuing collection of data is seen as essential. However, the collection of this epidemiological data, at least within drug and alcohol research, seems to take place with little regard for either contemporary social science debates on ethnicity, or the on-going debates within social epidemiology on the usefulness of classifying people by race and ethnicity. While the conceptualization of ethnicity and race has evolved over time within the social sciences, “most scholars continue to depend on empirical results produced by scholars who have not seriously questioned racial statistics”. Consequently, much of the existing research in drug and alcohol research remains stuck in discussions about concepts long discarded in mainstream sociology or anthropology, yielding robust empirical data that is arguably based on questionable constructs. Given this background, the aim of this paper is to outline briefly how ethnicity has been operationalized historically and continues to be conceptualized in mainstream epidemiological research on ethnicity and substance use. We will then critically assess this current state of affairs, using recent theorizing within sociology, anthropology, and health studies. In the final section of the paper, we hope to build upon our “cultural critique” of the field by suggesting a more critical approach to examining ethnicity in relation to drug and alcohol consumption.
According to Kertzer & Arel , the development of the nation states in the 19th century went hand in hand with the development of national statistics gathering which was used as a way of categorizing populations and setting boundaries across pre-existing shifting identities. Nation states became more and more interested in representing their population along identity criteria, and the census then arose as the most visible means by which states could depict and even invent collective identities . In this way, previous ambiguous and context-dependent identities were, by the use of the census technology, ‘frozen’ and given political significance. “The use of identity categories in censuses was to create a particular vision of social reality. All people were assigned to a single category and hence conceptualized as sharing a common collective identity” , yet certain groups were assigned a subordinate position. In France, for example, the primary distinction was between those who were part of the nation and those who were foreigners, whereas British, American, and Australian census designers have long been interested in the country of origin of their residents.

Confirmation of such a hypothesis would have substantial public health implications.

It is particularly compelling that ADGRL3 marker rs4860437, which is a major predictor variable in the trees for SUD, is in complete LD with ADHD susceptibility markers rs6551665 and rs1947274 in Caucasians, suggesting that the phenotype underpinning SUD is under the pleiotropic effect of ADGRL3 variants. Unfortunately, rs4860437 was not included in the exome chip used to genotype the MTA sample and, therefore, could not be included in the analyses for this sample. Given the limited overlap of markers across datasets and possible stratification differences among study populations, a gene- rather than a marker-level approach has been advocated. Adopting such a perspective, our results suggest that genetic variants harbored in the ADGRL3 locus confer susceptibility to SUD in populations from disparate regions of the world. These populations are from three different countries and involve different investigators, diverse inclusion criteria, and different clinical assessments, which suggests that our results may replicate in other settings and are likely to be clinically relevant. Of particular interest is the generalization of our findings to a longitudinal study, where adding genetic information to baseline data predicted the development of SUD at later ages, as determined from information gathered over a period of more than 10 years. Additionally, our results generalized to a sample of patients with severe SUD from Kentucky who were not ascertained on the basis of ADHD diagnosis. The first genome-wide significant ADHD risk loci were published recently. Marker rs4860437 is not represented in that dataset; however, that study was not aimed at identifying loci shared between ADHD and SUD.
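The "complete LD" claim above refers to standard pairwise linkage-disequilibrium statistics computed from haplotype and allele frequencies. As an illustrative sketch (the frequencies below are hypothetical placeholders, not the actual frequencies for these markers):

```python
# Hedged sketch of pairwise LD statistics (D' and r^2) behind statements
# like "marker A is in complete LD with marker B". Frequencies are
# hypothetical illustrations only.

def ld_stats(p_ab, p_a, p_b):
    """D' and r^2 from haplotype frequency P(AB) and allele freqs P(A), P(B)."""
    d = p_ab - p_a * p_b
    if d > 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = d / d_max
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

# Complete LD in the strictest (r^2 = 1) sense requires equal allele
# frequencies and alleles that always co-occur, e.g. P(A)=P(B)=P(AB)=0.3:
d_prime, r2 = ld_stats(p_ab=0.30, p_a=0.30, p_b=0.30)
print(round(d_prime, 2), round(r2, 2))  # → 1.0 1.0
```

Note that "complete LD" is sometimes used for D' = 1 alone (no recombinant haplotypes), which is weaker than r² = 1; the text does not specify which sense is intended.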

In any case, while genome-wide association studies are a useful tool for discovering novel risk variants—as they involve a hypothesis-free interrogation of the entire genome—the lack of genetic association may be a reflection of the polygenic, multi-factorial nature of ADHD, with both common and rare variants likely contributing small effects to its etiology. In addition, an important factor may be the genetic heterogeneity of ADHD sub-types, which may have different underlying genetic mechanisms. Therefore, genome-wide significance may identify loci with larger genetic effects, while others with smaller effects remain undetected for a given population size. Variation in ADGRL3 has been implicated in ADHD in diverse populations. ADGRL3 is a member of the latrophilin subfamily of G-protein-coupled receptors and is most strongly expressed in brain regions implicated in the neurophysiological basis of ADHD. Mouse and zebrafish knockout models also support ADGRL3 implication in ADHD pathophysiology. More recently, Martinez et al. identified a brain-specific transcriptional enhancer within ADGRL3 that contains an ADHD risk haplotype associated with reduced ADGRL3 mRNA expression in the thalamus. This haplotype was associated not only with ADHD, but also with disruptive behaviors, including SUD. A member of the family of leucine-rich repeat transmembrane proteins, FLRT3, has been identified as an endogenous postsynaptic ligand for latrophilins. Interference with this interaction reduces excitatory synapse density in cultured neurons and decreases afferent input strength and dendritic spine number in dentate granule cells, which implicates ADGRL3 and FLRT3 in glutamatergic synapse development. Similarly, convergent evidence from a network analysis of a gene set significantly associated and/or linked to ADHD and SUD revealed pathways involved in axon guidance, regulation of synaptic transmission, and regulation of transmission of nerve impulses.

These data altogether suggest that ADGRL3 may be an important SUD susceptibility gene. Strong evidence from clinical and genetic association studies suggests that genetic factors play a crucial role in shaping the susceptibility to both ADHD and SUD. More strikingly, ADHD treatment has been shown to reduce the risk of SUD. Though the neurobiological basis for this association remains unclear, a variety of causal pathways from ADHD to SUD have been proposed that involve conduct problems. Clinical studies have suggested that the link between SUD and ADHD disappears after controlling for co-morbid conduct disorder (CD). In agreement with these studies, the presence of CD was a major predictor of SUD in the ARPA-based predictive models for SUD in the Paisa and Spanish cohorts. Some researchers implicate genetically mediated personality traits, such as impulsivity and lack of inhibitory control, as a link between ADHD and SUD resulting from common neurological substrates. Some investigators have proposed that patients with ADHD use addictive substances to self-medicate and that the differential response to drugs of abuse and atypical behavioral regulation in response to social cues may fuel substance use. Others suggest that the poor judgment and impulsivity associated with ADHD contribute to the development of substance dependence. Clinical variables from childhood have also been associated with SUD in patients with ADHD, such as ADHD sub-type, temper characteristics, sexual abuse, suspension from school, and a family history of ADHD. In summary, our results support a possible functional role for ADGRL3 in modulating drug-seeking behavior. Regardless of the type of abused substance, longitudinal studies generally find that the onset of ADHD precedes that of SUD, suggesting that the psychopathology of ADHD is not secondary to SUD in most patients.
Accordingly, it is reasonable to consider that timely diagnosis and treatment of ADHD with stimulant medication may reduce the occurrence and/or severity of SUD. Based on the relationship with medication response, we speculate that ADGRL3 variants may underlie a differential genetic susceptibility not only to SUD, but also to the long-term protective effects of medication treatment.

Inasmuch as ADGRL3 participates in synaptic formation and function, its involvement in SUD could be mediated by either influencing brain development or moderating drug-induced changes in synaptic strength. Molecular studies are required to elucidate the pathogenic mechanism associated with ADGRL3 dysfunction in SUD. As of 2016, 43 US states have policies regarding alcohol use during pregnancy. These include mandatory warning signs (MWS), giving pregnant women priority for substance abuse treatment (PTPREG), giving pregnant women and women with children priority for substance abuse treatment (PTPREGWC), requiring reporting for either child welfare purposes (RRCPS) or data collection and treatment purposes (RRDTx), limiting criminal prosecution (LCP), allowing civil commitment (CC), and defining drinking during pregnancy as child abuse/neglect (CACN). Most of these, with the exception of MWS, apply to both alcohol and drug use during pregnancy. Many of these policies have been in effect for decades, some for more than forty years. Policy activity on these topics continues in both state legislatures and in the courts. For example, in 2019, the Michigan legislature is considering adopting a MWS policy for alcohol and the Tennessee legislature is considering re-adopting a law criminalizing drug use during pregnancy. Other states are expanding extant policies to cover new substances; e.g., coinciding with state-level cannabis legalization, a few states have expanded MWS policies to include cannabis. State policies are being challenged in court as well; in December 2018, a legal challenge to the Pennsylvania CACN law as it related to opioid use during pregnancy resulted in the Pennsylvania Supreme Court ruling that behavior while pregnant does not constitute child abuse under state law. While policy activity on this topic continues, a recent study suggests that state legislators typically do not consider research evidence in their policy-making related to alcohol and drug use during pregnancy.
Among the many reasons for the lack of evidence in public health policy-making in general, an especially important issue related to policies regarding substance use during pregnancy is that, until recently, there has been little research examining the impact of these policies on either pregnant women or their infants. Furthermore, most of the previous research about state-level policy impacts has considered each policy in isolation. For example, a few qualitative studies have found that fear of being reported to Child Protective Services is a reason women who use alcohol and/or drugs avoid prenatal care. A previous study on MWS found that MWS may be associated with reductions in very low birthweight, although that study did not control for other policies in effect at the same time and did not account for the month and year the policies went into effect.

While not directly related to MWS, other research has found that the fear of having already irreversibly harmed her baby through substance use is a reason women avoid prenatal care and/or do not reduce or stop their use later in pregnancy. Only one study has comprehensively assessed the impacts of these policies, finding that most alcohol/pregnancy policies are not associated with alcohol use during pregnancy, and that those that are associated point in different directions. This study also found that most alcohol/pregnancy policies lead to increases in adverse birth outcomes, perhaps because some also lead to decreases in prenatal care utilization. Regarding birth outcomes, out of eight policies in effect in 2013, six were significantly associated with poorer birth outcomes and/or less prenatal care, and two were not associated with any outcomes. The most consistent effects were found for pregnant women living in states with MWS and CACN policies, which both led to higher odds of low birthweight (LBW), preterm birth (PTB), and no or late prenatal care, and lower odds of normal APGAR scores. For example, living in a state with MWS was related to 7% higher odds of LBW, 4% higher odds of PTB, and 18% lower odds of any prenatal care compared to living in a state without MWS. Living in a state with CACN was related to 6% higher odds of LBW, 9% higher odds of PTB, and 13% lower odds of any prenatal care compared to living in a state without CACN. Together the results suggest that alcohol/pregnancy policies may scare women who drink during pregnancy such that they avoid prenatal care, which may contribute to worse birth outcomes, an explanation consistent with previous qualitative research. Although the magnitudes of the point estimates in this study were small, with statistically significant odds ratios related to LBW and PTB ranging from 1.05–1.11, they are likely still meaningful in a large population.
Still, the question remains as to what these findings mean from a public health perspective. While harms related to substance use during pregnancy come from the use itself, it appears that harms also come from policies adopted in response to alcohol and drug use during pregnancy. To assess whether the harms from the policies are significant from a public health perspective and not just statistically significant, it is important to translate odds ratios into units meaningful to policymakers–specifically into numbers of babies affected and into costs. Thus, we extend these earlier findings here by adding additional years of data and estimating the excess numbers of LBW and PTB births under each policy for 2015 and their associated additional costs in the first year of life. Vital Statistics data from 1972–2004 are publicly available for download. Limited use and restricted Vital Statistics data from 2005–2015 are available from the CDC’s National Center for Health Statistics. We obtained limited use and restricted use datasets from the CDC for the years 2005–2015. Those datasets no longer include information such as exact dates and are thus anonymized. This research was considered Not Human Subjects Research by the University of California, San Francisco and Public Health Institute institutional review boards. Birth certificate data were obtained from the Vital Statistics System for 155,446,714 singleton live births between 1972–2015; analyses were restricted to singleton births because multiples are known to be at higher risk of LBW and PTB, and to follow methodological criteria from previously published results. These data were combined with alcohol/pregnancy policy data obtained from the National Institute on Alcohol Abuse and Alcoholism’s Alcohol Policy Information System and original legal research, along with other state-level control variables.
Through 2015, eight policies targeting alcohol use among pregnant women were in effect in at least one US state: MWS, PTPREG, PTPREGWC, RRCPS, RRDTx, LCP, CC, and CACN. Hospital cost estimates for the additional first-year-of-life costs due to LBW or PTB come from two primary sources. First, the Healthcare Cost and Utilization Project used hospital discharge data to show that costs for LBW/PTB admissions totaled $5.8 billion in one year, with costs for LBW and very LBW births averaging $20,600 and $52,300 respectively. Second, a study of private health insurance claims data calculated first-year expenditures for PTB infants in 2013.
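The translation from a policy odds ratio to excess affected births and first-year costs is straightforward arithmetic: convert the odds ratio to an absolute risk difference at a given baseline risk, multiply by the number of exposed births, then by the average per-birth cost. A sketch with assumed inputs (the odds ratio, baseline risk, and birth count below are placeholders; only the $20,600 average LBW cost is taken from the HCUP figure above):

```python
# Hedged sketch: converting a policy odds ratio (OR) into excess
# low-birthweight (LBW) births and first-year hospital costs.
# The OR, baseline risk, and exposed-birth count are illustrative
# assumptions, not estimates from this study.

def excess_cases(odds_ratio, baseline_risk, n_exposed):
    """OR -> risk under the policy -> absolute risk difference -> excess cases."""
    odds0 = baseline_risk / (1 - baseline_risk)
    risk1 = (odds_ratio * odds0) / (1 + odds_ratio * odds0)
    return n_exposed * (risk1 - baseline_risk)

# e.g., a hypothetical OR = 1.07 for LBW under MWS, 8% baseline LBW risk,
# and 1,000,000 singleton births in MWS states:
extra = excess_cases(odds_ratio=1.07, baseline_risk=0.08, n_exposed=1_000_000)
cost = extra * 20_600  # average first-year cost per LBW birth (HCUP figure)
print(round(extra), round(cost))  # ~5,123 excess LBW births, ~$105.5M
```

Note that the OR is applied to the odds rather than the risk directly; at an 8% baseline the difference is small but not negligible at population scale.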

Detailed clinical and demographic information on this sample has been published elsewhere.

State data from Alaska indicate that the proportion of people who have quit smoking among those who have ever smoked is 41% for Alaska Native adults, compared to 62% for Alaskan adults of other races/ethnicities. This means that in the Alaska Native community, there are more current than former smokers. Behavioral interventions that are culturally relevant for specific populations and individualized pharmacotherapy approaches are needed. As an example, with funding from the National Heart, Lung, and Blood Institute, our research is testing the efficacy of internet-assisted tobacco cessation counseling with Alaska Native men and women in the remote region of Norton Sound. The treatment includes combination NRT, and we are evaluating the nicotine metabolism ratio in predicting treatment outcome. To promote cessation in groups particularly vulnerable to tobacco use, emerging research has supported the value of targeted communication and regulatory policies such as reducing nicotine levels in cigarettes, discussed next. In 1994, Benowitz and Henningfield proposed the idea of federal regulation of the nicotine content of cigarettes to reduce levels over time, resulting in lower intake of nicotine and a lower level of nicotine dependence. When nicotine levels get very low, cigarettes would be much less addictive. Now, 25 years later, the concept of regulating combustible tobacco to very low levels of nicotine content is being seriously considered. Very low nicotine content cigarettes are engineered to have reduced yields of nicotine in the tobacco contained in the cigarette rod.

These cigarettes deliver much lower levels of nicotine than earlier cigarettes that were marketed as “light” or “ultralight” but which in practice allowed smokers to obtain levels of nicotine similar to regular “full-flavor” cigarettes through compensation behaviors, such as blocking ventilation holes or inhaling more deeply. Reducing the nicotine content of cigarettes to approximately 0.5 mg per cigarette is believed to render cigarettes minimally addictive and lead to lower levels of consumption, making it easier for smokers to quit. Randomized trials examining the effects of VLNCs have shown reductions in smoking and dependence and increases in quit attempts for VLNCs in comparison with standard nicotine cigarettes. A 6-week trial found decreases in nicotine exposure and dependence on nicotine for VLNCs, decreases in craving during abstinence from smoking, and decreases in the number of cigarettes smoked, without significantly increasing levels of expired carbon monoxide or total puff volume, which suggests minimal compensation behavior. In a randomized, parallel-arm, semi-blinded study of adult cigarette smokers, participants receiving 0.05 mg/g cigarettes showed greater relief of withdrawal from usual-brand cigarettes than the nicotine lozenge, significantly higher abstinence at the 6-week follow-up than the 0.3 mg/g cigarette, and a similar rate of cessation as the nicotine lozenge. At 12-month follow-up, however, findings were not sustained. In clinical trials, VLNCs generally have lower acceptability than commercially available cigarettes, and these trials have encountered problems with nonadherence and study dropout rates of 25 to 45%. Combining VLNCs with nicotine patches may aid the transition to VLNCs and increase compliance, but doing so was not found to improve long-term quit rates.
If the nicotine content in all cigarettes were reduced to make them less addictive, either through federal regulation or by the tobacco industry’s own initiative, then problems with adherence and attrition could be less of an issue and long-term cessation rates could be higher.

A series of laboratory and experimental studies have tested VLNCs with smokers with mental illness and substance use disorders, finding VLNCs less satisfying than usual-brand cigarettes and leading to reduced smoking while decreasing craving, withdrawal, and depressive symptoms, without leading to compensatory smoking. In one study that found negative cognitive performance associated with VLNCs, use of the nicotine patch reversed the decrements. The findings support an FDA-mandated reduction in the nicotine content of cigarettes to a minimally addictive level to reduce cigarette use among smokers with mental illness. The Family Smoking Prevention and Tobacco Control Act bars the FDA from completely removing nicotine from cigarettes. The FDA, however, is allowed to reduce the amount of nicotine in cigarettes to very low levels. In July 2017, the FDA indicated that it would issue an Advance Notice of Proposed Rulemaking to seek input on the potential public health benefits and any possible adverse effects of lowering the nicotine content of cigarettes. The process of review continues. The WHO emphasizes that a nicotine reduction strategy ought to cover all combustible tobacco products, not just cigarettes; include provision of tobacco cessation treatment; and consider toxicant exposures from switching to noncombustible forms of tobacco to sustain nicotine intake, and for what duration. Substance use disorders and addiction represent a global public health problem with substantial socioeconomic implications. In 2010, 147.5 million cases of alcohol and drug abuse were reported, and SUD prevalence is expected to increase over time. Genetic factors have been implicated in SUD etiology, with genes involved in the regulation of several neurobiological systems found to be important. However, limitations intrinsic to most genetic epidemiological studies support the search for additional risk genes.

Attention-deficit/hyperactivity disorder (ADHD), the most common neurodevelopmental behavioral disorder, is frequently co-morbid with disruptive behaviors such as oppositional defiant disorder (ODD), conduct disorder (CD), and SUD. The close association between ADHD and disruptive behaviors is summarized by longitudinal observations in ADHD cohorts. Children diagnosed with ADHD monitored during the transition into adolescence exhibit higher rates of alcohol, tobacco, and psychoactive drug use than control groups of children without ADHD. It has been estimated that the lifetime risk for SUD is ~50% in subjects with childhood ADHD persisting into adulthood. Reciprocally, the prevalence of ADHD is high in adolescents with SUD, and the presence of an ADHD diagnosis affects SUD prognosis, with ADHD being associated with both earlier and more frequent alcohol-related relapses and lower likelihood of cannabis-dependence treatment completion. Strong evidence from family, twin, and genome-wide linkage and association studies suggests that genetic factors play a crucial role in shaping susceptibility to both ADHD and SUD. During the last 15 years, we have collected families clustering individuals affected with ADHD and disruptive behaviors from disparate regions around the world. Although the prevalence of ADHD co-morbid with disruptive behaviors is variable across populations, we found a higher frequency of CD, ODD, and SUD in ADHD individuals than in unaffected relatives. Characterization of the association between ADHD and ADGRL3 has provided key information to better predict the severity of ADHD, the long-term outcome, the patterns of brain metabolism, and the response to stimulant medication. To the best of our knowledge, the ADGRL3 linkage and association results represent some of the most robustly replicated genetic and pharmacogenetic findings in ADHD genetic research.
While ADGRL3 has also shown association with disruptive behaviors in the context of ADHD, a direct link to SUD has not been systematically investigated. In this manuscript we tested the hypothesis that ADHD risk variants harbored at the ADGRL3 locus interact with clinical, demographic, and environmental variables associated with SUD. This population isolate is unique in that it was used to identify ADHD susceptibility genes by linkage and association strategies. The sample consists of 1176 people, mean age 28 ± 17 years, ascertained from 18 extended multi-generational and 136 nuclear Paisa families inhabiting the Medellin metropolitan area in the State of Antioquia, Colombia. Initial coded pedigrees were obtained through a fixed sampling scheme from a parent or grandparent of an index proband after having collected written informed consent from all subjects or their parent/guardian, as approved by the University of Antioquia and the NIH Ethics Committees, and in accordance with the Helsinki Declaration. Patients were recruited under NHGRI protocol 00-HG-0058. Exclusion criteria for ADHD participants were IQ < 80 or any autistic or psychotic disorders. Parents underwent a full psychiatric structured interview regarding their offspring. All adult participants were assessed using the Composite International Diagnostic Interview, as well as the Disruptive Behavior Disorders module from the DICA-IV-P modified for retrospective use. The interview was conducted by a “blind” rater at the Neurosciences Clinic of the University of Antioquia or during home visits. ADHD status was defined by the best-estimate method.
Specific information regarding clinical diagnoses and co-morbid disruptive disorders, affective disorders, anxiety, and substance use has been published elsewhere. The ADHD sample consisted of 670 adult ADHD patients, mean age 33 ± 10 years, 69% males, recruited and evaluated at the Psychiatry Department of the Hospital Universitari Vall d’Hebron according to DSM-IV-TR criteria.

ADHD diagnosis was based on the Spanish version of the Conners Adult ADHD Diagnostic Interview for DSM-IV. Comorbidity was assessed by the Structured Clinical Interview for DSM-IV Axis I and Axis II Disorders. ODD during childhood and adolescence was retrospectively evaluated with the Schedule for Affective Disorders and Schizophrenia for School-Age Children, Present and Lifetime Version. Thirty-nine percent of ADHD patients fulfilled diagnostic criteria for SUD, 21% for disruptive behavior disorders, 21% for depression, 13% for anxiety, and 8% for phobias. The level of impairment was measured with the Clinical Global Impression included in the CAADID Part II and the Sheehan Disability Inventory. Exclusion criteria for ADHD patients were IQ < 80; pervasive developmental disorders; schizophrenia or other psychotic disorders; presence of mood, anxiety, or personality disorders that might explain ADHD symptoms; birth weight ≤ 1.5 kg; and other neurological or systemic disorders that might explain ADHD symptoms. The SUD sample consisted of 494 adults recruited and evaluated at the Addiction and Dual Diagnosis Unit of the Psychiatry Department at the Hospital Universitari Vall d’Hebron with the Structured Clinical Interview for DSM-IV Axis I Disorders. All patients fulfilled DSM-IV criteria for drug dependence beyond nicotine dependence. None were evaluated for ADHD. The control sample consisted of 483 blood donors in which DSM-IV lifetime ADHD symptomatology was excluded under the following criteria: not having been diagnosed with ADHD and answering negatively to the lifetime presence of the following DSM-IV ADHD symptoms: often has trouble keeping attention on tasks, often loses things needed for tasks, often fidgets with hands or feet or squirms in seat, and often gets up from seat when remaining in seat is expected. Individuals affected with SUD were excluded from this sample. None of them had self-administered drugs intravenously.
It is important to mention that the exposure criterion was not applied; therefore, this set cannot be classified as “pure” controls. All patients and controls were Spanish of Caucasian descent. This study was approved by the ethics committee of the Hospital Universitari Vall d’Hebron, and informed consent was obtained from all subjects in accordance with the Helsinki Declaration. The Multimodal Treatment Study of Children with ADHD (MTA) was designed to evaluate the relative efficacy of treatments for childhood ADHD, combined subtype, in a 14-month randomized controlled trial of 579 children assigned to four treatment groups: medication management, behavior modification, their combination, and treatment as usual in community care. After the 14-month treatment-by-protocol phase, the MTA continued as a naturalistic follow-up in which self-selected use of psychoactive medication was monitored. A local normative comparison group of 289 randomly selected classmates, group-matched for grade and sex, was added when the ADHD participants were between 9 and 12 years of age. The outcomes in childhood, adolescence, and adulthood have been reported. Substance use was assessed with a child/adolescent-reported questionnaire adapted for the MTA. The measure included items for lifetime and current use of alcohol, cigarettes, tobacco, cannabis, and other recreational drugs. Also included were items for non-prescribed use or misuse of psychoactive medications, including stimulants. The measure was modeled after similar substance use measures in longitudinal or national survey studies of alcohol and other drug use that also rely on confidential youth self-report as the best source of data. A National Institutes of Health Certificate of Confidentiality further strengthened the assurance of privacy.
Substance use was coded positive if any of the following behaviors, selected after examining distributions, were endorsed as occurring in the participant’s lifetime up to 8 years post-baseline: alcohol consumption more than five times or drunk at least once; cigarette smoking or tobacco chewing more than a few times; cannabis use more than once; or use of inhalants, hallucinogens, cocaine, or any of amphetamines/stimulants, barbiturates/sedatives, and opioids/narcotics without a prescription, or misuse of a prescription.
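The lifetime coding rule above can be sketched as a simple predicate. This is a hedged illustration, not the study's actual code: the record keys are hypothetical stand-ins for the MTA questionnaire items, while the thresholds follow the text.

```python
# Sketch of the MTA lifetime substance-use coding rule described above.
# Record keys are hypothetical; thresholds mirror the criteria in the text.

def substance_use_positive(record: dict) -> bool:
    """Code lifetime substance use positive if any listed behavior is endorsed."""
    return (
        record.get("alcohol_times", 0) > 5            # alcohol more than five times
        or record.get("ever_drunk", False)            # or drunk at least once
        or record.get("cig_or_chew_more_than_few", False)
        or record.get("cannabis_times", 0) > 1        # cannabis more than once
        # inhalants, hallucinogens, cocaine, or non-prescribed/misused
        # stimulants, sedatives, or opioids:
        or record.get("other_drug_use", False)
    )

assert substance_use_positive({"alcohol_times": 6})
assert not substance_use_positive({"alcohol_times": 3, "cannabis_times": 1})
```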

Patient education should thus be included in an interdisciplinary pain-management strategy

Antibodies targeting BDNF reduced pain-like behavior in rat and mouse models of neuropathic pain. In rat models of OA, intra-articular BDNF injection exacerbated pain behavior, whereas sequestration of BDNF with TrkB-Fc antibodies reversed pain. These results further indicate the contribution of the BDNF/TrkB pathway to chronic pain and its potential as a therapeutic target. Finally, the latest emerging target for pain is the gut microbiota. These microbes may modulate inflammatory response–associated pain in both the PNS and CNS and thus offer numerous therapeutic targets for chronic pain. Therefore, chronic pain management requires multiple treatment targets. Yaksh and colleagues have summarized other potential regimens. Pain management should thus involve a multidisciplinary approach and vision, combining pharmacological therapies with nonpharmacological and self-management strategies. Current pharmacological management of chronic pain is mostly symptomatic, not disease-modifying, and shows only limited efficacy and many adverse effects. A common finding is the low effect sizes of all monomodal treatment strategies, irrespective of medical, psychological, or physiotherapeutic approaches. New treatment strategies are urgently needed. At the same time, risk factors for the development of chronic pain are often ignored. In this review, we focused specifically on therapeutic strategies involving neuropeptide mediators of neurogenic inflammation. Research has targeted inhibiting neuropeptides such as CGRP, SP, and NGF or their receptors, with varying degrees of success. As several molecules come into action during neurogenic inflammation and chronic pain, redundancy in these molecules can limit the action of targeted treatments.

Pharmacological regimens, strategies to modify risk factors, and investment in prevention are of paramount importance. In addition, identification of new biomarkers could promote the development of new analgesics. Finally, although most countries offer limited multimodal and interdisciplinary care for chronic pain, the healthcare system should encourage a holistic and collaborative approach to providing better care to patients suffering from chronic pain. Perseverance in research, education, and advocacy are the main instruments to leverage in improving management for millions of patients with chronic pain. Prior to the start of the 2014 legislative session, there were a number of positive signs related to the health of the state’s economy. Utah typically grows more rapidly than the nation after a recession, and that pattern held after the Great Recession. In 2013, U.S. employment grew at 1.6 percent compared to 3.3 percent for Utah. The state’s unemployment rate also improved to 4.8 percent, down from 5.7 percent in 2012. Additional bright spots for the state included improvements in personal income, growth in construction, and increases in taxable sales. Total personal income in 2013 was estimated to be $105.2 billion. Utah’s estimated 2013 per capita income was $36,308, up 2.5 percent from 2012. Going into 2014, as the economy continued to recover, personal income was expected to increase by 5.3 percent. The construction industry also showed continued improvement since the recession. The value of permit-authorized construction was estimated at $4.7 billion in 2014. Total taxable sales were estimated to be $49.78 billion in 2013. Taxable sales include retail trade, business investment and utilities, and taxable services. Overall, the economy in 2013 had improved and was expected to continue to improve in 2014, all of which had an impact on the revenue estimates that were used during the 2014 session.
In 2014 the Utah Legislature implemented a significant process change for budgeting.

They established the first week of the session to be the “Base Budget Week,” during which only appropriation committee meetings would be held. All other committees are now delayed for seven days. During the Base Budget Week, legislators scrutinize the base budgets while saving the allocation of new revenue for later in the session. Establishing the Base Budget Week involved more work for the legislative analyst’s staff up front, but it was valuable for both legislators and the staff through more effective use of time for legislators and increased efficiencies for the staff. A time analysis of previous sessions determined that several legislative standing committee meetings were cancelled or the time was not well used. This year, legislators and drafting attorneys had another week to prepare legislation, resulting in more bills done by the time standing committees started meeting. The budget week change also resulted in high engagement from legislators in the budgeting process, as subcommittees led the process, not just leadership. Another reason for front-loading the budget process was to improve efficiencies for staff. The analyst’s office likes to measure outcomes. In reviewing staff time, they found that the analyst’s office gets slammed with fiscal notes around day 20, when legislators have to have bills numbered or abandon them. By moving the base budgeting to the first week of the session, fiscal analysts weren’t staffing subcommittees when the large number of bills came all at one time. The office had a goal of 95 percent of fiscal notes on time and had only achieved that goal in 2013. In 2014, with the budget schedule change and some new technology, the office was on time 99.5 percent of the time. The result of the Base Budget Week was identifying $70 million in offsets within base budgets.

A few notable adjustments in the 2014 session had an impact on the budgeting process. HB 311 requires the Legislative Fiscal Analyst and Governor’s Office of Management and Budget to produce 15-year trend analyses alongside traditional point-in-time revenue estimates. HJR 11 asks that legislators consider using above-trend revenue for one-time costs like buildings and roads, debt reduction, or rainy day deposits. The legislature also recognized unfunded liabilities in two areas and addressed those with legislation. The first is post-employment benefits. SB 10 funded a new 401(k) benefit that ends defined OPEB benefits for state employees. The second area is leave time for state employees. SB 269 requires a new Annual Leave II program for state employees: annual leave costs will be sunk at time of accrual rather than at time of use or separation from employment, addressing an $85 million unfunded liability. Just prior to the 2014 General Session, the legislature held the first-of-its-kind long-term policy and budget planning conference. The 1st Biennial Legislative Policy Summit, hosted by the University of Utah’s David Eccles School of Business, focused on how underlying economic and demographic changes will influence Utah’s future public policy environment. Members from both the House and Senate met together for a full day to look beyond the two-year election cycle and talk about infrastructure, economic, and education policy modifications necessary to meet Utah’s changing socioeconomic make-up. Future conferences were assured by the passage of House Joint Resolution 10, “Joint Rules Resolution Regarding a Long-Term Planning Conference,” during the 2014 General Session.
Tobacco dependence is particularly concerning in adolescence, when the developing brain is especially vulnerable and dependence symptoms may arise after minimal exposure. Presently, electronic cigarettes (e-cigarettes), which provide pulmonary nicotine and therefore possess high dependence potential, are the most popular tobacco product among US youth. The occurrence of e-cigarette dependence symptoms and their association with nicotine exposure have been documented; however, foundational evidence on the symptom presentation, prevalence, and subgroups at elevated risk of e-cigarette dependence among youth is lacking, as is information on its association with future e-cigarette use, to our knowledge. The objective of this prospective cohort study was to examine the prevalence and symptom expression of e-cigarette dependence and its association with future e-cigarette use among past-year e-cigarette users aged 16 to 18 years in Southern California. Specifically, we examined similarities and differences in prevalence and symptom expression between e-cigarette dependence and combustible cigarette dependence among youth, the prevalence of e-cigarette dependence symptoms stratified by subgroups presumed to be at elevated risk, and associations of baseline e-cigarette dependence with subsequent vaping continuation, frequency, and intensity patterns 6 months later. This study aims to provide foundational descriptive evidence on the expression and progression of e-cigarette dependence as a potential presentation of tobacco use disorder among youth.

Such evidence could help establish whether e-cigarette dependence is a health outcome of e-cigarette use that should be considered in federal regulatory decisions that weigh the relative harms and benefits of e-cigarettes. Data were drawn from the Happiness & Health Study, a prospective cohort study of behavioral health. All ninth-grade students in 10 participating public high schools in Los Angeles County, California, in 2013 were eligible. Semiannual in-classroom assessments were administered from 2013 to 2017. Students who were not in class on survey days completed abbreviated surveys, which excluded dependence measures. Data on e-cigarette dependence were first collected in the fall 12th-grade survey in 2016, which was considered baseline; follow-up data were collected in the spring of 12th grade, approximately 6 months later, in 2017. The University of Southern California Institutional Review Board approved this study. Participants provided active assent, and a parent or legal guardian provided written or verbal informed consent prior to study enrollment. This study is reported according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline. At baseline, tobacco product dependence symptoms were measured using the Hooked on Nicotine Checklist, which was originally developed to measure combustible cigarette dependence and has demonstrated adequate psychometric properties. Students reported whether they had ever experienced each of 10 dependence symptoms for e-cigarettes and combustible cigarettes separately. Electronic cigarette and combustible cigarette dependence items were identically worded, except for substitution of e-cigarette and vaping terms for cigarette and smoking.
Endorsing 1 or more symptoms indicated that the participant screened positive for dependence. Students presenting 2 or more or 3 or more total symptoms of e-cigarette or combustible cigarette dependence were also classified. At baseline, use of e-cigarettes with nicotine and e-cigarettes without nicotine or with cannabis oil was measured with yes-or-no questions. Affirmative responses to either or both questions over the past year were used to classify past-year any e-cigarette use, which was necessary for sample inclusion. Past-month vaping and past-year vaping of e-cigarettes that contained nicotine were also assessed. We assessed baseline past-year combustible cigarette smoking using items from previously derived epidemiologic surveys. Vaping continuation was operationalized as any past 6-month use of e-cigarettes at follow-up. Additional survey items assessed past 30-day nicotine vaping frequency and intensity, including number of nicotine vaping sessions per vaping day and puffs per nicotine vaping session. We collected several additional measures to describe the sample and assess risk factors for e-cigarette dependence to be included in an e-cigarette propensity score covariate. These variables may have also influenced e-cigarette use progression patterns and therefore confounded associations between e-cigarette dependence and future use. Participants reported age at e-cigarette use initiation.
The following tobacco product use characteristics were measured: past 30-day number of days smoked cigarettes, cigarettes smoked per day on smoking days, and ever use of cigars, hookah, or smokeless tobacco. Participants reported ever use of alcohol, combustible cannabis, vaporized cannabis, or other drugs using questions derived from previously validated items. To assess a potential association of mental health with e-cigarette dependence and future use, we measured symptoms of major depressive disorder, generalized anxiety disorder, social phobia, panic disorder, obsessive-compulsive disorder, manic symptoms, attention-deficit/hyperactivity disorder, and conduct disorder. Additionally, age, sex, self-reported race/ethnicity, and highest level of parental education were surveyed, per past work. After descriptive analyses, we reported prevalence of e-cigarette dependence and prevalence of the 10 specific dependence symptoms among all past-year e-cigarette users and by combustible cigarette use status. Among past-year e-cigarette and combustible cigarette dual users, McNemar tests for within-participant comparisons were used to conduct cross–tobacco product comparisons of e-cigarette dependence, combustible cigarette dependence, and individual symptoms. Prevalence of e-cigarette dependence symptoms was compared across binary sub-classifications of past-year vaping of e-cigarettes with nicotine, past-month vaping, and past-year combustible cigarette and e-cigarette dual use using χ2 tests. For descriptive data, prevalence of meeting 2 or more or 3 or more symptom thresholds was also reported. The prospective association of baseline e-cigarette dependence symptom status with nicotine vaping status at follow-up was tested using binary logistic regression. Prospective associations of baseline e-cigarette dependence symptom status with nicotine vaping frequency and intensity at follow-up were tested using negative binomial regression models.
All regressions included baseline status of each respective outcome as a covariate.
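As an illustration of the within-participant comparison named above, the McNemar test depends only on the two discordant cells of the paired 2×2 table (participants positive for one product's dependence but not the other). A minimal sketch with made-up counts, not study data:

```python
# McNemar chi-square statistic from the discordant cells of a paired 2x2 table:
# b = dependent on e-cigarettes only, c = dependent on combustible cigarettes only.

def mcnemar_statistic(b: int, c: int) -> float:
    """Chi-square statistic (1 df) for paired binary outcomes."""
    if b + c == 0:
        return 0.0
    return (b - c) ** 2 / (b + c)

# Illustrative (made-up) discordant counts among dual users:
stat = mcnemar_statistic(30, 12)
assert abs(stat - (30 - 12) ** 2 / 42) < 1e-9
assert stat > 3.84  # exceeds the 0.05 critical value for 1 df
```

In practice an exact binomial variant is preferred when the discordant counts are small; the chi-square form shown here is the classic large-sample version.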

Mutant mice that are deficient in CB1 receptors eat less than wild-type controls

Preclinical studies have made a convincing case for the efficacy of cannabinoid agents not only in experimental brain ischemia, but also in models of Parkinson’s disease and other forms of degenerative brain disorders. Also highlighted during the conference were various derivatives of cannabidiol. Particularly interesting in this regard was the compound 7-hydroxy-4′-dimethylheptyl-cannabidiol (7-OH-DMH-CBD), a hydroxylated, dimethylheptylated cannabidiol structurally related to HU-210. Like Δ9-THC, 7-OH-DMH-CBD is a potent inhibitor of electrically evoked contractions in the mouse vas deferens. However, 7-OH-DMH-CBD does not significantly bind to either CB1 or CB2 receptors, and its inhibitory effects on muscle contractility are not blocked by CB1 or CB2 receptor antagonists, suggesting that the compound may target an as-yet-uncharacterized cannabinoid-like receptor. This hypothesis is reinforced by pharmacological experiments, which suggest that 7-OH-DMH-CBD displays anti-inflammatory and intestinal-relaxing properties but does not exert overt psychoactive effects in mice. However, the nature of this hypothetical receptor and its relationship to other cannabinoid-like sites in the vasculature and in the brain hippocampus remains to be determined. A large number of pharmaceutical companies have started active CB1 antagonist programs, mostly as a result of the clinical success of SR141716A (rimonabant), the first CB1 antagonist to be developed. This molecule has successfully completed Phase III studies and is anticipated to become available within a year for the treatment of obesity and tobacco addiction. Rimonabant is a CB1 inverse agonist with a Ki of 11 nM at CB1 receptors and 1640 nM at CB2. Additional agents currently in development include SLV-326 and LY320135.

However, all of these compounds are inverse agonists. Examples of this class are the compounds O-2654 and AM5171. As noted above, therapeutic areas for cannabinoid antagonists include obesity, drug addiction, and perhaps CNS disorders. The mechanism by which cannabinoid antagonists exert their anti-obesity effects is still not fully understood. First, there is a loss of appetite. Second, there is an increase in metabolic rate and a loss of fat mass. These effects may be linked, on the one hand, to the ability of rimonabant to affect corticotropin-releasing hormone (CRH), as suggested by the fact that CB1 receptors colocalize with CRH receptors in the hypothalamus. This may be significant for explaining the drug’s effects on appetite drive, as it is known that CRH is anorexigenic. On the other hand, mice that lack CB1 receptors display a hyperactivity of the hypothalamic-pituitary-adrenal axis, with increases in both ACTH and corticosterone. This phenotype may be important in regard to overall metabolic rate. Another possible mediator of the long-lasting effect on body weight reduction unrelated to altered food intake is the adipocyte, because CB1 receptor activation causes lipogenesis, which is blocked by rimonabant. CB1 cannabinoid receptors are present on the cell surface of neurons within the brain reward circuitry. Furthermore, endocannabinoids may be released from dopamine neurons in the ventral tegmental area (VTA) and from medium spiny neurons in the nucleus accumbens of the brain reward circuit. Additionally, endocannabinoids and Δ9-THC activate CB1 receptors and by doing so regulate reward strength and drug craving. Though we do not know how this occurs, it is likely that these mechanisms extend to all drugs of abuse, because collectively these drugs show the propensity to increase VTA dopamine neuron activity, which might be coupled to augmented endocannabinoid production from the dopamine neurons themselves.

Finally, cannabinoid receptor antagonists block the effects of endocannabinoids in these reward circuits. Preclinical work shows that priming injections of cannabinoid agonists reinstate heroin-seeking behavior after a prolonged period of abstinence in rats trained to self-administer heroin. The cannabinoid antagonist rimonabant fully prevents heroin-induced reinstatement of heroin-seeking behavior. Additionally, rimonabant significantly attenuates cannabinoid-induced reinstatement of heroin-seeking behavior. All these findings clearly support the hypothesis of a functional interaction between opioid and cannabinoid systems in the neurobiological mechanisms of relapse and might suggest a potential clinical use of cannabinoid antagonists for preventing relapse to heroin abuse. It has also been shown that cannabinoid antagonists can prevent drug reinstatement with cocaine, alcohol, and nicotine. Thus, it seems that the future of cannabinoid antagonists in substance abuse treatment is particularly promising, especially in the clinical setting, where polydrug abuse is far more common than isolated single-drug abuse. The available data suggest that CB1 antagonism produces relatively mild side effects in people. Yet several potential risks were discussed, and three in particular received a great deal of attention. First, the possibility of neuropsychiatric sequelae, such as anhedonia and anxiety: preclinical studies have consistently shown such effects in animals, though they have not yet been observed in the clinic. Second, pain and hyperalgesia, because of the pervasive role played by the endocannabinoid system in the control of pain processing. Last, hypertension, as indicated by the contribution of the endocannabinoids to blood pressure regulation and the pressor effects of rimonabant in animal models of hypertension. The endocannabinoid signaling system differs from classical neurotransmitter systems, picking up where classical neurotransmitters leave off.

That is, the activation of receptors initiates a series of chemical events that leads to the release of endocannabinoids from the postsynaptic spine, the final step of which is the enzymatic production and subsequent release of anandamide and/or 2-AG. Once released, the endocannabinoids are then directed to the presynaptic cell, and the CB1 receptor responds by inhibiting further release of that cell’s neurotransmitters. The termination of this cascade is accomplished via a transporter that internalizes the endocannabinoids, after which intracellular enzymes such as fatty-acid amide hydrolase (FAAH) break them down. There is a general consensus that endocannabinoids are transported into cells via a facilitated diffusion mechanism. This process may differ both kinetically and pharmacologically from cell to cell. In brain neurons, endocannabinoid transport is blocked by certain agents, which include the compounds AM404, OMDM-8, and AM1172. However, the pharmacological properties of these drugs in vivo are only partially understood. Once inside cells, endocannabinoids are hydrolyzed by three principal enzyme systems. FAAH is a key enzyme of anandamide deactivation in the brain. Potent and selective FAAH inhibitors have been developed and shown to exert profound antianxiety and antihypertensive effects in animals. The latter effects were discussed at length at the workshop, highlighting the important role of anandamide in two important examples of vascular allostasis: shock and hypertension. In addition to FAAH, another amide hydrolase has recently been characterized, which may participate in the degradation of anandamide and other fatty-acid ethanolamides such as oleoylethanolamine. This amidase prefers acid pH values and has a different tissue distribution than FAAH, being notably high in lung, spleen, and inflammatory cells. Inhibitors of this enzyme are being developed.
Finally, 2-AG is hydrolyzed by an enzymatic system separate from FAAH, which probably involves a monoacylglycerol lipase recently cloned from the rat brain. Inhibitors of this enzyme are currently under development. What are the therapeutic advantages and drawbacks of using a direct agonist vs. an indirect agonist? Several parallels can be drawn to the well-known selective serotonin reuptake inhibitors (SSRIs), which have shown such powerful and useful therapeutic applications in effecting indirect agonism of the serotonergic system. Indeed, there is ample evidence that the pharmacological profiles of indirectly acting agonists can generally be attributed to enhanced selectivity based on more localized action. A prime reason for favoring the indirect agonism approach is the possibility of obtaining new drugs devoid of the psychoactive effects and perceived abuse potential of directly acting CB1 agonists.

If we accept the postulate of on-demand modulation of endocannabinoid signaling as contributing to some disease states, we are likely to witness the development of more specific medications acting indirectly, such as inhibitors of cannabinoid uptake or breakdown.

In addition to producing a well-described series of somatic effects – such as decreased motor activity, increased feeding, and analgesia – CB1 cannabinoid receptors also appear to play important, albeit complex, roles in neuropsychiatric disease. Emerging evidence indicates that modulation of CB1 receptor signaling may be useful for the treatment of several mental disorders, such as depression, anxiety, and addiction. This review will focus on the literature suggesting a role for modulation of endogenous cannabinoid signaling in the treatment of depression. Excellent reviews on the contribution of the endocannabinoids to anxiety and addiction have been recently published.

Depression is a psychiatric disorder characterized in humans by the core symptoms of depressed mood and/or loss of pleasure or interest in most activities. Other characteristics include, but are not limited to, changes in body weight, sleeping patterns, psychomotor behavior, energy level, and cognitive functioning. The overlap between the physiological functions altered by depression and those affected by cannabinoid receptor signaling is striking, and suggests that activation of this system may have important effects on the regulation of mood disorders. In fact, prolonged cannabis consumption and cannabis withdrawal in people are often associated with depression, but whether marijuana use contributes to the development of this disorder is still a matter of debate. These considerations have prompted numerous researchers to investigate the endocannabinoid system as it relates to depression and mood disorders.
There is now persuasive evidence from several areas of research, outlined in this article, which suggests a role for the endocannabinoid system in the normal regulation of mood, as well as in the pathogenesis and treatment of depression and other stress-related disorders. First, studies of both animals and humans suggest that alterations in endocannabinoid signaling may participate in depression-related behaviors. Moreover, direct modulation of cannabinoid CB1 receptor signaling, by natural or synthetic agonists as well as antagonists, can produce effects on stress responses and mood-related behavior. Finally, several enzymes responsible for the metabolism of endocannabinoids have been identified, leading to the development of drugs that indirectly enhance cannabinoid receptor signaling by blocking endocannabinoid deactivation. These pharmacological tools have substantiated the notion that augmentation of endogenous cannabinoid signaling may promote stress-coping behavior, both under normal and pathophysiological conditions. Together, the evidence indicates that the endogenous cannabinoid system is a modulator of mood states and a promising target for the treatment of stress-related mood disorders such as depression. The best-characterized endogenous cannabinoid ligands, arachidonoylethanolamide (anandamide) and sn-2-arachidonoylglycerol (2-AG), are produced in an activity-dependent manner and appear to locally modulate synaptic transmission in the nervous system via presynaptic activation of the Gαi/o-protein-coupled cannabinoid CB1 receptor. Anandamide and 2-AG also bind to and activate the Gαi/o-protein-coupled cannabinoid CB2 receptor, but the possible roles of this receptor in the central nervous system are only beginning to be understood. The pattern of distribution of CB1 receptors reflects the proposed roles for this system in the modulation of pain perception, affective states, stress responses, motor activity, and cognitive functioning.
CB1 is found at highest concentrations in the hippocampus, basal ganglia, neocortex, cerebellum and anterior olfactory nucleus. Moderate levels of the receptor are also present in the basolateral amygdala, hypothalamus, and midbrain periaqueductal gray. Initially, the CB2 receptor was found to be localized predominantly in peripheral tissues, particularly in immune cells, but recent articles have reported CB2 mRNA expression in the brainstem and CB2 immunohistochemical staining throughout the brain. Unlike many traditional neurotransmitters, the endocannabinoid ligands are lipid-derived amphipathic messengers that are not stored in vesicles. Rather, they appear to be produced from precursor components within the cellular membrane. In the best-characterized synthesis pathway, the anandamide precursor, N-arachidonoyl-phosphatidylethanolamine (NAPE), is formed by an N-acyltransferase-catalyzed transfer of an arachidonic acid moiety from the sn-1 position of phosphatidylcholine to the amine group of phosphatidylethanolamine. NAPEs are then cleaved by a NAPE-specific phospholipase D, an isoform of which has recently been cloned, to produce anandamide. Alternatively, NAPEs can be hydrolyzed by a phospholipase C enzyme to generate phosphoanandamide, which is then dephosphorylated by a phosphatase, such as the protein tyrosine phosphatase PTPN22, to yield anandamide. The biological deactivation of anandamide is likely a two-step process, whereby the lipid mediator is transported into cells by a presently uncharacterized entity, and then hydrolyzed by the membrane-bound enzyme fatty-acid amide hydrolase to form ethanolamine and arachidonic acid. Two main biochemical pathways exist which can potentially generate 2-AG. The 2-AG precursor, 1,2-diacyl-sn-glycerol (DAG), can be formed from phosphoinositides such as phosphatidylinositol-4,5-bisphosphate by the action of a PI-specific PLC. Two isoforms, α and β, of the enzyme diacylglycerol lipase have been shown to form 2-AG from DAG.
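The biosynthetic routes just described can be collected into a small reaction table. The Python sketch below does exactly that: substrate, enzyme and product names are taken from the text, while the table structure itself is only an organizational device, not a model of enzyme kinetics or regulation.

```python
# Reaction table summarizing the biosynthetic routes described in the text.
# Each entry is (substrate, enzyme, product); the anandamide list includes
# both the NAPE-PLD route and the alternative PLC/phosphatase branch.

PATHWAYS = {
    "anandamide": [
        ("PC + PE", "N-acyltransferase", "NAPE"),
        ("NAPE", "NAPE-specific phospholipase D", "anandamide"),
        # alternative branch:
        ("NAPE", "phospholipase C", "phosphoanandamide"),
        ("phosphoanandamide", "phosphatase (e.g. PTPN22)", "anandamide"),
    ],
    "2-AG": [
        ("PIP2", "PI-specific phospholipase C", "DAG"),
        ("DAG", "diacylglycerol lipase (alpha or beta)", "2-AG"),
    ],
}

def routes_to(product: str) -> list[str]:
    """Render each reaction step leading to the given ligand."""
    return [f"{s} --[{e}]--> {p}" for s, e, p in PATHWAYS[product]]

print("\n".join(routes_to("2-AG")))
```

Laying the steps out this way makes the contrast in the text explicit: anandamide has two converging routes from the common NAPE precursor, whereas 2-AG is produced by a single two-step PLC/DGL sequence.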