
Our case series demonstrates some of the severe effects these novel compounds can cause

We categorized deductible limits into three levels, as in prior research and based on the definition of high deductibles and benefit plans available at KPNC during this period. Since deductible limits may change over time, we used the minimum level over each 6-month time window during follow-up. We imputed missing deductible levels during a given 6-month window with the last known value during the follow-up period, and we dropped patients with no known deductible limit during the entire 36 months of follow-up from the analysis. Coverage mechanism included enrollment via the California Exchange vs. other mechanisms. We summarized utilization data into 6-month intervals, and we examined trends in health service utilization over the 36 months after patients received an SUD diagnosis with Chi-squared tests using 6-month intervals. Using multivariable logistic regression, we examined associations between deductible limits, enrollment via the California ACA Exchange, membership duration, and psychiatric comorbidity, and the likelihood of utilizing health services in the 36-month follow-up period, controlling for patient demographic characteristics and chronic medical comorbidity. We also evaluated whether enrollment via the California ACA Exchange moderated the associations between deductible limits and the likelihood of utilization by adding interaction terms to the multivariable models. We estimated the associations with deductible limits for each enrollment mechanism by constructing hypothesis tests and confidence intervals on linear combinations of the regression coefficients from these models. To account for correlation between repeated measures, we used generalized estimating equations. We censored patients at a given 6-month interval if they were not members of KPNC during that time.
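The deductible-level preprocessing described above (minimum level per 6-month window, last-observation-carried-forward imputation, and exclusion of patients with no known level) can be sketched as follows; the data layout and function name are illustrative, not taken from the study's code:

```python
# Sketch of the deductible-level preprocessing: take the minimum observed
# level within each 6-month window, carry the last known value forward
# into missing windows, and drop patients with no known level at all.
# Levels are coded 0 = no deductible, 1 = medium, 2 = high (illustrative).

def summarize_deductible(levels_by_window):
    """levels_by_window: 6 entries (one per 6-month window over 36 months);
    each entry is a list of observed deductible levels during that window,
    or an empty list if none were observed. Returns 6 imputed levels, or
    None if the patient has no known level in any window (dropped)."""
    # Minimum level per window; None marks a window with no observation.
    summarized = [min(w) if w else None for w in levels_by_window]
    if all(v is None for v in summarized):
        return None  # no known deductible limit during follow-up: exclude
    last = None
    imputed = []
    for v in summarized:
        if v is None:
            v = last  # last observation carried forward (leading gaps stay None)
        imputed.append(v)
        last = v
    return imputed
```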
We conducted a sensitivity analysis to determine whether high utilizers leaving the health system influenced the observed pattern of decreased utilization from the 0–6 month to the 6–12 month follow-up periods.
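A minimal sketch of this sensitivity comparison, implemented as a Pearson chi-squared test on a 2×2 retention-by-utilization table; the function names and the 1-df p-value helper are illustrative (the study's actual statistical software is not identified in the text):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (1 df, no continuity correction) for
    the table [[a, b], [c, d]] (rows: retained / disenrolled at 6-12 months;
    columns: used the service at 0-6 months / did not)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def chi2_p_1df(x):
    """Upper-tail probability of a chi-squared variable with 1 df:
    P(X > x) = erfc(sqrt(x / 2))."""
    return math.erfc(math.sqrt(x / 2.0))
```

With identical utilization in both groups the statistic is 0; a statistic above roughly 3.84 corresponds to p < 0.05 at 1 degree of freedom.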

Using Chi-squared tests, we compared utilization during the 0–6 month period between patients who remained in the cohort and patients who disenrolled from KPNC at 6–12 months. We hypothesized that if the censored group had greater utilization than the noncensored group, then there would be evidence of high utilizers leaving the health system. We also conducted Chi-squared tests to determine whether censoring was associated with deductible limits and enrollment mechanisms.

This study examined longitudinal patterns of healthcare utilization among SUD patients and their relationships to key aspects of ACA benefit plans, including enrollment mechanisms and deductible levels. We anticipated that the increase in coverage opportunities that the ACA provided would bring high-utilizing patients into health systems, driving up overall use of healthcare. Consistent with prior studies of SUD treatment samples that have found elevated levels of healthcare utilization either immediately before or after starting SUD treatment, the results of our longitudinal analysis showed that utilization among people with SUDs was highest immediately after initial SUD diagnosis at KPNC and declined to a stable level in subsequent years. This suggests that the initial high utilization may be temporary. Our sensitivity analysis suggested that this result was not due to high utilizers leaving the KPNC healthcare system. This overall trend in utilization is a welcome finding, and consistent with the intent of the ACA to increase access to care; however, the subsequent decrease in utilization could also signify that patients are disengaging from treatment.
Although we cannot specifically attribute the initial levels of utilization to lack of prior insurance coverage, as we did not have data on prior coverage, we found that individuals with fewer than 6 months of membership before receiving an SUD diagnosis were more likely to utilize primary care and specialty SUD treatment than those who had 6–12 months of membership. This suggests that future healthcare reforms that expand insurance coverage for people with SUDs might also lead to short-term increases in utilization for a range of health services. Deductibles are a key area of health policy interest given the growing number of people enrolling in deductible plans post-ACA. As anticipated, higher deductibles had a generally negative association with utilizing healthcare in this population. We found that patients with high deductibles had lower odds of using primary care, psychiatry, inpatient, and ED services than those without deductibles. Additionally, we found the associations between high deductibles and likelihood of utilizing primary care and psychiatry were strongly negative among ACA Exchange enrollees. Although it is somewhat difficult to gauge the clinical significance of these specific results, the strength of the odds ratios for primary care and psychiatry access gives some indication of the potential impact.

The associations of high deductibles with primary care and psychiatry access are worrying given the extent of medical and psychiatric comorbidities among people with SUDs. Although we found more consistent associations for higher deductibles and less healthcare initiation, it is possible that even a modest deductible could deter patients from seeking treatment. From a public policy and health system perspective, the possibility that deductibles could prevent people with SUDs from accessing any needed medical care is a cause for concern. Consistent with prior findings, our results suggest that high deductibles have the potential to dissuade SUD patients from accessing needed health services, and that those who enroll via the ACA Exchange may be more sensitive to them. This could be attributable to greater awareness of coverage terms due to the mandate that exchange websites offer clear, plain-language explanations to compare insurance options. In contrast, high deductibles were associated with a greater relative likelihood of SUD treatment utilization. However, this association existed only among patients who enrolled via mechanisms other than the ACA Exchange. It is possible that individuals with emerging or unrecognized substance use problems may have selected higher-deductible plans at enrollment due to either not anticipating use of SUD treatment, which is often more price-sensitive relative to other medical care, or not being aware of the implications of deductibles. However, once engaged in treatment, individuals with high deductibles may have been motivated to remain there. A contributing factor could also be that such patients were required to remain in treatment by employer or court mandates, which are common and are associated with retention. The varying associations between deductibles and different types of health service utilization by enrollment mechanism highlight the need for future research in this area.
Insurance exchanges provide access to tax credits, a broader range of coverage levels, and information to assist in healthcare planning that might be less easily accessible through other sources of coverage, e.g., through employers. In our sample, Exchange enrollment was associated with greater likelihood of remaining a member of KPNC, did not demonstrate an adverse association with routine care, and was associated with lower ED use. However, primary care and psychiatric services use were similar across enrollment types, even within low and high deductible limits.

Prior studies have found that health plans offered through the ACA Exchange are more likely to have narrow behavioral health networks compared to other non-Exchange plans and primary care networks, which raises concerns about treatment access. For this health system, that concern appears unfounded. Psychiatric comorbidity was associated with greater service use of all types. Several prior studies have also found that patients with psychiatric comorbidity use more health services than those with SUD alone. Similar to our results, a recent study based in California found that after controlling for patient-level characteristics, the strongest predictors of frequent ED use post-ACA included having a diagnosis of a psychiatric disorder or an SUD. While the ACA was not expected to alter this general pattern, the inclusion of mental health treatment as an essential benefit was intended to improve availability of care and to contribute to efforts to reduce unnecessary service utilization. Our investigation confirms the ongoing importance post-ACA of psychiatric comorbidity and suggests that future efforts in behavioral health reform must anticipate high demand for healthcare in this vulnerable clinical population. It is also worth noting that nonwhite patients were less likely to initiate SUD and psychiatry treatment. Race/ethnic disparities in access to care are a longstanding concern in the addiction field. Some expected these disparities to be mitigated post-ACA. Findings on race/ethnic differences are similar to what has been observed in other health systems, although few studies have examined associations post-ACA. One prior study among young adults with SUD and psychiatric conditions post-ACA found modest ethnic disparities in lack of coverage between whites and other ethnic groups, although another study of young people more broadly found larger gains in coverage among Hispanics and Blacks relative to whites.
The race/ethnic disparities in SUD and psychiatry treatment initiation in this cohort, in which overall insurance coverage was not a barrier but specific mechanisms could be, highlight the importance of addressing this complicated challenge to health equity.

This study used a large SUD patient cohort enrolled in health coverage post-ACA and included comprehensive data on diagnoses, insurance coverage, and use of care over three years. KPNC data are well-suited to examining ACA-related changes in health service utilization given the size and diversity of its membership. KPNC’s integrated model is becoming more common as other health plans and federally qualified health centers move toward providing integrated SUD treatment services and using EHRs. However, we should note that this is an observational study based on EHR data. As such, we cannot attribute causal relationships to our findings.

However, we have conducted sensitivity analyses to examine the robustness of our findings in the absence of a randomized clinical trial. These analyses supported our initial findings, e.g., indicating that the decrease in service use over time was not due to high utilizers leaving KPNC. Medicaid expansion has the potential to improve access to SUD treatment, but we were not able to examine its relationship to services in the current analysis due to collinearity with deductible limits. Our study was set in a single nonprofit healthcare delivery system in Northern California, which enabled us to characterize post-ACA patterns of service utilization in depth but did not allow us to compare populations or implementation across systems. Nevertheless, our findings can inform future work on health reform and policy efforts to improve access to healthcare for similar clinically complex patients in other health systems. The ACA provided a critical opportunity to expand access to SUD treatment as well as other important health services for people with SUDs, yet research has rarely examined implementation and subsequent use of care. This study found that in newly enrolled patients with SUDs, health service utilization peaked in the 6 months following an SUD diagnosis and then decreased to a stable level in years 2–3. Among patients with SUDs, deductible limits were generally associated with less health service utilization, which was more pronounced among Exchange enrollees, while psychiatric comorbidity was associated with more use of services.
As modifications to the ACA are considered, it is critical to continue investigating the consequences of health reform policies for people with SUDs, including race/ethnic minorities and those with psychiatric comorbidity.

E-cigarette use, or vaping, among adolescents has become a public health concern, with 26.7% of high school seniors reporting past-month vaping in 2018, and 900,000 middle and high school students reporting daily or near-daily use. Adolescents’ use of e-cigarettes is associated with an increased risk of subsequent cigarette initiation and frequent use, an increased risk of nicotine dependence, and exposure to potentially toxic chemicals. Despite harm reduction claims by e-cigarette companies, in cross-sectional studies, e-cigarette use among adolescent and young adult dual users is associated with smoking a greater number of cigarettes per day, more frequent smoking, and fewer attempts to quit smoking. Notably, little is known about the stability of adolescents’ use of e-cigarettes over time, such as whether non-daily use progresses into daily use and whether daily use is sustained. The potential for harm from exposure to nicotine and toxicants is likely to be greater with sustained and frequent use over time. Study of longitudinal patterns of adolescent e-cigarette use is needed to model the potential for harm from these products. Furthermore, research is needed to articulate adolescent patterns of dual product use over time and the resulting levels of nicotine and toxicant exposure. It remains unclear, for example, whether dual users succeed in reducing and stopping their cigarette use or whether they continue to dual use over time.

Some psychotropic medications may cause sedation or stimulation and thus will also be explored

A recent study reported that the antipsychotic haloperidol was uniquely associated with higher global DNA methylation in patients with schizophrenia, but other antipsychotic drugs were not associated with changes in methylation. A further study observed that antipsychotics may have anti-inflammatory effects. However, as patients with chronic schizophrenia are almost certain to be treated with antipsychotic medications, future studies may only be able to adjust for antipsychotic medications in their analyses rather than eliminate these samples entirely. Our samples were obtained from a brain bank and had limited medication history collected. Although controls did not have schizophrenia, they were not screened for other psychiatric conditions such as depression. Antipsychotic medication use was not screened for among the control subjects, as these medications are not commonly used other than to treat psychosis. Another limitation is cell-type heterogeneity in the frontal cortex brain tissue used in our analysis. Our study did not account for differences in cell type seen in the frontal cortex, and a previous study has shown that the two major cell types, neurons and glia, have different DNA methylation signatures. Statistical methods that estimate brain cell types in gene expression studies and, more recently, in DNA methylation studies could be used in future brain DNA methylation studies. Our data indicate that studies of epigenetic changes in schizophrenia hold promise for the future development of diagnostic and prognostic biomarkers for schizophrenia, as well as therapeutic options that target causative epigenetic alterations. The key is to identify aberrant DNA methylation profiles in a functional tissue and determine if the results can be translated back into a diagnostically feasible tissue such as blood or saliva.

Identifying when DNA methylation changes occur is also important in understanding the origins of schizophrenia. Altered DNA methylation occurs during the critical period of development between pregnancy and birth. If gene–environment factors that affect DNA methylation status can be identified, then the incidence of schizophrenia could possibly be reduced by targeting the environmental triggers. To put together all the pieces of the schizophrenia puzzle, gene–environment interactions, as well as how they influence epigenetics, need to be identified. Over the years, there have been numerous studies on the effects of single-nucleotide polymorphisms on mRNA expression in schizophrenia, but very few showing how single-nucleotide polymorphisms affect gene expression through DNA methylation. Investigating the involvement of single-nucleotide polymorphisms and their interaction with the environment, as well as their influence on epigenetics, will benefit our understanding of the pathophysiology of schizophrenia. The identification of enzymes that are capable of mediating DNA demethylation in mammalian cells as targets for therapeutic intervention is an exciting prospect that may hold the key to reversing this debilitating psychiatric illness.

Disturbed sleep is increasingly investigated as one of the most promising modifiable risk markers for psychotic disorders. It is a widely reported symptom that already tends to manifest in individuals at clinical high risk (CHR). Clinician- and self-described sleep reports in CHR studies are congruent with data derived from objective measures such as polysomnography, actigraphy, magnetic resonance imaging, and sleep electroencephalograms, emphasizing disturbed sleep not only as a prominent phenotype of psychotic illness, but as a potentially important biomarker.

Yet, in the existing literature exploring sleep disturbance prior to overt psychosis onset, several important issues have remained unaddressed. First, while abnormal sleep patterns are known to manifest prior to conversion to psychosis, there is a paucity of evidence regarding the extent to which disturbed sleep independently contributes to conversion risk. Overall, studies have shown that psychosis-risk groups experience a considerable amount of sleep disturbance; however, the two notable attempts to use CHR sleep patterns to predict conversion were limited by the cross-sectional nature of their sleep data and did not find a relationship. Second, although it has been suggested that disturbed sleep is associated with CHR symptoms, the few studies that have explored the specificity of the sub-clinical psychotic symptoms most altered by sleep have been inconsistent. For example, in CHR youth, certain actigraphic measures of sleep were associated with positive symptoms but none were associated with negative symptoms, and in another study, the Structured Interview for Psychosis-risk Syndromes sleep disturbance score was associated with the discrete positive attenuated symptoms of suspiciousness/persecutory ideas, perceptual abnormalities/hallucinations, and disorganized communication. However, in a third CHR study, several sleep variables assessed by the Pittsburgh Sleep Quality Index were associated with more severe negative symptoms and none with positive symptoms. As such, investigations have been discrepant, and it is unclear whether the observed associations remain stable over time. Expanding upon studies that assessed sleep cross-sectionally, here we examine associations between sleep and CHR symptoms at multiple time points in at-risk cases and controls, a design used in only a few prior studies. Third, it has not yet been established which sleep characteristics are most implicated in CHR symptomatology.

In contrast to the studies that used non-specific sleep disturbance severity scales, the few studies that have examined the individual components of disturbed sleep in relation to CHR symptom changes revealed more specific relationships. For example, polysomnography-measured sleep in CHR individuals showed longer sleep latency and REM-onset latency relative to controls. In another study, decreased bilateral thalamus volume was found in CHR youth when compared to controls, which was associated with greater latency, reduced efficiency, decreased quality, and increased overall sleep dysfunction score on the PSQI. The multidimensional nature of sleep may in part explain the variety of sleep risk factors described in the literature. Still, replication in large samples is required to identify the sleep characteristics most strongly related to clinical symptomatology. Fourth, it is unclear whether sleep affects symptom severity directly, or whether the association is influenced by factors such as cognitive deficits, stress, depression, and use of psychotropic medication, all of which have been associated with sleep disruption as well as with prodromal symptomatology. Cognitive deficits, a key aspect of psychotic disorders, are already evident in the prodromal period and can be exacerbated by sleep difficulties. Stress, which tends to be higher in individuals at CHR compared to controls, negatively impacts sleep quality, while restricted sleep can provoke stress, as shown by activity of neuroendocrine stress systems. Depression, prevalent in CHR and in early phases of psychosis, is also closely linked to sleep disturbances, such that both insomnia and hypersomnia are common symptoms as well as diagnostic criteria of the disorder. Finally, to ascertain the feasibility of investigating sleep as a target for symptom amelioration, it is critical to determine the direction of the association between sleep and CHR symptoms.
Some promising evidence includes bidirectional relationships between poor sleep and paranoia, and poor sleep more strongly predicting hallucinations than the other way around, in samples with non-affective psychotic disorders and high psychosis proneness. Certain actigraphy-measured circadian disturbances have predicted greater positive symptoms at one-year follow-up in CHR youth, and in a general population study, 24 h of sleep deprivation induced psychotic-like experiences and showed a modest association with impaired oculomotor task performance common in schizophrenia. The current analysis expands upon existing work as the first to examine longitudinal bidirectional relationships between discrete sleep characteristics, CHR symptoms, and conversion status in a large sample of CHR participants and non-clinical controls. Data were leveraged from the North American Prodrome Longitudinal Study-3 (NAPLS-3), in which participants were prospectively tracked for two years. Specifically, we assessed: whether baseline sleep patterns predict conversion to psychosis; group differences between converters, CHR non-converters, and controls in the associations between sleep trends and CHR symptoms; the specificity of the individual CHR symptom domains affected by sleep disturbance; which particular sleep items are most implicated in CHR symptom changes; cognitive impairment, daily life stressors, depression, and psychotropic medication as potential attenuating factors in the association between sleep and CHR symptoms; and the directionality of associations over time.

Data were collected from NAPLS-3, a consortium of nine research programs across North America focusing on youth at CHR for psychosis. All participants were recruited between February 2015 and November 2018 through extensive referral networks at each participating site. As such, clinical participants were help-seeking.
Controls did not meet criteria for a serious mental illness diagnosis. NAPLS-3 conducted clinical assessments over the course of two years; sleep and symptom data were collected at 2-month intervals between baseline and 8 months.

For more information about the methods and study protocol, please see the supplementary text.

In this longitudinal study, individuals at CHR for developing psychosis reported ample sleep disturbances over the study period. Although baseline sleep patterns did not predict conversion to psychosis, our findings demonstrate that disturbed sleep is strongly related to increased severity of CHR symptoms over time. This association held in half the sleep characteristics when explored independently. Adjusting for depression attenuated the association between sleep and symptoms considerably. Furthermore, while effect sizes were similar bidirectionally, baseline sleep was a significant predictor of CHR symptoms two months later but not vice versa. These findings advance the current literature in several important ways. First, our findings correspond with previous CHR studies in which sleep difficulties at baseline did not predict conversion. This may suggest that sleep operates as an indicator of conversion only in conjunction with several other important markers. This hypothesis is supported by a predictive risk model of conversion in which “sleep disturbance” was one of the six factors that, jointly, yielded a markedly high positive predictive value. Sleep disturbances may have predictive value when they occur close to the time of conversion, since sleep alterations are early indicators of psychosis relapse. Poor sleep may also predict conversion among specific groups, such as those with prolonged sleep disturbance or those who only recently developed sleep problems. The role of disturbed sleep as a potential catalyst for conversion in our study remains speculative because 30% of converters completed only one sleep assessment and 27% transitioned to psychosis 10 months post-baseline, at which point sleep was no longer tracked longitudinally. Future studies would benefit from looking at sleep metrics tracked longitudinally in large samples followed through the time of conversion.
Second, our findings reveal long-term and robust correlations between a wide range of sleep disturbances and symptoms across all four CHR domains. Although poor sleep did not predict conversion, its relation with symptom exacerbation is clinically important considering that even non-converting CHR youth often have poor functional outcomes and persistent symptoms. Our current findings using self-reports are supported by electrophysiological studies relating REM latency and REM density to positive symptoms in psychotic disorders, reduced delta EEG activity and reduced REM latency to negative symptoms in schizophrenia, and reduced sleep spindle activity to both positive and negative symptoms, although these findings vary. In further concordance with our findings on disorganized and general symptoms, individuals with non-affective psychoses and comorbid sleep disorders had greater cognitive disorganization, depression, anxiety, and tension/stress levels than those without the latter morbidity. Likewise, EEG delta activity in early-course non-affective psychoses was inversely related to the disorganization syndrome. Although associations between sleep parameters and clinical symptoms vary largely across studies, individuals, and stages of illness, sleep characteristics of psychosis-afflicted individuals unequivocally differ from those of unaffected individuals, and these differences are present prior to illness onset. Third, symptoms of depression showed a strong attenuating effect, such that relationship strengths between sleep and all CHR symptom domains were reduced by roughly half or more. Disturbed sleep is a symptom of and risk factor for depression, and unipolar depressive disorders are common comorbidities among CHR youth.
In 744 CHR individuals, >40% met DSM-IV criteria for current depression and almost 20% for past depression. Similarly, depression mediated the association between sleep and suspiciousness in CHR youth, and consistently partially mediated the association between sleep and psychotic experiences. The attenuating role of depression in the current study served to strengthen existing cross-sectional evidence and demonstrate its validity over time. Lastly, we aimed to address the directionality of the relationship between sleep and CHR symptoms. Between baseline and 2-month follow-up, total sleep score was a significant predictor of total, positive, negative, and general CHR symptoms. While general symptoms were the only domain with a statistically significant bidirectional association with sleep, bidirectional effect sizes, as indicated by regression coefficients, were all nearly equal. These findings suggest that sleep is a driving force in symptom exacerbation, and that promoting healthy sleep may be a useful target for the management of CHR symptoms. Evidence has been accumulating in support of cognitive behavioral therapy for insomnia, which not only reduced insomnia but also improved symptoms of paranoia and hallucinations in a large sample of university students and in a small CHR group.

Terpenes are natural products with diverse functions in different organisms

For instance, HIV-1 Tat protein is produced in infected astrocytes, may be secreted and taken up by neighbouring cells, and has been identified as a potential factor in the pathophysiology of HAND. Expression of the CCL2 gene has been shown to be directly transactivated by HIV-1 Tat protein in human astrocytes. Although we cannot determine from this study whether the CCL2 gene is solely driving the CCL2 gradient or whether the presence of infected cells or HIV-1 Tat are also contributing to CCL2 expression, the results suggest a link between CCL2 genotype and cognition that warrants further study. Overall, these results underscore the importance of examining intermediate phenotypes as modulating factors that may link host genotype to cognitive outcomes in HIV.

These compounds have benefited human society since antiquity, applied as materials, traditional medicines, pharmaceuticals, and cosmetics. The versatile functions and applications of terpenes owe to their diverse chemical structures, with more than 55,000 molecules known. Nature employs a concise paradigm to build such structures, in which isopentenyl pyrophosphate and dimethylallyl pyrophosphate, five-carbon units derived from the 2-C-methyl-D-erythritol 4-phosphate pathway or mevalonate pathway, are first polymerized head-to-tail into prenyl pyrophosphates, including geranyl, farnesyl, and geranylgeranyl pyrophosphates. Then, the prenyl pyrophosphates are processed, mostly cyclized, by terpene synthases to produce terpene scaffolds. Lastly, optional tailoring proteins, including P450s and glycosyltransferases, modify the scaffolds to afford the final products. Through this paradigm, structural diversity of the final products is introduced by different numbers of five-carbon units polymerized, varied cyclizations, and optional post-modifications.

The final products usually contain a multiple of five carbons. Terpenes with a non-multiple of five carbons are rare in nature and challenging to sample, even though this chemical space is valuable to explore for novel applications. We reported heterologous expression of the lepidopteran mevalonate (LMVA) pathway, a propionyl-CoA ligase, and terpene cyclases in E. coli to produce several novel sesquiterpene analogs containing 16 carbons. The LMVA pathway produces 6-carbon analogs of IPP and DMAPP, homo-IPP (HIPP) and homo-DMAPP (HDMAPP), in a reaction sequence highly similar to the canonical MVA pathway. The difference is that the thiolase condenses a propionyl-CoA and an acetyl-CoA to make 3-ketovaleryl-CoA in the LMVA pathway, instead of condensing two acetyl-CoA to produce acetoacetyl-CoA in the canonical MVA pathway. The “extra” carbon from the propionyl-CoA is carried into HIPP/HDMAPP, which are combined with two 5-carbon isoprenoid precursors to afford the sixteen-carbon products. With the LMVA pathway expressed in E. coli, we established a biosynthesis platform for novel homoterpenes deviating from the “multiple of five carbons” rule. While the previous study successfully produced the final homosesquiterpenes, the details of the LMVA pathway, especially the production of HIPP and HDMAPP in the E. coli host, were not assayed. Moreover, the production needs optimization because low C16 terpene titers have hindered us from accumulating and purifying these products for characterization. Here, we investigate the LMVA pathway by introducing a promiscuous phosphatase, NudB, to hydrolyze the terpene precursors to their corresponding alcohols. These alcohols are readily detected by gas chromatography. Using the alcohols as the final product, we were able to engineer and optimize the LMVA pathway to increase the production of HIPP, the direct product of the LMVA pathway and the starting substrate for homoterpene synthesis.
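The carbon accounting above can be checked with a back-of-the-envelope sketch contrasting the canonical "multiple of five" products with the LMVA-derived C16 scaffold; the function name is illustrative:

```python
# Carbon count of a prenyl-pyrophosphate-derived scaffold assembled from
# C5 units (IPP/DMAPP) plus, in the LMVA route, C6 units (HIPP/HDMAPP).

def scaffold_carbons(c5_units, c6_units=0):
    """Total carbons contributed by the given numbers of five- and
    six-carbon isoprenoid precursor units."""
    return 5 * c5_units + 6 * c6_units

# Canonical series: GPP (C10), FPP (C15), GGPP (C20)
assert scaffold_carbons(2) == 10
assert scaffold_carbons(3) == 15
assert scaffold_carbons(4) == 20
# One HIPP/HDMAPP unit plus two C5 units -> the C16 homosesquiterpene scaffold
assert scaffold_carbons(2, 1) == 16
```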
Also, the higher 3-methyl-3-buten-1-ol analogs have excellent fuel properties, making them candidates for next-generation biofuels.

All the plasmids and primers were designed using j5 DNA assembly unless otherwise indicated, and are listed in Table S1 and Table S2, respectively. The plasmids are publicly available through the Joint BioEnergy Institute registry. Primers were purchased from Integrated DNA Technologies.

PCR amplifications were performed on an Applied Biosystems Veriti thermal cycler using Phusion™ High-Fidelity DNA Polymerase or PrimeSTAR Max DNA Polymerase. Gene codon optimization was conducted using the IDT Codon Optimization Tool, and gene fragments were synthesized by Integrated DNA Technologies. Plasmid isolation was carried out using the QIAprep Spin Miniprep Kit or the plasmid DNA preparation service provided by Genewiz, South San Francisco, CA. DNA gel extractions were performed using the Zymoclean Gel DNA Recovery Kit. Sanger sequencing of plasmids and bacterial clones was supplied by Genewiz, South San Francisco, CA. Plasmid sequences were verified using the DIVA sequence validation service performed by the Synthetic Biology Informatics Group of the Joint BioEnergy Institute. DNA gel photos were taken using a BioSpectrum Imaging System. The atoB knockout and atoB yqeF double knockout strains were generated using the λ red recombinase protocol previously described. A kanamycin resistance cassette flanked by FLP recognition target sites was amplified from the plasmid pKD13 with primers KO_kan_F and KO_kan_R. The homology regions were designed to knock out the entire gene except for the starting ATG and the last 21 base pairs, to avoid disrupting potential downstream ribosomal binding sites. The upstream and downstream homology sequences were combined with the kanamycin cassette using NEBuilder HiFi DNA Assembly, and a second PCR was done to amplify the three-part fusion DNA sequences from the corresponding assembly product using primer pairs atoB_US_F/atoB_DS_R or yqeF_US_F/yqeF_DS_R. To generate the marker-free atoB knockout strain, E. coli 6C01, the pKD46 plasmid carrying the λ red recombination genes and a temperature-sensitive replicon was transformed into chemically competent E. coli BL21 cells and selected on carbenicillin LB agar plates at 30 °C.
Plasmid-containing cells were made electrocompetent, and expression of the recombination genes was induced with 0.1% arabinose. Approximately 600 ng of the three-part PCR product was introduced into the cells by electroporation, and positive colonies were selected on LB kanamycin plates grown at 37 °C overnight.

To remove the kanamycin resistance cassette, pCP20 was transformed into the selected kanamycin-resistant cells by electroporation and selected on carbenicillin and kanamycin LB agar plates at 30 °C. The resulting colonies were streaked on an LB plate without antibiotic and grown overnight at 42 °C to remove the cassette. The resulting colonies were then streaked on an LB plate without antibiotic, a carbenicillin LB agar plate, and a kanamycin LB agar plate, respectively, to confirm the loss of the kanamycin cassette and pCP20. Finally, the knockout of atoB was confirmed via colony PCR using the primer pair atoB_C_F/atoB_C_R. To generate the marker-free atoB yqeF double knockout, E. coli 6C02, the same protocol as for the atoB knockout was employed. The three-part PCR product was introduced into 6C01, and the knockout of yqeF was confirmed via colony PCR using the primer pair yqeF_C_F/yqeF_C_R. Two plasmids bearing the pathway genes were co-transformed into E. coli expression hosts using a room-temperature electroporation method, in which the electrocompetent bacterial cells were freshly prepared at room temperature. The transformed cells were plated on LB agar plates supplemented with carbenicillin and chloramphenicol, and the resulting plates were incubated at 30 °C until colonies appeared. The grown cultures were inoculated into 10 mL production media in 25 mL glass tubes, and the cultures were grown at 37 °C, 200 rpm until OD600 reached 0.8. After incubating on ice for 10 min, isopropyl-β-D-thiogalactopyranoside and substrates were added to the cultures. The cultures were then incubated at 18 °C, 200 rpm for 72 h to produce isoprenol analogs. After production, a ten-fold dilution of the production broth was subjected to endpoint optical density measurement using the cuvette port of a SpectraMax M2e plate reader. All production runs were conducted in biological triplicate.
GC-MS analysis was performed on an Agilent Intuvo 9000 system equipped with the pneumatic switching device, using two tandem DB-WAX columns with a helium flow of 1 mL/min at the first column and 1.2 mL/min at the second column. The inlet temperature, the Intuvo flow path temperature, and the MS transfer line temperature were all 250 °C. A 1 μL sample was injected in splitless mode. The oven starting temperature was 60 °C, held for 2 min; the temperature was then increased at 15 °C per minute to 120 °C, and subsequently at 30 °C per minute to 245 °C. After the temperature ramping program, a post-run was conducted at 250 °C for 5 min with a helium flow of -2.691 mL/min at the first column and 3.106 mL/min at the second column.
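The oven program above implies a total analysis time that can be checked with simple ramp arithmetic (this is only a back-of-the-envelope calculation from the stated holds and ramp rates, not an instrument parameter):

```python
# Total oven-program time implied by the GC method described in the text:
# 2 min hold at 60 C, ramp at 15 C/min to 120 C, then 30 C/min to 245 C.

def ramp_time(t_start, t_end, rate):
    """Minutes needed to ramp between two oven temperatures at rate (deg C/min)."""
    return (t_end - t_start) / rate

hold_min = 2.0                      # initial hold at 60 C
ramp1 = ramp_time(60, 120, 15)      # 4.0 min
ramp2 = ramp_time(120, 245, 30)     # ~4.17 min
total = hold_min + ramp1 + ramp2

print(round(total, 2))  # -> 10.17 min, before the 5 min post-run
```

So each injection occupies roughly ten minutes of oven program plus the five-minute backflush post-run, which is useful when planning triplicate production-run queues.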

The homosesquiterpene biosynthesis pathway we constructed in E. coli contains two sections to incorporate propionate into C16H26 terpenes. In the first section, the LMVA pathway transforms propionate into HIPP and HDMAPP. In the second section, these C6 building blocks are transformed into terpenes. The production of those C16H26 terpenes was previously confirmed using GC-MS and GC-TOF-MS analysis and 13C labeling experiments. However, several challenges hindered us from purifying the final products for detailed structural characterization via nuclear magnetic resonance: 1) the C16H26 terpenes were produced at low titers, 1.05 mg/L in the best case, and 2) the ratio of C16 terpenes to total terpenes was modest, 11.25% in the best scenario. Moreover, this pathway was expected to produce higher terpenes with molecular formulas of C17H28 and C18H30, but we did not detect these terpenes. We suspected the low production and low ratio of the homoterpenes may be ascribed to insufficient production of the C6 isoprenoid precursors, HIPP and HDMAPP. To assess the production of the C6 isoprenoid precursors, we tested the production of their corresponding alcohols in an E. coli strain co-expressing the LMVA pathway and NudB. NudB is a phosphatase and works with another endogenous phosphatase to dephosphorylate C5-C15 prenyl diphosphates into their corresponding alcohols in E. coli. To construct the production strain, we transformed pNudB and pJH10 into E. coli BAP1. The resulting strain was grown and induced for production, with sodium propionate added at the time of induction. After production, we extracted the production broth with ethyl acetate, and the organic phase was analyzed by GC-FID and GC-MS. In this analysis, GC-FID detected a peak in the production broth that was absent in the negative control. This peak also had the same retention time as the C6-isoprenol standard.
GC-MS indicated the mass spectrum of the new peak is similar to that of the C6-isoprenol standard, with major peaks at m/z = 67, 55, and 82, indicating the new peak has similar electron ionization-induced fragmentation to C6-isoprenol. GC analysis thus confirmed the production of C6-isoprenol by the production strain, suggesting HIPP was produced. To evaluate the production level of HIPP, we quantified the C6-isoprenol at 5.5 mg/L using the C6-isoprenol standard curve. In the GC analysis, we also noticed the production of isoprenol, the hydrolyzed product of IPP, at 4.1 mg/L, indicating the production strain also produces an amount of IPP comparable to HIPP. We reasoned that IPP might come from the E. coli endogenous MEP pathway and from the promiscuity of the LMVA pathway, in which the thiolase accepts two molecules of acetyl-CoA. We also attempted to confirm the production of C6-prenol, the proposed hydrolyzed product of HDMAPP, using a synthesized standard with GC-FID and GC-MS. However, C6-prenol was not detected in the production broth, suggesting the idi gene, encoding the isopentenyl-diphosphate delta-isomerase, might not function normally in transforming HIPP to HDMAPP. The results of these production runs indicated that the supply of C6 isoprenoid precursors from the LMVA pathway is low, merely comparable to the supply of the normal C5 isoprenoid precursors. To address the insufficient supply of HIPP, we revisited the LMVA pathway for C6-isoprenol production in E. coli. Like the natural MVA pathways, the previous work employed a thiolase to condense propionyl-CoA and acetyl-CoA to afford 3-ketovaleryl-CoA. Because the predicted thiolase from Bombyx mori does not express in E. coli, we used PhaA from Acinetobacter strain RA3849. PhaA has been well characterized in PHA/PHB production, in which PhaA accepts propionyl-CoA and acetyl-CoA.
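Titer values like the 5.5 mg/L C6-isoprenol figure are read off a linear standard curve fit to GC-FID peak areas of known standards. A minimal sketch of that workflow, with entirely fabricated calibration numbers (the study's actual curve is not reproduced here):

```python
import numpy as np

# Fit a linear standard curve (GC-FID peak area vs. known concentration),
# then invert it to quantify an unknown sample. All numbers are hypothetical.
std_conc = np.array([1.0, 2.5, 5.0, 10.0])      # standards, mg/L
std_area = np.array([120.0, 300.0, 600.0, 1200.0])  # peak areas (arb. units)

slope, intercept = np.polyfit(std_conc, std_area, 1)

def quantify(area):
    """Concentration (mg/L) corresponding to a measured peak area."""
    return (area - intercept) / slope

print(round(quantify(660.0), 2))  # -> 5.5 with this fabricated curve
```

In practice an internal standard and response-factor correction would typically be layered on top, but the inversion step is the same.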
However, PhaA also converts two acetyl-CoA to acetoacetyl-CoA, an intermediate for IPP production via the thiolase LMVA pathway. Moreover, a previous study has shown that PhaA homologs catalyze the degradation reaction better than the condensation reaction, suggesting that the degradation of 3-ketovaleryl-CoA is the favored direction of the PhaA-catalyzed reversible reaction. Hence, we suspected the thiolase-catalyzed reaction is the bottleneck in the thiolase LMVA pathway for HIPP production.

Spurious correlations were controlled through first-order differencing

The popularity of new slang terms in our study may signal a different population and a reemergence of a black market for cocaine, as rebranded street terminology is being used to disguise discussion of this illegal drug. Finally, our argument about epidemiological ties with cocaine mentions in song lyrics is strengthened by our finding that popular music lyrics related to codeine and heroin did not follow the same trends as cocaine mortality. If this trend in cocaine lyrics continues, we estimate that future incidence of cocaine use may increase by close to 40% and mortality from cocaine use by almost 80% for 2020, making it important to monitor this growth in lyrics about cocaine over time. An important limitation of our study is that our results depict associations and cannot confirm causality. Analytical steps were included in the methodology to remove potential spurious correlations, but these are not sufficient to deem our results causal. Therefore, our findings should be interpreted with caution. Cocaine street price was included in the model to control for economic fluctuations that would impact purchasing behaviors and initiation of drug use. Further tests on the differenced series confirmed autocorrelations were no longer present after first-order differencing. Another limitation of our study is the potential of not having captured all the cocaine slang terminology used to identify song lyrics describing cocaine. Although we queried a number of top websites for the most referenced slang terms for cocaine, it is possible that certain ambiguous slang terms were excluded.
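First-order differencing guards against exactly the failure mode described here: two independently trending annual series look strongly correlated in levels. A small sketch with simulated data (not the study's series) illustrates why the differenced series are the safer basis for inference:

```python
import numpy as np

# Two independent series that both trend upward over 18 yearly observations,
# mimicking the study's 2000-2017 window. Their level correlation is inflated
# by the shared trend; first-order differencing removes it.
rng = np.random.default_rng(0)
t = np.arange(18)
x = 2.0 * t + rng.normal(0, 1, 18)   # e.g., lyric mentions (simulated)
y = 3.0 * t + rng.normal(0, 1, 18)   # e.g., an unrelated trending outcome

raw_r = np.corrcoef(x, y)[0, 1]                      # spuriously high
diff_r = np.corrcoef(np.diff(x), np.diff(y))[0, 1]   # near zero

print(round(raw_r, 3), round(diff_r, 3))
```

A correlation that survives differencing, as the lyrics-mortality association reportedly did, is therefore much harder to attribute to shared trends alone.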

Generating a comprehensive list of slang drug terminology is exceedingly difficult because the vocabulary around cocaine evolves to conceal its discussion. However, sensitivity analyses revealed that our results were robust to the removal of ambiguous terms such as “8-ball”. Inclusion of additional terms like “blow”, which was not included in the model because of its possible description of guns rather than drugs, increased the degree of association between mentions in lyrics and cocaine mortality. Because of this, we believe that genuinely uncertain references to cocaine are rare and have limited potential to significantly alter the study results. In this study, we were only able to conduct analyses at the yearly level and not more granularly, because data derived from Lyrics.com are only available at the annual level. We explored several song lyrics engines, including Spotify, Genius Lyrics, MetroLyrics, and Billboard; however, these other platforms either do not make their data available or provide it only at the yearly level. We selected Lyrics.com because of its word query capabilities within lyrics, which other song lyric engines do not provide. This restricted us to 18 effective observations from 2000 to 2017 for song lyrics data. However, another limiting factor in our analysis was that cocaine incidence data from the National Survey on Drug Use and Health, published by the Substance Abuse and Mental Health Services Administration, and cocaine-related deaths, obtained from the Centers for Disease Control and Prevention Multiple Cause of Death WONDER database, are only available aggregated at the yearly level. Therefore, our analysis still would have been limited to this level of granularity. Additionally, since our definition of cocaine-related mortality includes deaths using the ICD-10 code T40.5, it may include underlying causes of death other than cocaine.

However, other ICD-10 codes in the mortality dataset do not specify the actual drug involved. While this is a limitation in the use of death certificates for identifying cause of death, it is a consistently used measure for cocaine-related deaths. Furthermore, the quality of testing for drug overdose has improved over time; thus, the rise in deaths from cocaine may be in part due to more accurate determination of deaths attributed to cocaine. Lastly, it is possible that mortality from cocaine could occur in the same year as initiation, which was not seen in our results. However, addiction to cocaine is often characterized by repeated use that changes brain and psychological function and promotes transitions to problematic patterns of use. Thus, the cumulative use of cocaine over time that leads to cocaine mortality is likely to occur after years of addictive use. In conclusion, these associations underscore the importance of monitoring trends in music to understand drug patterns over time. Media are a powerful indicator of social norms, and our study offers initial epidemiological evidence that music lyrics about cocaine may provide an early signal of cocaine use incidence and mortality. Additionally, new slang terminology for drugs in music lyrics could indicate a new generation of cocaine users and a surge in under-detected use. Future studies should conduct a deeper investigation into lyrics about cocaine and other substances, with particular focus on how these messages may shape cultural perceptions and behaviors toward drug use.
Given the wide audience of popular music, popular music artists should consider the potential influence of their lyrics on the drug epidemic impacting today’s youth.

Craving for substances is considered essential for understanding the pathogenesis and maintenance of addiction, as highlighted by the incentive salience model and by the inclusion of craving as a criterion for substance use disorder in the Diagnostic and Statistical Manual of Mental Disorders and the International Classification of Diseases.

Nicotine craving specifically has been shown to predict lapse to cigarette smoking following cessation and is frequently identified by individuals as an important barrier to quitting and maintaining abstinence. Thus, craving represents a clinically important phenotype of nicotine addiction with great potential for intervention. Accurate assessment of craving is essential for the identification, management, and treatment of nicotine and tobacco product use and the use of other substances. In human laboratory studies, craving for nicotine and other abused substances is commonly measured using the cue-exposure paradigm. The translational value of the cue-exposure paradigm to the naturalistic environment is predicated on the observation that relapse to drug use is often precipitated by exposure to drug-related cues that provoke craving. However, naturalistic cues can be very complex and involve a number of contextual factors that are difficult to replicate in laboratory-based cue-exposure paradigms, limiting their ability to invoke a true craving state. New technologies such as virtual reality afford the opportunity to increase the ecological validity of cue-exposure paradigms through the implementation of interactive and immersive presentations of cues within the typical context of use , greatly enhancing our ability to invoke craving in the laboratory. Studies using VR cue-exposure have found great support for its effectiveness in inducing subjective, and in some cases objective, craving for tobacco, as well as alcohol, cannabis, and methamphetamine . Furthermore, despite decades of research, the field of addiction has yet to establish reliable, objective measures of craving. A number of objective correlates of craving have been investigated, including psychophysiological and neurological measures with varying success. 
Attentional bias, or the capacity of drug cues to capture the attention of the user, can be conceptualized as a behavioral marker of incentive salience and represents an objectively measurable and clinically important phenomenon for the study of addiction. Attentional bias toward smoking cues has been previously demonstrated among regular tobacco smokers, and importantly, it has been related to the risk of subsequent relapse following smoking cessation. Multiple theoretical models suggest that cue-induced subjective craving and attentional bias reflect closely linked underlying processes. Not surprisingly, measures of attentional bias have been shown to correlate with subjective craving. However, the method of assessment appears to be key: direct measures of attention, such as the assessment of eye movement, exhibit larger craving correlations and greater reliability than indirect measures such as reaction time. Assessment within naturalistic settings has also independently improved the reliability and validity of attentional bias measurement; yet, the naturalistic constraints of these methods prohibit advanced clinical application of these paradigms. New technological advances in VR implementation allow for the assessment of eye movement in a noninvasive and cost-effective manner and have demonstrated early success in distinguishing smokers from nonsmokers on the basis of eye fixations on smoking cues in a virtual world. Spontaneous eye blink rate represents another, much less studied, potential objective correlate of cue-induced craving. EBR has been closely linked with striatal dopaminergic function and has been advanced as a reliable, more cost-effective, and minimally invasive alternative to positron emission tomography for assessing dopaminergic functioning. Dopamine release in the basal ganglia inhibits the spinal trigeminal complex, leading to increased EBRs, as demonstrated in both rat and human trials.
In line with this theory, preclinical research has shown that direct dopaminergic agonists and antagonists increase and decrease EBRs, respectively. Furthermore, a PET study in monkeys found a strong positive correlation between EBRs and dopamine D2-like receptor availability in the striatum.

Given the observed modulation of striatal dopamine during cue-elicited substance craving, it may be possible to detect NTP cue-induced dopamine changes through EBR measurement. Nonetheless, no studies to date have investigated this hypothesis. Lastly, pupillometry represents an additional potential objective craving correlate. Pupil dilation is an indirect measure of norepinephrine release from the locus coeruleus and is associated with reward processing, including sensitivity to rewards and engagement of cognitive resources. Pupillary responses also seem to index changes in the allocation of attention and have been advanced as an ideal measure for related constructs that may not pass the threshold for overt behavior or conscious appraisal. To our knowledge, only one study has investigated pupillometry as a measure of response to substance cue-exposure. Kvamme et al. found that pupillary bias toward alcohol versus neutral cues, but not subjective craving reports, predicted relapse to alcohol use in a sample of detoxified patients with alcohol dependence, suggesting that cue-induced changes in pupillometry may ultimately serve as a useful biomarker for addiction research and clinical care. This study was intended to outline the methods underlying the development of a novel VR-NTP cue-exposure paradigm with embedded eye-characteristic assessments. Preliminary analyses on a pilot sample of participants are also provided as a proof of concept for the potential utility of this paradigm for the induction of subjective craving in the laboratory, assessment of potential biomarkers of craving, and prediction of NTP use behaviors. The NTP Cue VR paradigm uses a virtual reality environment built using Unity. The HTC Vive Pro Eye VR headset was used to enable VR capabilities and collect eye-related data. HTC’s SRanipal SDK was used in conjunction with Tobii’s Tobii XR SDK to provide access to various data from the eye tracker.
Specifically, the Tobii XR SDK handled object selections, determining what participants were looking at with its Gaze-to-Object Mapping algorithm, while the rest of the data were retrieved from the SRanipal SDK. The participants were free to move around and interact with various objects within the VR environment using two hand-held Vive controllers. Surveys assessing depressed mood and anxiety were presented at the start of the paradigm, and additional surveys assessing subjective craving and scene relevance were presented between scenes within the headset. A VAS survey was chosen as the in-task measurement of subjective craving owing to its high face validity, ability to capture dynamic fluctuations in craving, and low burden on participants, especially over frequently repeated assessment. Survey responses were made by adjusting a slide bar using one of the controllers. Participants were instructed to “Just explore everything around you until the scene changes” and told, “During the task, we will be measuring what you pay attention to, and we will be asking you to rate your craving level between each scene.” Three Active scenes and three Neutral scenes were developed and included in the final paradigm. The Active scenes include NTP-related cues, while in the Neutral scenes all cues are neutral. Active cues include ashtrays, lighters, JUUL devices, cigarettes, Puffbars, and hookahs, as well as the presence of human models engaged in smoking or vaping behaviors. Neutral cues vary depending on the scene context. All cues are interactable, such that participants are able to pick up, throw, and collide the items with other items in the scene. All scenes include the presence of at least one animated human model. Smoke and vapor effects are incorporated with the animated human models in the Active scenes to increase the immersiveness of the experience.
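Downstream of gaze-to-object mapping, per-frame gaze labels are typically aggregated into dwell time per cue as an attentional-bias measure. The sketch below is generic post-processing bookkeeping, not the Tobii or SRanipal API; the sample data and labels are hypothetical:

```python
from collections import defaultdict

# Aggregate per-frame gaze-object labels into total dwell time per cue.
# Each sample is (object_label, frame_duration_seconds); in a real pipeline
# the labels would come from the headset's gaze-to-object mapping output.

def dwell_times(samples):
    """Return a dict mapping object label -> total seconds gazed at it."""
    totals = defaultdict(float)
    for label, dt in samples:
        totals[label] += dt
    return dict(totals)

# Five ~90 Hz frames of simulated gaze data (hypothetical).
gaze = [("ashtray", 0.011), ("wall", 0.011), ("ashtray", 0.011),
        ("lighter", 0.011), ("ashtray", 0.011)]
d = dwell_times(gaze)
print(round(d["ashtray"], 3))  # -> 0.033
```

Comparing such dwell totals between Active-scene NTP cues and matched Neutral-scene objects is one straightforward way to operationalize attentional bias from this kind of paradigm.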

School enrollment characteristics were not related to the presence of marijuana comarketing

In addition, emerging research suggests that adolescents’ exposure to retail marketing is associated with greater curiosity about smoking cigars and higher odds of ever smoking blunts. The Table summarizes descriptive statistics for store type and for schools, as well as mixed models with these covariates. Nearly half of the LCC retailers near schools were convenience stores with or without gasoline/petrol. Overall, 61.5% of LCC retailers near schools contained at least one type of marijuana co-marketing: 53.2% sold blunt wraps, 27.2% sold cigarillos marketed as blunts, and 26.0% sold blunt wraps, blunts, or other LCCs with a marijuana-related “concept” flavor. After adjusting for store type, marijuana co-marketing was more prevalent in school neighborhoods with lower median household income and with a higher proportion of school-age youth. Nearly all LCC retailers sold cigarillos for $1 or less. The largest pack size at that price contained 2 cigarillos on average. The largest packs priced at $1 or less were singles in 10.9% of stores, 2-packs in 46.8%, 3-packs in 19.2%, 4-packs in 5.5%, and 5 or 6 cigarillos in 5.5%. After adjusting for store type, a significantly larger pack size of cigarillos was priced at $1 or less in school neighborhoods with lower median household income and near schools with a lower proportion of Hispanic students. In California, 79% of licensed tobacco retailers near public schools sold LCCs, and approximately 6 in 10 of these LCC retailers sold cigar products labeled as blunts or blunt wraps or sold cigar products with a marijuana-related flavor descriptor. A greater presence of marijuana co-marketing in neighborhoods with a higher proportion of school-age youth and lower median household income raises concerns about how industry marketing tactics may contribute to disparities in LCC use.

The study results also suggest that $1 buys significantly more cigarillos in California school neighborhoods with lower median household income. Policies to establish minimum pack sizes and prices could reduce the widespread availability of cheap cigar products and address disparities in disadvantaged areas. After Boston’s 2012 cigar regulation, the mean price for a grape-flavored cigar was $1.35 higher than in comparison communities. The industry circumvented sales restrictions in some cities by marketing even larger packs of cigarillos at the same low price, and the industry’s tipping point on supersized cigarillo packs for less than $1 is not yet known. The retail availability of 5- and 6-packs of LCCs for less than $1 observed near California schools underscores policy recommendations to establish minimum prices for multi-packs. A novel measure of marijuana co-marketing and a representative sample of retailers near schools are strengths of the current study. A limitation is that the study assessed the presence of marijuana co-marketing, but not the quantity. The protocol likely underestimates the prevalence of marijuana co-marketing near schools because we lacked a comprehensive list of LCC brands and flavor varieties. Indeed, state and local tobacco control policy research and enforcement would be greatly enhanced by access to a comprehensive list of tobacco products from the US Food and Drug Administration, including product name, category, identification number, and flavor. Both a routinely updated list and a product repository would be useful for tobacco control research, particularly for further identifying how packaging and product design reference marijuana use. This first assessment of marijuana co-marketing focused on brand and flavor names because of their appeal to youth. However, this narrow focus is a limitation that also likely underestimates the prevalence of marijuana co-marketing.
Other elements of packaging and product design should be considered in future assessments.

Examples are pack imagery that refers to blunt making, such as the zipper on Splitarillos, as well as re-sealable packaging for cigarillos and blunt wraps, which is convenient for tobacco users who want to store marijuana. Coding for brands that are perforated to facilitate blunt making and marketing that refers to “EZ roll” should also be considered. Future research could assess marijuana co-marketing across a larger scope of tobacco/nicotine products. The same devices can be used for vaping both nicotine and marijuana, and advertising for vaping products also features compatibility with “herbs” or otherwise associates nicotine with words or images that refer to marijuana. Conducted before California legalized recreational marijuana use, the current study represents a baseline for understanding how retail marketing responds to a policy environment where restrictions on marijuana and tobacco are changing, albeit in opposite directions. The prevalence of marijuana co-marketing near schools makes it imperative to understand how tobacco marketing capitalizes on the appeal of marijuana to youth and other priority populations. How marijuana co-marketing contributes to dual and concurrent use of marijuana and tobacco warrants study, particularly for youth and young adults. In previous research, the prevalence of adult marijuana use in 50 California cities was positively correlated with the retail availability of blunts. Whether this is correlated with blunt use by adolescents is not yet known. Consumer perception studies are necessary to assess whether marijuana co-marketing increases the appeal of cigar smoking or contributes to false beliefs about product ingredients. Research is also needed to understand how the tobacco industry exploits opportunities for marijuana co-marketing in response to policies that restrict sales of flavored tobacco products and to policies that legalize recreational marijuana use.
Such assessments are essential to understand young people’s use patterns and to inform current policy concerns about how expanding retail environments for recreational marijuana will impact tobacco marketing and use.

In the United States, heightened levels of cultural polarization and political partisanship have magnified the role of political identity in shaping social behavior.

Recent work has found political identity to increasingly influence behavior in unprecedented ways, frequently dictating choices of one’s personal and online social networks and whom one would consider dating, and even prompting many Americans to relocate to regions more aligned with their political sympathies. Political identity is shaped by a multiplicity of factors including age, ethnicity, demography, culture, and, increasingly, information sources across both traditional and social media. There is mounting evidence that political identity in the United States has impacted safety responses to the COVID-19 pandemic, as well as how people frame that risk in terms of their group identity versus individual identity. A Pew Research survey found 35% of Republicans were “very” or “somewhat” concerned that they would become infected with COVID-19. In the same survey, 29% of Republicans said that people in their community should “always wear a mask” in public. Our research is rooted in the intersection of public health and the vast and growing literature on political identity. This literature includes earlier work, and seminal work by Akerlof and Kranton on the economics of identity, as well as recent work that highlights the sharpening and polarization of U.S. political identity under the Trump Administration. With respect to the pandemic, Hornsey et al., for example, study the effect of presidential tweets on vaccine hesitancy in the U.S., while Collins et al. find political identity to exhibit a stronger effect on people’s views of the pandemic than personal impact from COVID-19.
In this research, we present empirical estimates showing how political identity shaped COVID-safety responses during the first year of the pandemic; estimate the health costs of political identity in terms of COVID cases and deaths; and test the extent to which these COVID behavioral responses and outcomes are associated with a political identity specifically tied to support for former president Donald Trump relative to more traditional strains of American conservatism. We merge U.S. county-level socioeconomic, demographic, and political data to estimate the effect of conservative political identity on COVID-safety behaviors, reported COVID cases, and deaths attributed to the virus in the first 20 months of the pandemic. After controlling for a host of county-level characteristics, employment, and demographic variables, we estimate that a 10 percentage point increase in the county popular vote for President Trump during the 2020 election is associated with a 3.9 percentage point decrease in the number of people stating that they wear masks “all of the time” in public, a 5.1 percentage point decrease in the COVID vaccination rate, and a 0.23σ decline in a COVID-safety behavior index. Estimates show differences in political identity during the first year of the pandemic were significantly related to differences in COVID cases and deaths. We find a 10 percentage point increase in the county Trump vote to be associated with 1,394 additional COVID cases and 27 additional COVID-related deaths per 100,000 county residents. Moreover, the statistical relationship that we find between decreased mask-wearing and elevated COVID cases from differences in political identity is remarkably close to estimates of the average treatment effects of mask-wearing on symptomatic COVID infection obtained in the most extensive randomized controlled trial on the effects of mask-wearing.
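The per-10-point effects quoted above are linear rescalings of per-percentage-point regression coefficients. A minimal sketch of that arithmetic, with the per-point coefficients back-implied from the reported numbers rather than re-estimated from the study's data:

```python
# Translate county-level per-percentage-point coefficients into the
# per-10-point effects quoted in the text. Coefficients below are implied
# by the reported results, not taken from the study's regression tables.

def effect(beta_per_pp, delta_vote_pp):
    """Predicted outcome change for a given change in Trump vote share (pp)."""
    return beta_per_pp * delta_vote_pp

beta_mask = -0.39    # pp change in "always" mask-wearing per pp of vote share
beta_vax = -0.51     # pp change in vaccination rate per pp of vote share
beta_cases = 139.4   # cases per 100,000 residents per pp of vote share

print(round(effect(beta_mask, 10), 1))   # -> -3.9
print(round(effect(beta_vax, 10), 1))    # -> -5.1
print(round(effect(beta_cases, 10), 1))  # -> 1394.0
```

The linearity assumption is what lets a single coefficient summarize the association across counties; the study's robustness checks (control sets, regularization, spatial errors) probe whether that summary is stable.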
We test whether observed differences in COVID-safety behaviors, cases, and deaths can be better explained by either of two strains of traditional conservatism in the United States: an American social conservatism index we construct, consisting of the state-level prevalence of abortion clinics, restrictions on same-sex marriage, and support for prayer in schools; and American libertarian conservatism.

We find that these traditional strains of American conservatism have little systematic explanatory power over COVID-safety behaviors, cases, and deaths relative to 2020 Trump voter support, which retains very strong significance over these outcomes. Our estimates of the impact of political identity on COVID-safety behaviors, cases, and deaths are robust to the inclusion of different sets of control variables, demeaning and interactions of controls to address the potential for fixed-effects bias under heterogeneous effects, regularization of controls through a machine-learning algorithm, the use of Conley spatially correlated errors across states, and Oster bounds tests for endogeneity.

Our COVID-19 cases and deaths data span roughly the first twenty months of the U.S. experience of the pandemic through October 2021. County-level cases, deaths, and mask-wearing data are taken from the New York Times COVID-19 database. Cases and deaths are reported by state and county-level health jurisdictions and are generally recorded at a person’s residence rather than where the person was tested or died. Mask-wearing data in the database originate from online interviews conducted by the global data and survey firm Dynata. The survey consists of 250,000 responses between July 2, 2020 and July 14, 2020, after the politicization of mask-wearing responses to the pandemic had taken root. Each survey participant was asked: "How often do you wear a mask in public when you expect to be within six feet of another person?" and our data reflect the percent of respondents by county who responded "all of the time." We also incorporate GPS location data from a large number of mobile devices collected by the company SafeGraph to calculate the median number of devices that remained "at home" in each county from March 1, 2020 to February 15, 2021 relative to the median that remained "at home" during the year 2019.
The mobility data provide daily observations of the total percent of devices always at home in a given census block group during the first year of the pandemic, in which citizens in many regions were often requested or required to shelter at home. We first take the median percent of devices at home for each county by day, and from these daily medians we calculate the median percent of devices at home by month. Subtracting the 2019 baseline then yields, for each month, the change in the median percent of devices remaining at home during the pandemic relative to pre-pandemic 2019. Our county-level vaccination data come from the Centers for Disease Control, along with the CDC’s guidelines that we use to establish our vector of control variables associated with heightened risk of COVID infection. This county-level data is taken from the U.S. Census Bureau and includes median age, median income, population density, and the percent Latino, African-American, and Asian-American in the county population. We also use the percent of county-level employment in manufacturing, services, and retail to control for occupations of essential workers. It is important to control for co-morbidities in our analysis; to do this we use the University of Wisconsin Population Health Institute county health rankings data, in which each county receives a percentile score for baseline health.
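The three-step aggregation of the mobility data (daily county medians across block groups, monthly medians of those, then differencing against the 2019 baseline) can be sketched with the standard library. The dictionary layout and the county/date keys below are hypothetical stand-ins for the SafeGraph feeds, not the study's actual data structures.

```python
from collections import defaultdict
from statistics import median

def monthly_at_home_change(daily_pct, baseline_2019):
    """daily_pct: {(county, 'YYYY-MM-DD'): [% devices at home per block group]}
    baseline_2019: {county: median % of devices at home during 2019}
    Returns {(county, 'YYYY-MM'): change in median % at home vs. 2019}."""
    # Step 1: median across block groups gives a county-level daily figure.
    daily = {(c, day): median(v) for (c, day), v in daily_pct.items()}
    # Step 2: median of the daily figures within each calendar month.
    months = defaultdict(list)
    for (c, day), m in daily.items():
        months[(c, day[:7])].append(m)
    # Step 3: difference from the pre-pandemic 2019 baseline.
    return {(c, mo): median(v) - baseline_2019[c] for (c, mo), v in months.items()}

# Tiny worked example: county "A", two April days, three block groups each.
demo = monthly_at_home_change(
    {("A", "2020-04-01"): [30, 40, 50], ("A", "2020-04-02"): [40, 60, 80]},
    {"A": 25},
)
print(demo)  # {('A', '2020-04'): 25.0}
```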

Volume-weighted bilateral averages of all VOIs were used for analyses

These findings may suggest that in settings in which time and funding are too limited to perform partner services for all new HIV diagnoses, partner services should be focused on individuals diagnosed with AEH and performed within 30 days of diagnosis. Increased focus of partner services on individuals with AEH in these settings may potentially improve partner services delivery by clinicians and public health departments, identification of persons unaware of their HIV infection and persons during AEH, and identification of a high-risk HIV-uninfected cohort appropriate for prioritized prevention services and PrEP. Taken together, these could translate into a larger impact on HIV epidemic control than partner services has had to date. Modeling studies evaluating the downstream effects of targeted partner services, that is, the effects of combined identification and treatment of high-transmission-risk persons, PrEP in those found to be HIV-uninfected, and real-time identification of AEH outbreaks, are needed. These studies would further elucidate the impact of partner services in persons with AEH on epidemic control.

Drug addiction is characterized by persistent drug use despite adverse consequences, perhaps in part because the instant pleasure garnered by using drugs is perceived to outweigh the long-term benefits of sobriety. Consistent with this idea, laboratory studies routinely find that individuals with substance use disorders display greater preference for smaller, more immediately available rewards over larger, delayed alternatives than healthy controls. Moreover, research indicates that those who most strongly favor the immediate options on such laboratory-based choice tasks are also most likely to relapse during attempted abstinence.

Nonetheless, few studies have attempted to elucidate the neural mechanisms underlying addicts’ inordinate preference for immediate rewards. Dopamine is heavily implicated in intertemporal choice, and indirect evidence suggests that deficient dopamine D2/D3-type receptor-mediated dopaminergic neurotransmission in the striatum may be an important contributing factor to this immediacy bias. Like steep temporal discounting, low striatal D2/D3 receptor availability is observed among individuals with substance use disorders, and has been linked with an increased likelihood of relapse. Chronic exposure to methamphetamine or cocaine induces persistent reductions in striatal D2/D3 receptor availability in rats and monkeys, and rats treated chronically with either of these drugs exhibit greater temporal discounting than controls. Humans with attention-deficit hyperactivity disorder or obesity, two other disorders associated with low striatal D2/D3 receptor availability, also display greater temporal discounting than healthy controls. Greater temporal discounting has also been observed among carriers of the A1 allele of the ANKK1 Taq1A polymorphism, a genetic variant associated with low striatal D2 receptor density/binding in humans relative to A2 homozygotes. Although an association between low striatal D2/D3 receptor availability and steep temporal discounting has been implied, this link has not been directly evaluated. We therefore examined striatal D2/D3 receptor availability in relation to temporal discounting in research participants who met DSM-IV criteria for MA dependence and a group of healthy controls. MA-dependent individuals were selected for study because case-control studies find that they display deficits in striatal D2/D3 receptor availability and exaggerated temporal discounting.
We hypothesized that striatal D2/D3 receptor availability would be negatively correlated with discount rates among MA users, and possibly also among controls. Because tobacco use has also been linked with low striatal D2/D3 receptor availability and steep temporal discounting, this association was also explored among the control-group smokers.

Because chronic MA abusers also display lower D2/D3 receptor availability than non-users in extrastriatal brain areas, including several that have been implicated in intertemporal choice, exploratory analyses were performed to investigate whether extrastriatal D2/D3 receptor availability might also be negatively correlated with discount rate.

Procedures were approved by the University of California Los Angeles Office for Protection of Research Subjects. Participants were recruited using the Internet and local newspaper advertisements. All provided written informed consent and underwent eligibility screening using questionnaires, the Structured Clinical Interview for DSM-IV, and a physical examination. Twenty-seven individuals who met criteria for current MA dependence but were not seeking treatment for their addiction and 27 controls completed the study. D2/D3 receptor availability data from approximately half of the MA users and controls have been reported previously, and smaller subsets were included in other studies from our laboratory regarding D2/D3 receptor availability. The exclusion criteria were: central nervous system, cardiovascular, pulmonary, hepatic, or systemic disease; HIV seropositive status; pregnancy; lack of English fluency; MRI ineligibility; current use of psychotropic medications; and current Axis I disorder, including abuse of or dependence on any substance other than nicotine. A diagnosis of MA dependence and a positive urine test for MA at intake were required for MA-group participants, who completed the study as inpatients at the UCLA General Clinical Research Center and were prohibited from using any drugs for 4–7 days before testing. MA users completed the behavioral and imaging measures 2 days apart on average. Controls were studied on a nonresidential basis and completed the measures 22 days apart on average.

All participants were required to provide a urine sample on each test day that was negative for amphetamine, cocaine, MA, benzodiazepines, opioids, and cannabis. Compensation was provided in the form of cash, gift certificates, and vouchers.

Delay discounting was assessed with the Monetary-Choice Questionnaire (MCQ), which presents participants with 27 hypothetical choices between a smaller, immediate monetary amount and a larger, delayed alternative. Most of the participants completed the task using a paper-and-pencil format, but some completed the task on a computer; the questions were presented in the same sequence, regardless of task format. A logistic regression was performed on the data from each participant, separately, using his/her responses to all 27 choices as the dependent variable, and the natural log of the equivalence k value associated with each test question as the independent variable. This k-equivalence value was the number that would equalize the immediate option with the delayed alternative, assuming the hyperbolic discounting function V = A/(1 + kD), where V represents the perceived value of amount A made available D days in the future and k is the discount rate. The parameter estimates from the logistic regression were used to calculate the k-equivalence value at which the function intersected 0.5. This derived k value characterized the individual’s discount rate. Because the MCQ only probes discounting between a minimum k-equivalence of 0.0002 and a maximum of 0.25, these values were designated as the minimum and maximum k values, respectively, that could be assigned.

Dopamine D2/D3 receptor availability was assessed using a Siemens EXACT HR+ positron emission tomography scanner in 3D mode with [18F]fallypride as the radioligand. Following a 7-min transmission scan acquired using a rotating 68Ge/68Ga rod source to measure and correct for attenuation, PET dynamic data acquisition was initiated with a bolus injection of [18F]fallypride.
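Returning to the discounting measure for a moment: the MCQ's k-equivalence arithmetic and the 0.5-crossing of the fitted logistic function reduce to a few lines under the hyperbolic model V = A/(1 + kD). This is a hypothetical illustration, not the study's analysis code; the function names and example amounts are invented.

```python
from math import exp

def k_equivalence(immediate, delayed, days):
    """k at which the immediate amount exactly equals the discounted
    delayed amount: immediate = delayed / (1 + k * days)."""
    return (delayed / immediate - 1) / days

def derived_k(intercept, slope):
    """k where a fitted logistic choice model crosses p = 0.5, i.e. where
    intercept + slope * ln(k) = 0, clamped to the MCQ's probed range."""
    k = exp(-intercept / slope)
    return min(max(k, 0.0002), 0.25)

# An MCQ-style item: $54 now vs. $55 in 117 days.
print(round(k_equivalence(54, 55, 117), 6))  # 0.000158
# A fit whose 0.5-crossing falls below the probed range is clamped.
print(derived_k(10.0, 1.0))                  # 0.0002
```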
Emission data were acquired in two 80-min blocks, separated by a 10–20 min break. Raw PET data were corrected for decay, attenuation, and scatter, and then reconstructed using ordered-subsets expectation-maximization with ECAT v7.2 software. Reconstructed data were combined into 16 images, and the images were motion-corrected using FSL McFLIRT and co-registered to the individual’s structural MRI scan image using a six-parameter, rigid-body transformation computed with the ART software package. Structural images were magnetization-prepared, rapid-acquisition, gradient-echo scans acquired during a separate session using a Siemens Sonata 1.5T MRI scanner. All images were registered to MNI152 space using FSL FLIRT. Volumes of interest (VOIs) were derived from the Harvard-Oxford atlases transformed into individual native space, or defined using FSL FIRST. VOIs of the functional striatal subdivisions were created as described previously. Time-activity data within VOIs were imported into the PMOD 3.2 kinetic modeling analysis program, and time-activity curves were fit using the simplified reference tissue model 2 (SRTM2). The cerebellum was used as the reference region. The rate constant for transfer of the tracer from the reference region to plasma (k2′) was computed as the volume-weighted average of estimates from receptor-rich regions, calculated using the simplified reference tissue model, as suggested by Ichise et al. Time-activity curves were re-fit using SRTM2, with the computed k2′ value applied to all brain regions.

Continuous variables were assessed for homogeneity of variance across groups using Levene’s tests. Demographic variables were examined for group differences using two-tailed independent-samples t-tests, Mann-Whitney U-tests, or Fisher’s exact tests, as appropriate. Group differences in discount rate and BPND were tested using separate independent-samples t-tests, and potential confounding variables were assessed as covariates. As expected, the distribution of discount rates was positively skewed. Because a natural log transform yielded a more normal distribution, ln(k) was used for analyses. The threshold for statistical significance was set at α = 0.05 for all analyses. One-tailed p-values are reported for analyses where a specific directional effect was predicted. Exploratory analyses were also carried out to investigate whether discount rate is negatively correlated with BPND in extrastriatal regions. These analyses were restricted to regions that exhibit appreciable [18F]fallypride BPND.

In line with previous reports, MA users displayed lower striatal D2/D3 receptor availability and higher discount rates than controls, on average. As hypothesized, discount rate was significantly negatively correlated with striatal D2/D3 receptor availability in the combined sample and among MA users alone. Although the slopes of the striatal correlations were not significantly different between controls and MA users, the relationship did not reach statistical significance among controls alone. Exploratory analyses revealed negative relationships between discount rate and D2/D3 receptor availability in every extrastriatal region examined among MA users, but none retained significance following correction for multiple comparisons. While substantial evidence implicates dopamine as a key determinant of intertemporal choice, this study is the first to link temporal discounting directly with a measure of dopamine signaling capacity.
The findings indicate that deficient D2/D3 receptor availability may contribute to steep temporal discounting among individuals with substance use disorders, attention-deficit hyperactivity disorder, or obesity, and carriers of the A1 allele of the ANKK1 Taq1A polymorphism. This reasoning is supported by reports that rats treated chronically with MA or cocaine display greater discounting of delayed rewards than saline-treated rats, as both of these stimulants induce persistent reductions in striatal D2/D3 receptor availability in rats and monkeys following chronic exposure. The results are also compatible with the literature concerning the neuroanatomical substrates of intertemporal choice. There was evidence of correlations involving several brain regions that have been implicated by functional neuroimaging and lesion studies as playing a role in selecting between immediate and delayed rewards: e.g., the midbrain, dorsal striatum, globus pallidus, thalamus, amygdala, hippocampus, ACC, and insula. The prefrontal cortex is critically important for the ability to resist temptation for instant gratification in order to achieve long-term goals, and striatal D2/D3 receptor availability modulates PFC activity when goal-directed choices are made. Moreover, D2/D3 receptor availability in the putamen has been shown to be negatively correlated with glucose metabolism in the orbitofrontal cortex, which is implicated in delaying gratification, especially among MA users. Choosing a smaller, more immediately available reward over a larger, more delayed alternative can be considered an impulsive choice. However, while striatal D2/D3 receptor availability has been shown to be negatively correlated with trait impulsivity among MA users, there was no evidence that BIS-11 total scores were correlated with discount rates in this sample of participants.
Still, as expected, total BIS-11 scores were robustly higher among MA users than controls on average in this sample, and were negatively correlated with striatal D2/D3 receptor availability when controlling for age in the combined sample. This result implies that even though both trait impulsivity and temporal discounting are related to striatal D2/D3 receptor availability, they represent at least partially separable constructs. One limitation of this study is that [18F]fallypride has comparably high affinity for both D2 and D3 dopamine receptors, particularly as levels of D3 receptors may be higher than once estimated in multiple brain regions, including the striatum.

The dependent variable was the number of ARBs the prior month at each assessment

No student was selected because of current or past alcohol problems, and, based on questions extracted from the Semi-Structured Assessment for the Genetics of Alcoholism interview, the protocol excluded nondrinkers, individuals with severe psychiatric diagnoses, students who ever met DSM-IV criteria for dependence on alcohol or illicit drugs, and Asian individuals who became physically ill after one standard drink. Five hundred subjects were enrolled, half above and half below the median for the number of drinks required for effects the first five times of drinking, using the Self Report of the Effects of Alcohol (SRE) questionnaire. The SRE determines the mean number of standard drinks needed across up to four possible alcohol effects actually experienced early in the drinking career. These included the drinks required to produce any effect, slurred speech, unsteady gait, and unwanted falling asleep. The SRE has Cronbach alphas and repeat reliabilities >.88. This retrospective self-report measure was used rather than alcohol challenges because the latter are too expensive and time consuming for use in a large population. The overlap between SRE and alcohol challenge-based LRs in predicting heavy drinking and alcohol problems is 60%, and the measures produce similar results when used in different generations of the same families. High- and low-LR subjects were randomly assigned to three conditions: two groups who watched 5 videos regarding two different prevention approaches, and no-intervention controls. The latter helped control for general campus-related changes in drinking over time, and, reflecting the emphasis in the larger study on the impact of education groups, most students were assigned to active education.

Analyses began with data transformations based on distributional properties for numbers of ARBs and maximum drinks per occasion, using inverse reflected and square root transformations, respectively.

For the first set of analyses, to address Hypotheses 1–3, numbers of ARBs per assessment were compared across ethnic groups overall, across ethnic groups among females and males separately, and across ethnic groups among subjects with high and low LRs separately. As shown in Table 3 regarding each figure, at each assessment ethnic group differences were evaluated using two one-way ANOVAs, first presenting the F-value for differences in prior-month ARBs across ethnic groups without controlling for covariates, and again presenting F-values controlling for maximum drinks consumed the prior month and for education group assignment. The final analysis in Table 4 addressed Hypothesis 4 by evaluating main and interaction effects for numbers of ARBs across eight time points using a 3 ethnic groups by 2 sexes by 2 LR categories by 8 time points mixed-design ANOVA that controlled for maximum drinks while using education group as a covariate. For all analyses, missing data were handled through a maximum likelihood procedure. At baseline, the 398 eligible participants were 18-year-old UCSD freshmen, of whom 62% were female; 40% were EA, 20% Hispanic, and 41% of Asian descent. Table 2 presents the numbers of subjects across combinations of ethnicity, sex, and LR group. The SRE values averaged 4 drinks across four possible effects actually experienced the first five times they drank. In the prior month, these students consumed on average 6 maximum and 4 usual drinks per occasion, with 4 drinking occasions per month. At baseline, 21% noted having experienced an ARB in the prior month, with an average of 0.33 such episodes. About 40% had used cannabis the prior month, and 87% were in the active intervention group in the prevention study. Table 3 presents statistical analyses associated with the 3 figures, while Table 4 shows results of a mixed-design ANOVA for the overall statistical analysis regarding Hypothesis 4.
These statistical results are mentioned here because they relate to interpretation of the hypotheses. Figure 1 presents the average number of ARBs the month before each assessment for members of the three ethnic groups, with the highest rates for EA students, the lowest for Asian individuals, and an intermediate rate for Hispanic students. While not shown in the figures, over the 55 weeks 43.0% had at least 1 ARB, including 13.3% with 1, 8.8% with 2, 2.5% each with 4, 5, or 6 ARBs, and 0 to 1.0% each with between 7 and a maximum of 36 ARBs.

As demonstrated by the absence of a significant ethnicity by time interaction in Table 4, general patterns of ups and downs in numbers of ARBs over the 55 weeks were similar for the three groups: ARB values tended to diminish between January and March, increase in June in concert with a campus festival known for heavy drinking, decrease again over the summer when most students returned home, and rise after returning to school. However, regarding Figure 1, as shown in Table 3, the ANOVAs carried out at each time point revealed significant differences in the average number of ARBs across groups at every evaluation. After controlling for maximum drinks reported at each assessment and education group assignment, residual statistics for ethnic group differences in ARBs across EA, Hispanic, and Asian subjects remained significant at Times 2 and 4. The absence of a main effect for ethnicity in the mixed-design ANOVA in Table 4 indicates the possibility that other characteristics may have impacted results. Therefore, Figure 2 presents ARB trajectories for the three ethnic groups for female and male students separately. While not shown, during the 55 weeks 48.0% of females and 34.7% of males reported at least one ARB, with an average of 0.30 and 0.19 ARBs per assessment, respectively. Ethnic group differences in ARBs were prominent among females, with the highest ARB rates for EA women, the lowest for Asian individuals, and intermediate, but relatively low, rates for Hispanics. Looking at each assessment for females, statistical analyses in Table 3 demonstrate significant ethnic differences in the rates of ARBs for raw ARB numbers at every assessment, which remained significant at Times 4, 5, and 8 after controlling for education group and maximum drinks. Differences across ethnic groups were less apparent for males, and the ethnicity by sex by time interaction was significant in Table 4.
Also, for between-subjects analyses in which time was collapsed and average scores across the eight time points were used, there was a significant sex main effect, and the ethnicity by sex interaction was a trend. Next, the potential relationship of LR to the ethnic patterns of ARBs over time was evaluated in Figure 3. Here, high- and low-LR subjects showed the same general pattern of ARBs across time demonstrated in Figure 1, including the highest rates of ARBs per assessment for EA, the lowest for Asian, and intermediate rates for Hispanic students. However, ethnic group differences in ARBs were more robust for high-LR subjects, with Table 3 revealing significant differences in raw ARB numbers across the three ethnic groups at every assessment, which remained significant at Time 7 after controlling for maximum drinks and education group assignment, with a trend at Time 4.

The only significant differences in raw ARB numbers for low-LR subjects were noted at Times 2 and 4, each of which lost significance once residuals were used. In Table 4 the sex by LR by time interaction was significant, and the LR group by time interaction for patterns of ARBs was a trend. Thus, the relationship of LR to differences in ARB patterns across ethnic groups was modest, and was most robust when considered in the context of sex effects. Regarding Hypothesis 4, as briefly alluded to above, the overall analysis in Table 4 indicates interactions regarding ARBs among ethnicity, sex, and LR in two 3-way interactions. However, the 4-way interaction was not significant. It is important to note that cross-sectional data in Figures 1 to 3 were analyzed after controlling for the education group in which students participated. Table 4 offers additional information about effects of education group assignment, which was used as a covariate. Here, the education group by time interaction was a trend, and the education group main effect was significant, with controls having higher ARB frequencies than the active education participants.

Alcohol-related blackouts are highly prevalent phenomena associated with potentially severe problems. Recently, the prevalence of ARBs has reached alarming rates, especially in females and individuals with early-onset drinking. The UCSD freshmen studied here are no exception to these trends, as 43% of these students reported at least 1 ARB during the 55 weeks, including 48% of females and 35% of males. While the risk for these alcohol-related anterograde memory lapses increases with BACs, ARB vulnerabilities were also related to ethnic background, female sex, and levels of response to alcohol. The patterns and interactions among these characteristics are the focus of this paper. The current analyses added potentially useful data to the study of ARBs.
The sample is relatively large, and subjects were assessed prospectively eight times over 55 weeks during a life period likely to involve heavy drinking. Several assessments were scheduled at periods when the rates of ARBs were likely to change, including following a heavy-drinking campus festival, summer break, and after returning to school as sophomores.

Data were evaluated while controlling for maximum drinks, thus diminishing the possibility that ARB patterns simply reflected impacts of heavy drinking itself, and after controlling for possible effects of the prevention trial from which the data were extracted. The major questions focused on improving understanding of how ethnicity, sex, and LR related to rates of ARBs. To address Hypothesis 1, evaluations began with documentation of expected ethnic group differences in rates of ARBs across time. Consistent with most prior studies, the highest rates were observed for students of EA origin, the lowest among Asian students, with an intermediate rate for Hispanic individuals. This pattern of the number of ARBs persisted after controlling for maximum drinks and the prevention group in which a person participated in the larger study. While fluctuations in ARBs across the year were fairly similar for the three ethnic groups, rates of ARBs were different across ethnicities. As suggested by several recent papers and predicted in the first part of Hypothesis 2, women had higher ARB rates. However, contrary to the second half of that hypothesis, the relationship of ethnicity to ARBs over time was different in females and males. The expected pattern of highest ARBs in EA students and lowest in Asian individuals was most obvious for females and less prominent for males. The mixed-design ANOVA in Table 4 demonstrated significant sex main effects, as well as ethnicity by sex by time and sex by LR by time interactions. The key role of sex in the rates of ARBs over 55 weeks and the interactions of sex with ethnicity might reflect several mechanisms. First, women develop higher BACs per drink, which may translate into higher risks for ARBs. The differences across ethnicities may be especially strong in women vs. men, as Asian and Hispanic women may also have stronger culture-based prohibitions against heavier drinking than seen in EA cultures.
Also, while more research is needed, considering recent documentation of potentially genetically-related physiologic characteristics that may relate to the BAC required for ARBs, higher rates of ARBs in EA women might reflect some sex-related biological mechanisms that contribute directly to the ARB risk. The first part of Hypothesis 3 was also supported in that a low LR was related to higher ARB rates in these subjects. However, the data in Figure 3 indicate that the relationships of ethnicity to ARBs differ in high- and low-LR subjects. It is possible that a greater differential in ethnicity-related ARB risks might be observed primarily in subjects with higher LRs, where drinking quantities are not already elevated by a low sensitivity to alcohol. Finally, regarding the hypotheses, the prediction that ethnic group status would interact with sex and LR to predict ARB propensity was partially supported. Table 4 demonstrates significant 3-way interactions for ethnicity by sex by time and sex by LR by time, but the overall 4-way interaction was not significant. Still, the findings underscore the contention that there is more to ARBs than just how much a person drinks, and support the prediction that ethnicity, sex, and LR all relate to ARB patterns.

Recreational marijuana sales increased every month through August by an average of $2.8 million

Using survey data, the study estimates about 13 percent of Colorado’s population used marijuana at least once in 2010 or 2011. Approximately nine percent of adults reported consuming cannabis in some form at least once a month. Based on a survey about frequency of use, Light et al. estimate that total marijuana demand in the state is 121.4 metric tons annually. According to their methodology, individuals who consume cannabis daily constitute about one-fifth of marijuana users, but about two-thirds of total demand. Results from this study, when considered in conjunction with tax data, present a paradox. If the demand for marijuana was previously underestimated, why have tax revenues associated with the new marketplace fallen short of expectations? One of many possible explanations is that many individuals continue to take advantage of the black or medical markets to avoid the taxes associated with higher-priced retail marijuana. Given the novelty of recreational marijuana legalization, substantial uncertainty existed regarding the economic benefits that would ensue. Taxes on recreational marijuana include a base sales tax of 2.9 percent, an additional 10 percent sales tax, and a 15 percent excise tax. By the end of 2014, recreational and medicinal marijuana sales totaled nearly $700 million, with about 45 percent of the total from retail sales. During the first month of recreational sales, few retail outlets were operational, and sales of medical marijuana were more than double the nearly $15 million in recreational marijuana sales. August was the first month that recreational sales exceeded medicinal sales. Recreational sales decreased for the first time in September and held steady at about $31 million over the next few months.
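The layered tax structure described above can be sketched as a quick calculator. The three rates come from the text; applying the 15 percent excise to a separate wholesale value (rather than the retail price) is an assumption made here for illustration, and the dollar figures are hypothetical.

```python
# Colorado's 2014 recreational marijuana tax layers, per the rates above.
BASE_SALES = 0.029    # state base sales tax
SPECIAL_SALES = 0.10  # additional recreational sales tax
EXCISE = 0.15         # excise tax; assumed here to apply to wholesale value

def recreational_tax(retail_sales, wholesale_value):
    """Total tax on a given volume of retail sales and the wholesale
    value of the product behind them (a simplifying assumption)."""
    sales_tax = retail_sales * (BASE_SALES + SPECIAL_SALES)
    excise_tax = wholesale_value * EXCISE
    return sales_tax + excise_tax

# Hypothetical month: $1M of retail sales on $500k of wholesale product.
print(round(recreational_tax(1_000_000, 500_000)))  # 204000
```

The gap between the 12.9 percent combined sales levy and the separate excise base is one reason headline sales figures do not translate directly into tax revenue.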

December brought another increase as recreational sales passed $35 million for the first time. Recreational sales in January 2015 came in at nearly $36.5 million, so it remains to be seen at what point sales will plateau. Tax revenue associated with the sales has been well below most prior projections. Figure 1 provides monthly tax, licensing, and fee revenue data for both recreational and medical marijuana in 2014. All data are from the Colorado Department of Revenue. As seen in the figure, revenue from medicinal sales remained relatively constant in 2014, ranging from a low of $1.41 million to a high of $1.87 million. Some expected the number of medicinal marijuana patients to decrease with the rollout of recreational marijuana, but the number of residents with a valid red card actually increased by over 4,000 to reach a total of 115,467. Governor Hickenlooper originally estimated the state would receive $134 million in marijuana tax revenue in the 2014–2015 fiscal year. This figure was revised in April to approximately $114 million. When the final tax revenues from 2014 were tabulated, the state had collected $76 million: $63.4 million in taxes, with the remainder from licensing and other fees. The state of Washington collected approximately $22.7 million in tax revenues and fees for marijuana sales that began in July. Under the tax structure approved by voters, the first $40 million in marijuana excise tax revenue is mandated to go toward school construction. Revenue beyond this amount is not designated for particular purposes. In 2014, excise tax revenue totaled $13.3 million, which fell below expectations. January 2015 showed signs of greater optimism as $36.4 million in recreational marijuana was sold, resulting in over $2.35 million in excise tax revenue for schools. This was the first month excise tax revenue exceeded $2 million. In January 2014, excise tax revenue totaled just $200,000.
In sum, revenue from legal recreational and medicinal marijuana continues to grow steadily, despite the fact that revenue levels have not approached what many originally projected.
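The layered tax structure described earlier (a 2.9 percent base sales tax, an additional 10 percent sales tax on retail sales, and a 15 percent excise tax applied at the wholesale transfer) can be sketched as simple arithmetic. The dollar amounts below are hypothetical illustrations, not figures from the study:

```python
def colorado_marijuana_tax(retail_sale, wholesale_value):
    """Rough sketch of Colorado's 2014 recreational marijuana tax layers.

    retail_sale: consumer-facing sale amount (both sales taxes apply here)
    wholesale_value: value at first transfer from cultivator (excise applies here)
    """
    base_sales_tax = 0.029 * retail_sale     # 2.9% state base sales tax
    special_sales_tax = 0.10 * retail_sale   # additional 10% marijuana sales tax
    excise_tax = 0.15 * wholesale_value      # 15% excise on the wholesale transfer
    return base_sales_tax + special_sales_tax + excise_tax

# Hypothetical example: a $100 retail sale whose product carried a $50 wholesale value
total = colorado_marijuana_tax(100.0, 50.0)  # 2.90 + 10.00 + 7.50
```

This combined burden on retail product, roughly a fifth of the sale value in the hypothetical example, is one reason the text's lower-taxed medical and untaxed black markets remained attractive.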

For numerous reasons, the continued unfolding of the recreational marijuana market merits watching. Stricter packaging rules for edible marijuana products were recently adopted. In March 2015, the state had to defend legalized marijuana in federal court for the first time in response to a lawsuit filed by Nebraska and Oklahoma officials. One aspect of the state’s marijuana industry in a nascent phase is marijuana tourism. A recent study estimated that “out-of-state visitors currently represent about 44 percent of metro area retail sales and about 90 percent of retail sales in heavily visited mountain communities. Visitor demand is most prevalent in the state’s mountain counties, where combined medical and retail marijuana sales more than doubled after retail sales were legalized in January 2014”. In some areas, much of the revenue generated through marijuana sales comes from out-of-state visitors. Regulations initially imposed differentiated sales limits whereby state residents could purchase four times as much marijuana per day as out-of-state visitors. The tiered limits were intended to limit the amount of marijuana transported across state borders. Proposals to equalize these limits would likely spur greater tax revenue collection in certain parts of the state. Thus far, beyond funds earmarked for school construction, marijuana revenue has financed the regulatory apparatus necessary to oversee the state’s growing recreational and medical marijuana marketplace. Marijuana revenue has also been allocated to youth drug prevention, public safety, and public education. State officials have launched several advertising campaigns to inform residents and tourists about marijuana laws and regulations regarding its use. To date, more than $5 million has been spent on these public awareness campaigns, with more to come. The amount the state is permitted to spend may depend on whether voters collectively decide to let it keep revenues in excess of TABOR limits.

The low level of response to alcohol is one of several aspects of an alcohol response that have been shown to relate to concurrent or future heavy drinking, alcohol problems, and alcohol use disorders. The low LR was initially shown through alcohol challenges to characterize drinkers at higher risk for AUD where, at similar blood alcohol levels and similar drinking histories, those at higher AUD risk demonstrated lower intensities of reaction to alcohol for subjective feelings of intoxication and levels of alcohol-induced increases in body sway. The low LR phenomenon was also demonstrated to relate to higher AUD risk using a retrospective measure of the number of drinks needed across up to four effects. The lower alcohol responses associated with the low LR also relate to biological differences beyond subjective feelings and are genetically influenced. The biological markers associated with the low LR included several aspects of central nervous system functioning, such as lower alcohol-related changes in adrenocorticotropic hormone, background cortical electroencephalogram measures of alpha rhythm patterns, and event-related potential P3 wave latency. The finding that aspects of the low LR were observed in CNS measures led to a search for brain mechanisms that might contribute to the lower intensity of alcohol response. One series of studies compared functional magnetic resonance imaging blood oxygen-level dependent signal results following alcohol and placebo beverages during cognitive tasks in up to 60 matched pairs of individuals with high and low LRs to alcohol. As summarized in the most recent of those papers, although the LR groups were similar in their performance on an emotional face recognition task, those with low LRs demonstrated lower BOLD responses in processing different types of facial affect relative to a control condition in task-relevant brain regions such as the insula and anterior cingulate.
These results were seen during fMRI sessions with both alcohol and placebo. They suggest that, compared with matched high-LR subjects, individuals with low LR potentially required greater activation in relevant brain regions, including the middle and inferior frontal gyri, cingulate, and insula, to process emotion-laden faces. Subsequent follow-up of these subjects demonstrated that aspects of the fMRI patterns added significantly to the alcohol challenge-based measure of low LR in predicting future heavier drinking and alcohol problems. The authors concluded that these results indicate the low LR to alcohol phenomenon might relate to problems recognizing more subtle effects of modest sensory inputs, including modest levels of intoxication. The ability to decode facial affect is important when assessing one’s immediate social environment. This process provides valuable information regarding others’ internal affective state, enabling behavioral adaptation according to others’ thoughts and intentions, steps that facilitate social interactions in daily life. Impairments in decoding basic and complex emotional facial expressions of others have been consistently reported in individuals with AUD, even when they are sober, and AUD is associated with difficulties in emotional processing.

Such deficits might impede emotional self-regulation and social interactions. While there is evidence that individuals with AUD have emotional processing deficits, less is known about whether the observed deficits are sequelae of chronic alcohol consumption or whether they predate heavy drinking in a manner potentially associated with a LR phenotype. Evidence that such findings might predate heavy drinking comes from reports of lower activation in middle temporal and inferior frontal gyri in response to an emotion-based psychological test, and attenuated amygdala activation to faces expressing fear in young individuals with AUD relatives. In addition, youth at high risk for substance use disorders have been found to exhibit greater activation in medial prefrontal, precuneus, and occipital cortices on an angry/anxious facial emotion recognition matching task compared with low-risk, family history negative individuals. Taken together, these differences suggest impaired affective processing in non-heavy-drinking individuals at high familial risk for AUD. The amygdala is a central structure in the limbic system that is associated with affective, stress, and reward processing in coordination with the prefrontal cortex. The emotional face matching paradigm of Hariri et al. is a widely used neuroimaging procedure designed to activate the amygdala. The original version was limited to negative emotions like fear and anger, and modifications of the paradigm have been developed that include positive emotional stimuli that can be used in neuroimaging experiments of reward. Positive emotions activate additional brain regions including the ventrolateral and medial PFC, the insula, and the inferior frontal gyrus. Overall, these findings support the need to examine different emotional stimuli separately because different brain regions might react differently to negative versus positive emotions.
Recent fMRI studies have begun to examine functional integration, or connectivity, between brain regions by using statistical techniques that examine relationships among two or more regions represented by distinct fMRI BOLD time-series analyses. Positive functional connectivity between regions is thought to reflect patterns of synchronous activity or increased communication. One might hypothesize that differences in connectivity between brain regions are related to the LR group differences reported in the Paulus et al. paper, as connectivity analyses better capture underlying functional brain networks than isolated regional brain activation. In the case of the functional networks involved in recognition of emotions expressed in pictures of faces, it is likely that the connections between the cortex and subcortical limbic regions, including the amygdala, are critically important. There is a growing literature documenting disrupted PFC-amygdala connections of the fronto-limbic pathways in response to alcohol. Specifically, Gorka et al. examined functional connectivity between the PFC and amygdala in a two-session, double-blind, within-subjects crossover pilot fMRI study of 12 heavy social drinkers during the same emotional processing task used by Paulus et al. These authors reported that during the processing of a broad range of emotional stimuli including angry, fearful, and happy faces, alcohol significantly reduced functional coupling between the amygdala and orbital frontal cortex differentially depending on the facial emotion presented. Another study examined resting-state functional connectivity in 83 non-AUD alcohol drinkers and found an association between higher alcohol misuse scores and decreased amygdala-dorsal anterior cingulate cortex connectivity.
These data indicate that the underlying processing of emotional signals required to detect a potential threat in the environment and make an appropriate response is altered while intoxicated. However, neither of these protocols included a measure of the LR to alcohol, a phenotype associated with future heavy drinking and alcohol problems. This study presents the results of secondary analyses of amygdala-based functional networks using data from the fMRI data set reported in Paulus et al.

Living together is a covariate influencing microbial populations in humans

Analyses included 366 MZ pairs, 386 DZ pairs, and 37,832 unrelated pairs obtained by using age- and DNA-collection-year-matched non-cotwin pairs from the twin sets. β-diversity measures between groups were compared via the Wilcoxon-Mann-Whitney test. P values were calculated as previously described. In short, the pair labels were permuted 10,000 times and the W test statistic collected from each permutation. The P value was then calculated by dividing the number of W test statistics greater than the observed W test statistic, plus 1, by the number of permutations plus 1. Biplot analyses were used as implemented in QIIME. In experiments where cohabitation was required, only cotwins 18 and under, and those over 18 who identified themselves as cohabitating, were included, which removed 328 subjects living separately from their cotwins from the total twin sample. This population of 588 twin pairs is referred to as the “cohabitation sample.” Cohen’s D effect sizes for β-diversity measurements were calculated using the R package ‘effectsize’. Microbial traits included taxonomic groups, OTUs, α-diversity measurements, and principal coordinates from β-diversity measurements, collapsing all perfectly correlated traits. Microbial traits were then processed within each population separately: twin pairs, European unrelated, and Admixed American unrelated. Traits were transformed to z-scores and then categorized as either continuous or categorical. The Shapiro-Wilk test was performed using the R package ‘stats’. Categorical traits were then binned based on a z-score transformation of all non-zero values: zero counts; less than or equal to −1; greater than −1 and less than or equal to 0; greater than 0 and less than or equal to 1; and greater than 1. Some traits failed to categorize due to lack of variation, resulting in the final trait counts: twins, EUR unrelated, ADM unrelated.
Only the continuous traits were used in the EUR and ADM populations, so data are provided only for those traits.
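The permutation scheme described above can be sketched in a few lines. This is a generic illustration with hypothetical inputs, not the study's pipeline; it uses a plain Mann-Whitney U statistic as a stand-in for the W statistic computed on β-diversity values under permuted pair labels:

```python
import random

def mann_whitney_u(x, y):
    """U statistic: count of (x_i, y_j) pairs with x_i > y_j (+0.5 for ties)."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

def permutation_p_value(group_a, group_b, n_perm=10000, seed=0):
    """Permutation P value as in the text: shuffle group labels n_perm times,
    then P = (number of permuted statistics > observed, plus 1) / (n_perm + 1)."""
    rng = random.Random(seed)
    observed = mann_whitney_u(group_a, group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if mann_whitney_u(pooled[:n_a], pooled[n_a:]) > observed:
            count += 1
    return (count + 1) / (n_perm + 1)
```

Adding 1 to both numerator and denominator guarantees the P value is never exactly zero; with 10,000 permutations the smallest reportable value is about 1e-4.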

Descriptions of all traits can be found in Additional file 1: Tables S11–14. We performed an analysis of 752 twin pairs from the Colorado Twin Registry to estimate host genetic and environmental contributions to salivary microbiome composition. The sample included 366 monozygotic pairs and 386 dizygotic pairs (263 same-sex and 123 opposite-sex) ranging from 11 to 24 years of age. Taxonomic analysis using sequencing of variable region IV of the 16S rRNA amplicon prepared from the saliva of each twin was carried out using QIIME on high-quality Illumina MiSeq paired-end reads, as previously reported. We determined the phylum abundances to be Firmicutes, Proteobacteria, Bacteroidetes, Actinobacteria, and Fusobacteria among the 2664 operational taxonomic units found, which is consistent with the “core” salivary microbiome we and others have previously reported. All of our analyses included only OTUs that were present in at least 2 subjects and observed at least 10 times in total after rarefying at 2500 reads. This filtering yielded 895 OTUs that were considered for all subsequent experiments. Comparing mean β-diversity among MZ, DZ, and unrelated individuals allows assessment of microbial population differences between groups. With both Bray-Curtis and Weighted UniFrac measures of β-diversity, MZ twin pairs were significantly more similar to each other than DZ twin pairs, and for all 3 β-diversity measurements MZ and DZ twin pairs were significantly more similar to each other than to unrelated individuals. This analysis was also carried out with abundant OTUs and all OTUs, with very similar results. Rarefaction at 2500 reads produced consistent results across all rarefactions, so for subsequent analyses one rarefaction to 2500 reads is shown.
We could detect no significant effect of sex on any β-diversity measure when comparing same-sex vs. opposite-sex dizygotic twin pairs, perhaps because the sample size did not provide enough power to differentiate sex effects from inter-individual variation. In subsequent DZ analyses, therefore, opposite-sex pairs were included. The Colorado Twin Registry includes highly detailed phenotypic information that is invaluable in identifying and controlling for environmental confounders that may play an important role. It is well known that MZs tend to cohabit longer than DZs, and indeed our previous work has shown that shared environment influences the oral microbiome.
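The OTU filtering and rarefaction rules stated above (present in at least 2 subjects, observed at least 10 times in total, after rarefying to 2500 reads) were applied with QIIME; the sketch below is an illustrative re-implementation of the same logic on a toy count table, not the study's code:

```python
import random

def rarefy(counts, depth, seed=0):
    """Subsample one sample's OTU counts to a fixed read depth without replacement.

    counts: dict mapping OTU id -> read count. Returns None if the sample has
    fewer total reads than `depth` (such samples are dropped).
    """
    rng = random.Random(seed)
    # Expand counts into one label per read, then draw `depth` reads at random
    reads = [otu for otu, c in counts.items() for _ in range(c)]
    if len(reads) < depth:
        return None
    rarefied = {}
    for otu in rng.sample(reads, depth):
        rarefied[otu] = rarefied.get(otu, 0) + 1
    return rarefied

def filter_otus(samples, min_subjects=2, min_total=10):
    """Keep OTUs present in >= min_subjects samples with >= min_total reads overall."""
    all_otus = {otu for s in samples for otu in s}
    kept = set()
    for otu in all_otus:
        prevalence = sum(1 for s in samples if s.get(otu, 0) > 0)
        total = sum(s.get(otu, 0) for s in samples)
        if prevalence >= min_subjects and total >= min_total:
            kept.add(otu)
    return kept
```

Prevalence and abundance thresholds like these remove rare OTUs that are mostly sequencing noise, which is why the 2664 observed OTUs reduce to 895 analyzed ones.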

Therefore, it was possible that the tendency of MZ cotwins to live together longer could be driving the observed heritability. To examine this potential confounder, we reanalyzed the data in Fig. 1a based on questionnaire data from the sample, restricting the analysis to only cohabitating pairs. While ideally we would also have analyzed only twin pairs living apart, our sample size did not permit it. As seen in Fig. 1b, MZs remained significantly more similar to each other than DZ twin pairs for the Bray-Curtis and Weighted UniFrac measurements, a pattern also observed in the abundant and unfiltered/unrarefied OTU tables described above. We conclude that cohabitation does not play a significant role in the observed microbiome heritability. To quantify the differences between groups, Cohen’s D effect sizes were calculated for all β-diversity measurements for both the full sample and the sample limited to cohabitating twin pairs. Comparisons between unrelated and twin pairs yielded medium to large effect sizes. All other comparisons were small or negligible, the largest being between MZ and DZ pairs for Bray-Curtis. To quantify the effect of cohabitation on β-diversity measurements, the effect sizes between all twin pairs and just pairs living together were compared for all measurements, yielding only negligible effect sizes, consistent with the conclusion that cohabitation was not driving the observed heritability. The stability of the oral microbiome over time in adults is reported to be remarkably high relative to that of other body sites. To confirm and extend this observation, we assessed the stability of the oral microbiome in longitudinal samples from our cohort for 111 individuals, taken 2–7 years apart. The mean β-diversity measurements between longitudinal samples were compared to the mean for unrelated individuals of different ages.
For all three β-diversity measurements examined, subjects were significantly more similar to themselves than were unrelated individuals. Intraclass correlation coefficients (ICCs) are useful for estimating heritability of individual observations within a group of related observations; the higher the ICC values for MZ pairs compared to DZ pairs, the greater the heritability. As shown in Fig. 2, ICC values for essentially all abundant taxa are significantly greater in MZ than DZ pairs.
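The ICC comparison between MZ and DZ pairs can be sketched with the standard one-way ANOVA formulation for pairs (two members per pair). This is a generic illustration with hypothetical trait values, not the estimator the study's R code necessarily used:

```python
def pair_icc(pairs):
    """One-way intraclass correlation for twin pairs (k = 2 members per pair).

    pairs: list of (twin1_value, twin2_value) for one microbial trait.
    ICC = (MS_between - MS_within) / (MS_between + MS_within) for k = 2.
    """
    n = len(pairs)
    grand = sum(a + b for a, b in pairs) / (2 * n)
    # Between-pair mean square: variability of pair means around the grand mean
    ms_between = sum(2 * ((a + b) / 2 - grand) ** 2 for a, b in pairs) / (n - 1)
    # Within-pair mean square: disagreement between cotwins
    ms_within = sum((a - (a + b) / 2) ** 2 + (b - (a + b) / 2) ** 2
                    for a, b in pairs) / n
    return (ms_between - ms_within) / (ms_between + ms_within)
```

Under this logic, cotwins with identical trait values give ICC = 1, and a higher ICC in MZ than in DZ pairs for a given taxon is what the text reads as evidence of heritability.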

No significant difference was observed between the same-sex and opposite-sex DZ pairs across the taxa analyzed. The set of taxa analyzed were those categorized as continuous. Significance was established with Wilcoxon Signed Rank tests, strongly supporting the heritability of taxon abundance in this twin set. We also tested 4 different alpha diversity measures and the first 3 principal coordinates for three different β-diversity measurements, and saw that most traits were consistent with the conclusion that MZ cotwins are more similar than DZ cotwins. A complete list of the 41 phenotypes tested and their ICC values can be found in Additional file 1: Tables S4 and S11. Twin modeling approaches are used to estimate the amount of variance attributable to additive genetics (A), common environment (C) or dominance (D), and unique environment (E). An ACE or ADE model was constructed for each of 946 traits including alpha diversity, principal coordinates of β-diversity, taxonomic groups, and individual OTUs. A complete list of the A, C/D, and E values for each of these phenotypes can be found in Additional file 1: Table S5. A power analysis shows that our sample is well powered to model continuous traits but underpowered for categorical traits. Traits that were not categorized as continuous were treated as categorical. Therefore, while still of interest, the categorical traits should be viewed with lower confidence. In the twin models, C and D cannot both be modeled at the same time since each captures the same variance, but the genetic contribution can be compared between phenotypes modeled with ACE or ADE models. Of the 946 traits, 55% were modeled as ACE and 44% as ADE. Averaging heritability estimates for traits within each phenotype category described above, a trend emerged in which principal coordinates of β-diversity measurements have the highest mean heritability estimates, for both the full sample and the cohabitating twin pairs.
The most heritable traits were OTU4483015, which corresponds to an unnamed species of Granulicatella, and PCo 2 for Bray-Curtis. To better understand which taxa were driving this PCo, a QIIME biplot analysis identified the genus Streptococcus as the most abundant taxon on the first 3 principal coordinates from Bray-Curtis. Repeating the ACE models excluding twin pairs who reported that they had moved out after age 18 did not greatly alter the heritability estimates or other components of the model. The unique environment accounted for most of the variation of the traits tested in both the full and cohabitation samples. Little change in the common environment was observed between the full and cohabitation sample analyses. We compared phenotypes deemed heritable in our study with phenotypes found to be heritable in 5 studies of the gut and 1 of dental plaque. We found that 14 of the 44 traits were mentioned with heritability estimates of at least 1% in one or another study, though none showed high statistical significance. This is consistent with the possibility that genes that may drive the heritability in the salivary microbiome may also have more general influences in other human niches. It is assumed that host genes interacting with the oral microbiome are responsible for the observed heritability.
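The A, C/D, and E variance components above are estimated with formal structural equation twin models; the classic back-of-envelope version those models generalize is Falconer's decomposition from the MZ and DZ correlations. The sketch below shows only that approximation, with hypothetical correlations, and is not the study's fitting procedure:

```python
def falconer_ace(r_mz, r_dz):
    """Back-of-envelope ACE decomposition from twin pair correlations.

    Falconer's formulas (an approximation; the study fit full ACE/ADE models):
      A (additive genetics)  = 2 * (r_mz - r_dz)
      C (common environment) = 2 * r_dz - r_mz
      E (unique environment) = 1 - r_mz
    The three components sum to 1 by construction.
    """
    a = 2 * (r_mz - r_dz)
    c = 2 * r_dz - r_mz
    e = 1 - r_mz
    return a, c, e

# Hypothetical twin correlations for one microbial trait
a, c, e = falconer_ace(r_mz=0.50, r_dz=0.30)
```

With these made-up inputs most of the variance lands on E, mirroring the text's finding that unique environment accounted for most trait variation; an r_dz above half of r_mz pushes variance toward C, while r_dz below half of r_mz is the situation where an ADE model (dominance) fits better than ACE.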

The best way to identify them is through analysis of associations between genetic variation and traits. The power to detect such associations is a function of the number of individuals, the number of tests, and the number and types of SNPs available. The greatest power to uncover associations for a fixed sample size is obtained by analyzing a limited number of phenotypes based on prior information rather than repeatedly testing multiple hypotheses on the same data. To limit the hypotheses tested, we focused on the traits found most heritable in the twin studies, as those traits are expected to produce the best results in a genome-wide association study. DNA was previously prepared from the saliva and blood of 1480 individuals unrelated to the twins and to each other. Human DNA from this sample was subjected to Affymetrix chip-based genotype analysis, which resulted in 696,388 validated human SNP genotypes per individual. The age of subjects ranged from 11 to 33 years, and 29% were female. Ancestry was assigned by weighting a subset of the genotyped SNPs against the 1000 Genomes dataset and assigning individuals to ancestry groups using principal coordinate analysis plots. The genotyped SNPs were then quality filtered and submitted to the Michigan Imputation Server for phasing and imputation. After quality filtering, this produced 6,862,363 European and 8,172,048 Admixed American imputed variants, respectively, which were used in all subsequent analyses. Imputed SNPs from two different randomly selected chromosomal regions in 68 individuals were resequenced with Sanger sequencing to validate imputation. We found that 65 of 68 imputed calls validated completely, with 3 apparently incorrectly imputed. We conclude that imputation provides significantly greater resolution to SNP-based maps at little cost to accuracy.
The salivary microbiome of the 1480 individuals was characterized by 16S rRNA sequencing, identifying 2679 OTUs; again, as in the twin study, the most prevalent phyla were Firmicutes, Proteobacteria, Bacteroidetes, Actinobacteria, and Fusobacteria. Filtering by prevalence and abundance as described above produced a total of 931 OTUs used for our studies. The SNP-based heritability of microbiome phenotypes in the unrelated population was assessed using Genome-wide Complex Trait Analysis (GCTA), which estimates the amount of phenotypic variance that can be explained by SNP-based composite genetic variance. To avoid false positives, the genetic relationship matrix was limited to subjects estimated to have IBD < 0.025. The first 10 ancestry principal components from LD-pruned SNPs were included to control for population stratification. Given the relatively small sample size, single-trait heritability estimates were not evaluated; rather, gross trends were observed across all continuous traits.

Our studies suggest that cannabinoid receptor activation impacts all of these

Despite the presence of mRNA, standard flow cytometry failed to detect CB1 or CB2 receptor protein on the cell surface of monocytes, even though antibodies were directed against their N-terminal epitopes. However, when cells were fixed and permeabilized, specific staining for both CB1 and CB2 was detected, consistent with the presence of intracellular protein. Intracellular background staining with isotype control mAb was minimal for CB1 but dimly positive for CB2, likely reflecting the need for APC-labeled goat anti-mouse F2 as a secondary detection reagent. Due to these differences in fluorescent labels and staining protocols, the relative fluorescent intensities for CB1 and CB2 cannot be directly compared as measures of receptor concentration. The presence of functional CB2 receptor complexes was then assessed by measuring the impact of different cannabinoids on forskolin-induced generation of cAMP. Using CHO-CB2 cells as a model, we confirmed that treatment with THC significantly inhibited the generation of cAMP, as did JWH-015 at p<0.01. Furthermore, the inhibition of cAMP by THC was blocked by pretreatment with SR144528, a selective CB2 receptor antagonist. The same assays were repeated using purified human monocytes. Again, an overall CB2 agonist treatment effect was present. Pretreatment with either THC or JWH-015 inhibited the forskolin-induced generation of cAMP, and the effects of THC were blocked by SR144528. While monocytes express both CB1 and CB2, the predominance of CB2 mRNA and the response of these cells to CB2-selective agents suggest that CB2 acts as the dominant cannabinoid signaling pathway. The differentiation of human monocytes into DC is associated with characteristic changes in cell surface proteins involved in antigen presentation.
To evaluate the effects of THC on this aspect of differentiation, adherent PBMC were cultured for 7 days with GM-CSF and IL-4 and examined for the expression of typical monocyte and DC markers by flow cytometry .

Exposure to THC did not prevent the normal down-regulation of CD14 but did inhibit the upregulation of other cell surface markers characteristic of antigen-presenting cells, including CD11c, HLA-DR, CD40, and CD86. The effects were concentration-dependent, with 0.5 μg/ml THC inhibiting expression of all of these markers by 40–60%. Interestingly, the response profiles were not uniform for every protein. THC produced a uniform decrease in the expression of CD11c and CD40 on all of the cells but resulted in two distinct subsets with respect to the expression of HLA-DR and CD86: one population that did not express these markers and one that expressed relatively normal levels. In the latter case, the relative proportions of these two subsets depended upon the concentration of THC, with higher levels of THC resulting in fewer marker-positive cells. Cannabinoids have been reported to promote the apoptosis of mouse bone marrow-derived DC under certain conditions. To ensure that the phenotypic changes observed in our studies were not the result of poor viability, DC that had been differentiated in the presence of either THC or JWH-015 were stained with propidium iodide and Annexin-V-FITC. There was no significant impact of either cannabinoid on the number of recovered cells or on the frequency of apoptotic or dead cells. In addition to their high-level expression of major histocompatibility complex and costimulatory molecules, monocyte-derived DC are usually characterized by their capacity for antigen uptake, as well as their secretion of cytokines that promote cell-mediated immunity. Receptor-mediated endocytosis was measured by the uptake of FITC-dextran and was dramatically suppressed in cells that had been exposed to THC. The production of IL-10 and IL-12 was also assessed by stimulating cells with SAC and measuring cytokines released into the culture media at 48 h following stimulation.
Interestingly, while the production of IL-12 was significantly suppressed, the secretion of IL-10, which can bias T cell activation toward T helper type 2 and/or T regulatory phenotypes, was not altered.

This differential effect on cytokine production is consistent with an immunoregulatory effect rather than a global suppression of DC function. A number of factors can help restore function to impaired antigen-presenting cells or enhance their capacity to stimulate T cell responses. Given our findings with THC-DC, we hypothesized that a combination of DC activation and cytokine replacement might be effective for this purpose. In initial experiments, DC and THC-DC were exposed to heat-killed and fixed SAC for 18–24 h prior to co-culture with T cells. The goal was to replicate bacterial activation signals that might occur during an immune challenge in vivo. In other experiments, the co-cultures were supplemented with IL-7, IL-12, or IL-15 to replace key cytokines known to be involved in the proliferation and differentiation of effector/memory T cells. As demonstrated in Fig. 6, pre-treating control DC with SAC enhanced their capacity to stimulate T cell proliferation and maturation. In addition, exposing THC-DC to SAC restored some of their capacity to generate mature responder T cells. This effect correlated with the upregulation of HLA-DR, CD80, and CD86 on THC-DC. In addition, supplementing the co-cultures with IL-7 helped SAC-stimulated DC to further promote the expansion and phenotypic maturation of effector T cells, a synergistic effect that was not observed with either IL-12 or IL-15. When assessed in a limited number of experiments, IL-7 also increased the production of IFN-γ and TNF-α, consistent with a restoration of effector/memory function. The human CB2 receptor was first cloned from a human myeloid cell line and has been reported as the predominant cannabinoid receptor subtype expressed by immune cells. Consistent with this, we found that expression of CB2 mRNA predominated over CB1 when fresh human monocytes were purified and assayed by semi-quantitative RT-PCR techniques.
However, neither cannabinoid receptor could be detected on the extracellular surface of monocytes when stained with mAbs known to be specific for their N-terminal sequences.

We recently reported that CB2 may exist as an intracellular protein in immune cells, and others have suggested that CB1 may also function as an intracellular receptor. Consistent with these observations, the addition of an initial fixation and permeabilization step resulted in positive staining by both anti-CB1 and anti-CB2 mAbs, but not by their respective isotype controls. Functional receptor protein was confirmed by assaying the capacity of cannabinoids to inhibit forskolin-induced changes in cAMP. Addition of THC, a pan-agonist with equal affinity for CB1 and CB2, blocked forskolin-induced cAMP in both transduced CHO-CB2 cells and in fresh human monocytes. In addition, this effect was recapitulated by exposure to JWH-015, a selective CB2 agonist, and the effects of THC were completely blocked by SR144528, a selective CB2 antagonist. These findings confirm reports that CB2 predominates as the functional cannabinoid receptor pathway in human monocytes and add the caveat that receptor expression occurs at an intracellular location rather than on the cell surface. Monocytes act as myeloid precursors that can differentiate along a number of functionally distinct pathways depending upon their interaction with cytokines, growth factors, infectious signals, and other regulatory mediators. When driven to differentiate into monocyte-derived DC under the influence of GM-CSF and IL-4, their function can also be modulated by a variety of factors. Concurrent exposure to IL-6 and macrophage colony-stimulating factor can divert differentiation toward macrophages instead of DC. Transforming growth factor-β and IL-23 promote the development of DC that drive Th17-biased responses. IL-10 promotes tolerogenic and Th2-promoting features, while a variety of toll-like receptor ligands and immunostimulatory cytokines will promote DC that stimulate effector/memory T cells.
In this setting, we hypothesized that exposure to THC during the process of DC differentiation would provide valuable insight regarding its immunoregulatory properties. Further, given the immunosuppressive effects that cannabinoids have on antigen-specific T cell responses in animals in vivo and on human T cell activation in vitro, we hypothesized that cannabinoids might render DC tolerogenic or otherwise skew their stimulatory activity. Only a few studies have examined the interaction of cannabinoids with DC, and in most cases the focus has been on murine models or on the effects of cannabinoids on differentiated DC. Do et al. suggested that THC can impair immune responses by inducing DC apoptosis. However, they studied mouse bone marrow-derived DC, and apoptosis occurred primarily when THC concentrations exceeded 5 μM. In our studies, immunoregulatory effects on human monocyte-derived DC were observed at lower THC concentrations, more akin to the peak levels that occur in the blood of marijuana smokers, and THC had no effect on cell recovery or surface staining by Annexin-V.

Instead of apoptosis, we observed broad-ranging effects of THC on the expression of MHC class II and costimulatory molecules, and on the capacity for antigen uptake and IL-12 production. Furthermore, DC that had been exposed to THC during their in vitro differentiation were impaired in their capacity to activate T cells, including both CD4+ and CD8+ responders. T cell proliferation and the acquisition of a memory/effector phenotype were both impaired, as was the release of Th1 cytokines. These effects of THC on the capacity of monocyte-derived DC to stimulate T cells are almost identical to the direct effects of THC on T cell activation, suggesting a coordinated immunoregulatory effect. It is interesting that other immunosuppressive factors, including IL-10 and TGF-β, share this capacity to act in a coordinated manner on both DC and T cells. As is the case with IL-10−/− knockout mice, CB1−/−/CB2−/− double-knockout mice exhibit elevated levels of activated T cells and respond to antigen challenges by producing a higher number of activated effector cells and stronger IFN-γ responses. Collectively, these findings suggest an intrinsic role for endocannabinoid signaling as a homeostatic regulator of T cell activation.

There are a number of critical features that develop during the transition from monocytes into DC that enable them to activate antigen-specific T cells. Among these are high levels of antigen expression in the context of cell-surface MHC, the upregulation of adhesion and costimulatory molecules, and the elaboration of immunostimulatory cytokines. Exposure to THC during the differentiation of monocyte-derived DC impaired antigen uptake and prevented the normal upregulation of MHC class II. These findings are consistent with earlier reports by McCoy et al.
, where THC was found to impair the presentation of whole hen egg lysozyme, which required uptake and processing, but not the presentation of its immunodominant peptides, which bound directly to existing cell-surface MHC. Dendritic cells that present antigen in the absence of adequate costimulatory molecules cannot fully activate T cells and may contribute to the development of T cell anergy. The inhibitory effects of THC on the expression of CD40, CD86 and other costimulatory molecules likely contributed to the failure of THC-DC to stimulate T cell proliferation. Finally, the relative production of IL-10 and IL-12 by DC plays a central role in their capacity to activate either Th1 or Th2 responses. In our studies, THC-DC produced only limited amounts of IL-12 but normal levels of IL-10. Lu et al. reported a similar suppressive effect of THC on the expression of MHC and costimulatory molecules and on the production of IL-12 by mouse bone marrow-derived DC that had been infected with Legionella pneumophila. While these findings add to other compelling evidence that cannabinoids can exert important immunosuppressive effects, clinical evidence that marijuana smoking significantly impairs immune function in humans is limited. One explanation may be that inhaled THC never produces sufficient systemic levels, or that exposures are not sustained for a sufficient period of time, to mediate immunosuppressive effects. Another explanation may be that the effects are short-lived or counterbalanced by the presence of other immune regulatory factors. The study of purified cells in in vitro culture does not adequately replicate the complex environment that occurs during an immune challenge in vivo. In this study we hypothesized that the processes of DC activation and cytokine exposure that occur in response to an infectious challenge might modulate the impact of THC.
Exposing DC and THC-DC to heat-killed and fixed SAC for 18–24 h enhanced their capacity for T cell activation, an effect that was more pronounced with THC-DC than with control DC. Adding IL-12 and IL-15 to the DC:T cell co-culture also enhanced T cell activation and proliferation, but these effects occurred equally with control DC and THC-DC.