
Recent alcohol use was assessed via the HNRP Substance Use History form

Considering that the majority of PWH do not report heavier drinking compared to the general population, there is a need for a more comprehensive understanding of the impact of low-risk alcohol consumption among PWH. Examination of whether low-risk drinking exerts differential neurocognitive effects based on HIV serostatus is particularly salient given the increasingly popular recommendations for older adults to follow certain nutritional guidelines. Given that HIV disease can enhance vulnerability to physiological damage from environmental stressors, there may be no level of alcohol consumption associated with better neurocognitive functioning among PWH. Advancing age is independently associated with a higher risk of neurocognitive and neurodegenerative diseases, including Alzheimer’s Disease and its precursor, mild cognitive impairment. Despite use of combination antiretroviral therapy, older PWH remain particularly vulnerable to HIV-associated neurocognitive impairment and to neurodegenerative diseases associated with aging. Considering that alcohol consumption is common among PWH, and that with advancing age these persons are at heightened risk for neurocognitive impairment, the present study examined associations between the non-linear effect of recent low-risk alcohol consumption and HIV status on global and domain-specific neurocognitive outcomes. Within the range of low-risk drinking, we hypothesized a curvilinear association between recent alcohol consumption and neurocognition among HIV- individuals, such that intermediate levels of low-risk drinking would be associated with better neurocognitive function compared to non-drinking and heavier low-risk consumption; however, we did not expect this curvilinear association among PWH.
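In model form, this hypothesis amounts to a quadratic regression of T-scores on total drinks with serostatus interactions. The notation below is ours for illustration and does not appear in the original paper:

```latex
% Illustrative notation (ours, not the authors'):
% T_i = global/domain T-score, D_i = total drinks in the last 30 days,
% HIV_i = serostatus indicator (1 = PWH), X_i = covariates.
\[
T_i = \beta_0 + \beta_1 D_i + \beta_2 D_i^2 + \beta_3 \mathrm{HIV}_i
    + \beta_4 (D_i \times \mathrm{HIV}_i) + \beta_5 (D_i^2 \times \mathrm{HIV}_i)
    + \boldsymbol{\gamma}^{\top}\mathbf{X}_i + \varepsilon_i
\]
% The hypothesized inverted-J among HIV- participants corresponds to
% beta_1 > 0 and beta_2 < 0; the expected absence of curvature among PWH
% corresponds to beta_2 + beta_5 being approximately zero.
```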

Participants included 310 PWH and 89 HIV- older adults enrolled in NIH-funded research studies at the University of California, San Diego HIV Neurobehavioral Research Program (HNRP) from 2003 to 2016. Participants were recruited from the greater San Diego area by the HNRP. Regulatory approval was obtained from the University of California San Diego Institutional Review Board prior to the start of protocol implementation. We have previously published several papers using other aspects of these data, including medication adherence, age of first alcohol use, and neurocognitive function. The current study represents a secondary analysis of baseline alcohol use and neurobehavioral data from the HNRP. Exclusion criteria for the current analysis included: 1) self-reported current or past diagnosis of a psychotic or mood disorder with psychotic features; 2) presence of a neurological condition that could impair neurocognitive function; 3) positive urine toxicology for illicit drugs or evidence of alcohol intoxication by Breathalyzer test on the day of testing; 4) current diagnosis of AUD; 5) current diagnosis of a non-alcohol substance use disorder; 6) recent “at risk” alcohol consumption as defined per the National Institute on Alcohol Abuse and Alcoholism criteria for “at risk” drinking; and 7) age 49 years or younger. The UCSD Institutional Review Board approved this study, and all participants provided written informed consent to participate. Current and lifetime mood and substance use disorders were assessed via the Composite International Diagnostic Interview, a fully structured, computer-based interview. Diagnoses were made in accordance with DSM-IV criteria, as the parent grants from which baseline data were collected were funded before the DSM-5 was published. DSM-IV criteria for alcohol abuse are met when participants report continued alcohol use despite recurring problems. DSM-IV criteria for alcohol dependence are met when participants endorse symptoms of tolerance, withdrawal, and impaired control over drinking. AUD was assigned when DSM-IV criteria for alcohol abuse or dependence were met, in order to maintain consistency with DSM-5 criteria and nomenclature. Recent alcohol use was assessed via the HNRP Substance Use History form, a modified timeline follow-back measure that assesses alcohol use metrics including the quantity and frequency of alcohol use in the last 30 days.

The variable capturing the total number of drinks consumed in the last 30 days was calculated by multiplying the daily rate of alcohol consumption by the number of consumption days in the last 30 days. The total number of drinks consumed in the last 30 days is hereafter referred to as total drinks. Participants who reported no recent alcohol consumption were included in analyses as alcohol abstainers, with total drinks coded as 0. Participants were administered a well-validated, comprehensive battery of neuropsychological tests designed in accordance with the international consensus conference recommendations for HIV-associated Neurocognitive Disorders. The battery assesses seven neurocognitive domains: verbal fluency, executive function, processing speed, learning, delayed recall, working memory, and motor skills. Individual test raw scores were converted into demographically-adjusted T-scores, which were then averaged across the entire battery and within each domain to derive mean global and domain-specific T-scores, respectively. HIV group differences on demographic, psychiatric, neurocognitive, and alcohol use characteristics were compared using independent t-tests, Wilcoxon tests, and Chi-square statistics, as appropriate. Separate multiple linear regressions examined the interaction between the quadratic effects of total drinks and HIV status on global and domain-specific T-scores. Demographic variables that significantly differed by HIV status at a p < .05 threshold were included as covariates. Considering the high prevalence of lifetime AUD in both persons with and without HIV, lifetime AUD was included as a covariate. Additionally, diagnosis of a lifetime non-alcohol substance use disorder was included as a covariate to account for potential confounding effects of non-alcohol substance use on neurocognitive outcomes. A follow-up analysis was conducted for any model that did not reveal a significant or trend-level interaction term between the quadratic effect of total drinks and HIV status.
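The regressions were run in JMP; purely for illustration, a minimal R sketch of this quadratic-by-serostatus model might look as follows (data and variable names are hypothetical):

```r
# Minimal sketch, not the authors' code; all names are hypothetical.
# Quadratic effect of total drinks interacting with HIV status on global T-scores.
dat$drinks_c <- dat$total_drinks - mean(dat$total_drinks, na.rm = TRUE)  # center before squaring

fit <- lm(global_T ~ (drinks_c + I(drinks_c^2)) * hiv_status +
            age + lifetime_aud + lifetime_sud,
          data = dat)
summary(fit)  # the I(drinks_c^2):hiv_status coefficient tests the quadratic interaction
```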

The follow-up analysis examined the interaction between the linear effect of total drinks and HIV status on domain-specific T-scores, covarying for the demographic variables included in primary regression analyses. As a secondary follow-up analysis, the independent effects of total drinks and HIV status were examined for any model that did not show a significant interaction term, covarying for the same demographic variables as in primary regression analyses. Regression analyses were performed using JMP Pro version 14.0.0. Exploratory analyses, stratified by HIV status, employed the Johnson-Neyman (J-N) technique to identify specific regions along the quadratic curve of total drinks at which total drinks had a statistically significant effect on neurocognition. Compared to simple slope analyses, which describe quadratic effects based on how the effect of a predictor changes at different levels of that predictor, the J-N technique computes the full range of values for which the predictor slope is statistically significant. These boundaries are referred to as regions of significance. Region of significance analyses were computed using the jtools package in R statistical software. Considering that long-term heavy alcohol use may have ongoing neurocognitive effects, an additional exploratory analysis examined the association between lifetime history of AUD and alcohol abstinence using a Chi-square statistic. Finally, we explored the associations between HIV disease characteristics and global neurocognitive function using independent t-tests, and included any significant variables as covariates in the linear regression analysis stratified by HIV serostatus. These analyses were performed using JMP Pro version 14.0.0.

Demographic, psychiatric, substance use, alcohol use, HIV disease, and neurocognitive characteristics by HIV group are presented in Table 1. The PWH group was significantly younger, had a higher proportion of males, and had higher rates of current MDD and lifetime MDD than the HIV- group. With respect to recent alcohol consumption, PWH on average reported more drinks per drinking day and more drinking days within the last 30 days than HIV- individuals, yet the groups were comparable on all other alcohol use characteristics. With regard to neurocognition, in univariable comparisons PWH had significantly lower global function, verbal fluency, executive function, processing speed, working memory, and motor skills T-scores than the HIV- group. Results of linear regressions examining the interaction between the quadratic effect of total drinks and HIV status on neurocognitive outcomes are presented in Table 2. In these adjusted models, the interaction between the quadratic effect of total drinks and HIV status was significant for global function, executive function, learning, delayed recall, and motor skills. With respect to covariates, older age, a lifetime history of MDD, and a lifetime history of a non-alcohol substance use disorder were associated with worse neurocognitive performance across multiple domains. Follow-up analyses were conducted to examine the interaction between the linear effect of total drinks and HIV status on the neurocognitive outcomes that showed no significant or trend-level quadratic interaction term. Similar adjusted linear regression models revealed no significant interaction effects between total drinks and HIV status on domain-specific neurocognition.
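The published analyses used the jtools package; as a hedged illustration of what the J-N computation does for a quadratic term, the slope of the fitted curve and its standard error can also be evaluated directly across the observed range (all names hypothetical, continuing the sketch above):

```r
# Hedged illustration of the J-N logic for a quadratic (not the authors' code).
# Within one serostatus group, the slope of the fitted curve at a given level
# of drinks is b1 + 2*b2*drinks; regions of significance are where |slope/SE| > z.
fit_neg <- lm(global_T ~ drinks_c + I(drinks_c^2) + age + lifetime_aud + lifetime_sud,
              data = subset(dat, hiv_status == "HIV-"))
b <- coef(fit_neg); V <- vcov(fit_neg)
grid  <- seq(min(dat$drinks_c, na.rm = TRUE), max(dat$drinks_c, na.rm = TRUE),
             length.out = 200)
slope <- b["drinks_c"] + 2 * b["I(drinks_c^2)"] * grid
se    <- sqrt(V["drinks_c", "drinks_c"] +
              4 * grid^2 * V["I(drinks_c^2)", "I(drinks_c^2)"] +
              4 * grid   * V["drinks_c", "I(drinks_c^2)"])
sig <- abs(slope / se) > qnorm(0.975)
range(grid[sig & slope > 0])  # region where more drinks predict better scores
range(grid[sig & slope < 0])  # region where more drinks predict worse scores
```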
To further examine the independent effects of total drinks and HIV status on neurocognition, linear regression models examined the effects of HIV status, total drinks, and covariates from previous models on neurocognitive outcomes. In these adjusted models, HIV status was significantly associated with verbal fluency, processing speed, and working memory, such that PWH performed significantly worse than their HIV- counterparts. There were no detected effects of total drinks on domain-specific neurocognitive outcomes. Additional follow-up analyses on domains that revealed significant quadratic associations were stratified by HIV serostatus. Results exploring the associations between HIV disease characteristics and global neurocognitive function suggest a significant negative association between estimated duration of HIV disease and global neurocognitive function.

Therefore, estimated duration of disease was included as a covariate in the linear regression model for PWH. The number of total drinks was not associated with neurocognition in PWH. Estimated duration of disease approached significance for global function. In the HIV- group, results indicated significant quadratic effects of total drinks on global function, executive function, learning, delayed recall, and motor skills. We applied the J-N technique to inspect these significant changes in the slope of total drinks on neurocognition as a function of total drinks within the HIV- group. Total drinks demonstrated positive, statistically significant associations with neurocognition at the lower end of “low-risk” drinking. Conversely, total drinks demonstrated negative, statistically significant associations with neurocognition at the higher end of “low-risk” drinking. Although there was a significant quadratic association between total drinks and delayed recall, the negative slope did not reach statistical significance. Finally, to examine potential ongoing neurocognitive effects of lifetime AUD among alcohol abstainers, a Chi-square statistic was calculated. Results indicated no significant association between having a lifetime history of AUD and currently abstaining from alcohol, χ² = 1.11, p = .292.

Our study is among the first to examine the curvilinear association between recent “low-risk” alcohol consumption and neurocognition among persons with and without HIV. Among HIV- individuals, the association between low-risk drinking and neurocognition expectedly followed an inverted-J shaped pattern, with better neurocognition occurring at intermediate levels of “low-risk” drinking compared to alcohol abstinence and heavier consumption. Specifically, region of significance analyses indicated a positive slope of alcohol consumption on global neurocognitive function when the range of total drinks was zero to 18 drinks, whereas a negative slope emerged when the range of total drinks was 52 to 60 drinks, suggesting a potentially innocuous range between 18 and 52 drinks per month for HIV- individuals. This global effect was driven by abilities supported by frontal brain regions, where alcohol metabolism is thought to be particularly active. Additionally, consistent with our hypotheses, there was no quadratic association between level of low-risk alcohol consumption and neurocognition among PWH. This suggests the presence of other factors that may supersede the potentially beneficial neurocognitive effects of low-risk alcohol consumption in the context of HIV. For example, age was significantly associated with global function, executive function, learning, and delayed recall in PWH, despite the use of age-adjusted T-scores in analyses. Extant literature suggests that the inverted-J shaped association is not unique to neurocognition, which may point towards possible mechanisms underlying the neuroprotective effect of low-risk alcohol consumption. For example, evidence supports a cardioprotective effect of low-risk alcohol consumption, including a reduced risk of coronary heart disease, myocardial infarction, ischemic stroke, peripheral arterial disease, and all-cause mortality. Risk is higher among alcohol abstainers and when alcohol consumption is high, and lower when alcohol consumption is low.
Although our data do not directly measure pathways underlying a potential neuroprotective effect of low-risk alcohol consumption, including its specificity to HIV- adults, several plausible biopsychosocial mechanisms can be drawn from the extant literature. From a biological perspective, low-risk alcohol use has been linked to increased high-density lipoprotein levels and may carry antithrombotic, antioxidative, and anti-inflammatory effects that benefit the neurovascular unit. Additionally, alcohol may directly enhance learning and executive function via stimulation of acetylcholine in the prefrontal cortex and hippocampus.

A few studies have examined shifts in substance use during these two turning points

In our third and fourth sets of models, we adjusted for age and parenting variables, and for age and peer variables, respectively. In our fifth set of models, we adjusted for covariates that were significant in models 2-4. Third, we tested whether age modified the effect of our exposures by including a product term between exposure and each age spline. Significant effect measure modification was then probed to clarify how the association between psychiatric problems and substance use changed across the age splines. We conducted a sensitivity analysis to establish the directionality of the association between psychiatric problems and substance use. We thus estimated, with linear fixed effects models, the effect that changes in one-year-lagged substance use had on changes in psychiatric problem domains in the following year. We followed the same modeling strategy for these models as we did with our primary models. We adjusted for groups of confounders as described above, first adjusting for SES, psychiatric problem domains that were not modeled as the outcome, and measures of substance use that were not the exposure of interest. Next we adjusted for parenting variables and peer variables, respectively. Finally, we adjusted for covariates that were significant in any of the previous groups of confounder models. Covariates were lagged one year prior to the exposure measure to avoid blocking the causal pathway between substance use and psychiatric problems. Table 1 shows mean substance use and psychiatric problem counts over time, as well as demographic characteristics at baseline. The reports of particular informants in our psychiatric problem measures did not influence the associations between psychiatric problems and substance use. Table 2 displays the exponentiated coefficients and confidence intervals of quasi-Poisson models, which can be interpreted as rate ratios. Table 2 shows the rate of substance use associated with a one-unit within-subject change in lagged psychiatric problems.
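As a hedged illustration of this setup (the study's actual code is not given, and all names below are hypothetical), a within-person quasi-Poisson model with lagged exposures could be specified in R as:

```r
# Hedged sketch, not the study's code; names are hypothetical.
# Person fixed effects (factor(id)) isolate within-individual change; exposures
# and covariates are lagged one year; exponentiated coefficients are rate ratios.
fit <- glm(marijuana_freq ~ conduct_lag1 + affective_lag1 + anxiety_lag1 +
             alcohol_qty_lag1 + alcohol_freq_lag1 + ses_lag1 +
             factor(id) + factor(wave),
           family = quasipoisson(link = "log"), data = panel)
exp(coef(fit)["conduct_lag1"])  # rate ratio per one-unit within-person change
```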

Changes in lagged conduct problems were positively associated with changes in marijuana frequency. During years in which adolescents experienced a one-unit increase in conduct problems, the rate at which they smoked marijuana the following year increased 1.03 times (95% CI: 1.01, 1.05). For a standard deviation change in conduct problems, this is equivalent to a 1.15 times higher rate of marijuana use frequency. The magnitude of this association did not change appreciably after adjusting for potential confounders, including alcohol quantity and frequency, SES, affective and anxiety problems, parenting, and peer deviance. Changes in lagged conduct problems were also associated with changes in alcohol quantity, but only after adjusting for peer deviance. During years in which adolescents experienced a one-unit increase in conduct problems, the rate of their average alcohol consumption per occasion the following year increased by a factor of 1.01. For a standard deviation change in conduct problems, this is equivalent to a 1.05 times higher rate of alcohol use. Associations of all covariates with substance use are presented in Appendix C, Table C1. Table 3 presents results for tests of effect measure modification of the association between conduct problems and marijuana frequency and alcohol quantity by age. Because splines are polynomial functions, there is no simple quantitative interpretation of individual effect modification terms; however, the significance of the coefficients implies that the associations between lagged conduct problems and marijuana frequency, and between lagged conduct problems and alcohol quantity, differed by age. For ease of interpretation we present these results in Figure 1, which shows the predicted values of substance use outcomes associated with minimum, mean, and third-quartile levels of lagged conduct disorder T-scores, over time. Compared to minimal changes in lagged conduct problems, adolescents with mean or third-quartile levels of change in lagged conduct problems show markedly different marijuana frequency trajectories, which become the most disparate at ages 17-19.
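To make the spline interaction concrete, a hedged sketch of the effect-modification test (hypothetical names, continuing the model above) might be:

```r
# Hedged sketch, not the study's code. Age is modeled with natural cubic
# splines; products between the lagged exposure and the spline basis test
# whether the conduct-marijuana association differs by age.
library(splines)
fit_main <- glm(marijuana_freq ~ conduct_lag1 + ns(age, df = 3) + factor(id),
                family = quasipoisson(link = "log"), data = panel)
fit_emm  <- update(fit_main, . ~ . + conduct_lag1:ns(age, df = 3))
anova(fit_main, fit_emm, test = "F")  # joint test of the product terms
```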

Compared to minimal changes in lagged conduct problems, adolescents with mean or third-quartile levels of change in lagged conduct problems show higher alcohol quantity in early adolescence but lower alcohol quantity in later adolescence. The results of our sensitivity analysis are presented in Tables 4 and 5 and Figure 2. Table 4 displays the change in psychiatric problems associated with a one-unit change in lagged substance use in the prior year. There was one reverse association: while changes in lagged anxiety problems were not associated with changes in substance use, the opposite did occur: changes in lagged alcohol quantity in the past year were positively associated with changes in anxiety problems. During years in which adolescents experienced a one-unit increase in the average quantity of alcohol consumed when drinking, their anxiety problems T-score increased the following year by 0.12. For a standard deviation change in average alcohol quantity, this is equivalent to an anxiety T-score increase of 0.3. The magnitude of this association did not change appreciably after adjusting for potential confounders. Associations of all lagged covariates with psychiatric problems are presented in Appendix C, Table C2. Table 5 presents results for tests of effect measure modification of the association between lagged alcohol quantity and anxiety problems by age, and Figure 2 shows the predicted values of anxiety problem T-scores associated with minimum, mean, and third-quartile levels of lagged alcohol quantity, over time. Adolescents show a decline in anxiety problems throughout adolescence, and little difference by the magnitude of fluctuations in lagged alcohol quantity. However, deviations arose at ages 13-14 and 17-19, where those who exhibited a mean or third-quartile level of increase in lagged alcohol quantity showed slower declines in anxiety problems compared to those who did not increase alcohol intake over time.

This study focused on the longitudinal relationship between changes in psychiatric problems and changes in substance use one year later. However, the temporal resolution of this relationship may occur on a much shorter time frame; that is, changes in psychiatric problems may have immediate effects on substance use. To approximate effects on such a short time frame, we also examined the association between changes in psychiatric problems and contemporaneous changes in substance use. We followed the same modeling strategy as in our primary models, but adjusted for one-year-lagged versions of all covariates. Table 6 presents the rate of contemporaneous changes in substance use frequency associated with changes in psychiatric problem T-scores.

In fully adjusted models, within-person changes in the conduct problems T-score were associated with contemporaneous changes in marijuana frequency, alcohol frequency, and alcohol quantity. Within-person changes in the affective problems T-score were associated with contemporaneous changes in alcohol quantity. Associations of all covariates with substance use in the contemporaneous models are presented in Appendix C, Table C3.

This study examined whether adolescent males tend to escalate their substance use following an increase in their psychiatric problems, and identified periods during adolescence when such associations may be particularly strong. We found that when youth experienced an increase in conduct problems, they showed an increase in the frequency of marijuana use and the quantity of alcohol use in the following year. Fluctuations in conduct problems and affective problems may influence alcohol use on a shorter time scale: changes in conduct problems and affective problems were concurrently associated with changes in alcohol frequency and quantity, respectively, in the same year, but not in the subsequent year. The specific effect of conduct problems on substance use is consistent with the notion that conduct disorder problems and substance use constitute elements within a broader externalizing spectrum. Although numerous longitudinal studies have demonstrated that youth with psychiatric problems are at increased risk for using and abusing substances, few have examined whether adolescents tend to increase their substance use following periods when they experience an increase in their psychiatric problems. By focusing on within-individual change, we were able to rule out the possibility that selection effects and stable individual differences between youth with differing levels of psychiatric problems and substance use accounted for the observed association between psychiatric problems and substance use. Further, the use of an extensive set of measures of potential time-varying covariates allayed concerns that the associations were confounded by time-varying factors. The strength of the associations between conduct disorder problems and marijuana and alcohol use was relatively modest, suggesting that a substantial change in conduct problems would have to occur to produce a substantial within-individual change in substance use. This is consistent with prior studies that have tried to predict change over time in substance use. Substance use is shaped by multiple risk factors working together; hence, any one risk factor is likely to make a modest contribution to within-individual fluctuations in substance use. This study also examined the bidirectional nature of the association between psychiatric problems and substance use, and found evidence of a reverse effect of substance use on psychiatric problems. While increases in anxiety and depression did not result in increases in substance use, increases in the quantity of alcohol use did result in increases in anxiety problems. The effect of alcohol use on anxiety problems is consistent with prior studies that have found that substance use increases the risk for anxiety disorders. There are at least two possible explanations for this observed pattern.
First, substance use can increase exposure to economic and social problems that increase the risk for anxiety, including crime, unemployment, loss of income, and relationship problems. Second, substance use can cause neurochemical changes that increase vulnerability to an anxiety disorder.

The effect of conduct disorder problem fluctuations on quantity of alcohol use was strongest in early adolescence, while the effect of conduct disorder changes on marijuana use was strongest in late adolescence. At the same time, the effect of quantity of alcohol use on anxiety was strongest in early and late adolescence. Two points are worth noting about this pattern. First, life transitions such as the shift from middle school to high school in early adolescence and the shift from high school to college in late adolescence may escalate existing challenges produced by fluctuations in psychiatric problems or substance use. For example, Jackson et al. found that the prevalence of heavy drinkers more than doubled in the transition to high school and that this change was especially pronounced for youth with more problem behaviors. Studies of the transition from adolescence to young adulthood have also found that post-secondary school attendance predicted higher rates of substance use, and that the relationship between conduct problems and substance use was stronger in late adolescence than in middle adolescence. Pronounced effects of psychiatric problem and substance use fluctuations at times of transition would be consistent with an accentuation model, whereby the stress of the transition and the demands of the new context reduce contextual limitations on individual proclivities, potentially allowing fluctuations in psychiatric problems to have a stronger effect on substance use, and vice versa. Second, the larger effect of conduct disorder on alcohol use at earlier ages and on marijuana use at later ages may reflect the developmental timing of these two substances. Drinking starts in early to mid-adolescence; hence, fluctuations in conduct problems in early adolescence may lead to involvement with alcohol, the drug most easily available in families and peer groups. In contrast, marijuana use typically starts in mid- to late adolescence, so the influence of conduct problems on marijuana use may increase as access to marijuana becomes easier at later ages.

The study findings should be taken in light of the following limitations. First, all participants in the Pittsburgh Youth Study are male; hence, we could not examine the relationship between psychopathology and substance use quantity and frequency among girls. Second, all participants were selected from Pittsburgh public schools, which potentially limits the generalizability of the findings beyond this area. Third, half of the sample was composed of high-risk boys: this limited our ability to make inferences about the general population, but also provided us with greater power to detect an association between fluctuations in psychiatric problems and substance use. Fourth, while we examined measures of psychiatric problems that are consistent with DSM diagnoses, these measures did not explicitly measure diagnostic criteria for DSM disorders. Grouping symptoms into “affective”, “anxiety”, and “conduct” problem categories might merge individual disorders that strongly predict substance use with non-predictors, leading to an underestimate of the association between psychiatric problems and substance use. However, it is increasingly recognized that psychiatric problems are best conceptualized as falling on a continuum of severity rather than representing a discrete taxon. Fifth, a low base rate prevented us from examining the predictors of fluctuation in the level of use of other illicit drugs.
Sixth, the prevalence of marijuana use has increased since the completion of this study.

Cognition is a key area of research in the field of alcohol use disorders

We examined factors associated with utilization as conceptualized by the Andersen model of healthcare utilization, which proposes that utilization is determined by predisposing, enabling, and need factors. We hypothesized that psychiatric comorbidity would be associated with greater use of health services, and that members with higher deductibles would be less likely to initiate SUD and psychiatry treatment but would have higher emergency department (ED) and inpatient utilization than those without deductibles. Consistent with earlier studies, which indicate that SUD diagnosis is often precipitated by a critical event such as an ED visit, we expected that post-diagnosis utilization would be highest in the period immediately following diagnosis but would likely decrease over time, although trajectories would vary by type of utilization. Knowing how these factors are associated with use of healthcare can be highly informative to future healthcare reform and behavioral health services research. Kaiser Permanente Northern California (KPNC) is an integrated healthcare system serving approximately 4 million members. The membership is racially and socioeconomically diverse and representative of the demographics of the geographic area. SUD treatment is provided in specialty clinics within KPNC, which patients can access directly without a referral. The group-based treatment model is similar to outpatient treatment programs nationwide. Treatment sessions take place daily or four times a week, depending on severity, for nine weeks. Treatment in psychiatry includes assessment, individual and group psychotherapy, and medication management. KPNC is not contracted to provide SUD care or intensive psychiatry treatment for Medicaid patients, and those patients are referred to county providers. The University of California, San Francisco and Kaiser Permanente Northern California Institutional Review Boards approved the study and approved a waiver of informed consent.

We identified common chronic medical conditions, many of which are known to be associated with SUDs, using ICD-9/10 codes recorded within the first year after initial enrollment. Conditions included asthma, atherosclerosis, atrial fibrillation, chronic kidney disease, chronic liver disease, chronic obstructive pulmonary disease, coronary disease, diabetes mellitus, dementia, epilepsy, gastroesophageal reflux, heart failure, hyperlipidemia, hypertension, migraine, osteoarthritis, osteoporosis and osteopenia, Parkinson’s disease or syndrome, peptic ulcer, and rheumatoid arthritis. Patients with chronic medical conditions utilize more health services than patients without such conditions, which may influence their decision to choose a plan with a lower deductible if given an option, so we included this covariate to control for confounding. Deductibles are features across different benefit plans, including commercial plans, but are more common in ACA benefit plans. The individual deductible limit is the amount the individual must pay out-of-pocket for health expenses before becoming eligible for health plan benefits. At KPNC, there are many types of benefit plans that include deductibles. Patients with deductible plans that do not include SUD as a covered benefit are responsible for bearing the cost of those services until their deductible is reached, and/or the accumulating cost of copays for multiple visits as part of the SUD care model. We did not include type of insurance as a covariate due to its collinearity with deductible limits and enrollment via the ACA exchange. We summarized utilization data into 6-month intervals, and we examined trends in health service utilization over the 36 months after patients received an SUD diagnosis with Chi-squared tests using 6-month intervals. Using multi-variable logistic regression, we examined associations between deductible limits, enrollment via the California ACA Exchange, membership duration, and psychiatric comorbidity, and the likelihood of utilizing health services in the 36-month follow-up period, controlling for patient demographic characteristics and chronic medical comorbidity.

We also evaluated whether enrollment via the California ACA Exchange moderated the associations between deductible limits and the likelihood of utilization by adding interaction terms to the multi-variable models. We estimated the associations with deductible limits for each enrollment mechanism by constructing hypothesis tests and confidence intervals on linear combinations of the regression coefficients from these models. To account for correlation between repeated measures, we used the generalized estimating equations (GEE) methodology. We censored patients at a given 6-month interval if they were not a member of KPNC during that time. We conducted a sensitivity analysis to determine whether high utilizers leaving the health system influenced the observed pattern of decreased utilization from the 0–6 month to the 6–12 month follow-up periods. Using Chi-squared tests, we compared utilization during the 0–6 month period between patients who remained in the cohort and patients who disenrolled from KPNC at 6–12 months. We hypothesized that if the censored group had greater utilization than the non-censored group, then there would be evidence of high utilizers leaving the health system. We also conducted Chi-squared tests to determine whether censorship was associated with deductible limits and enrollment mechanisms. We conducted all analyses using SAS v9.4 and assessed significance at a p-value < .05.

This study examined longitudinal patterns of healthcare utilization among SUD patients and their relationships to key aspects of ACA benefit plans, including enrollment mechanisms and deductible levels. We anticipated that the increase in coverage opportunities that the ACA provided would bring high-utilizing patients into health systems, driving up overall use of healthcare. Consistent with prior studies of SUD treatment samples that have found elevated levels of healthcare utilization either immediately before or after starting SUD treatment, results of our longitudinal analysis showed that utilization among people with SUDs was highest immediately after initial SUD diagnosis at KPNC, and declined to a stable level in subsequent years.
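The analyses were run in SAS; as a hedged R illustration of the repeated-measures logistic GEE described above (all names hypothetical):

```r
# Hedged R illustration; the study used SAS, and all names here are hypothetical.
# Repeated 6-month indicators of service use, with an exchange-by-deductible
# interaction and GEE to handle within-patient correlation.
library(geepack)
fit <- geeglm(used_service ~ deductible_level * aca_exchange +
                membership_duration + psych_comorbidity + interval +
                age + sex + race_ethnicity + chronic_conditions,
              id = patient_id, family = binomial("logit"),
              corstr = "exchangeable", data = util_long)
summary(fit)  # interaction terms test moderation by enrollment mechanism
```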

This suggests that the initial high utilization may be temporary. Our sensitivity analysis suggested that this result was not due to high utilizers leaving the KPNC healthcare system. This overall trend in utilization is a welcome finding, and consistent with the intent of the ACA to increase access to care; however, the subsequent decrease in utilization could also signify that patients are disengaging from treatment. Although we cannot specifically attribute the initial levels of utilization to lack of prior insurance coverage, as we did not have data on prior coverage, we found that individuals with fewer than 6 months of membership before receiving an SUD diagnosis were more likely to utilize primary care and specialty SUD treatment than those who had 6–12 months of membership. This suggests that future healthcare reforms that expand insurance coverage for people with SUDs might also lead to short-term increases in utilization for a range of health services.

Deductibles are a key area of health policy interest given the growing number of people enrolling in deductible plans post-ACA. As anticipated, higher deductibles had a generally negative association with utilizing healthcare in this population. We found that patients with high deductibles had lower odds of using primary care, psychiatry, inpatient, and ED services than those without deductibles. Additionally, we found that the associations between high deductibles and the likelihood of utilizing primary care and psychiatry were strongly negative among ACA Exchange enrollees. Although it is somewhat difficult to gauge the clinical significance of these specific results, the strength of the odds ratios for primary care and psychiatry access gives some indication of the potential impact. The associations of high deductibles with primary care and psychiatry access are worrying given the extent of medical and psychiatric comorbidities among people with SUDs. Although we found more consistent associations for higher deductibles and less healthcare initiation, it is possible that even a modest deductible could deter patients from seeking treatment. From a public policy and health system perspective, the possibility that deductibles could prevent people with SUDs from accessing any needed medical care is a cause for concern. Consistent with prior findings, our results suggest that high deductibles have the potential to dissuade SUD patients from accessing needed health services, and that those who enroll via the ACA Exchange may be more sensitive to them. This could be attributable to greater awareness of coverage terms due to the mandate that exchange websites offer clear, plain-language explanations to compare insurance options. In contrast, high deductibles were associated with a greater relative likelihood of SUD treatment utilization. However, this association existed only among patients who enrolled via mechanisms other than the ACA Exchange. It is possible that individuals with emerging or unrecognized substance use problems may have selected higher-deductible plans at enrollment due to either not anticipating use of SUD treatment, which is often more price-sensitive relative to other medical care, or not being aware of the implications of deductibles. However, once engaged in treatment, individuals with high deductibles may have been motivated to remain there.
A contributing factor could also be that such patients were required to remain in treatment by employer or court mandates, which are common and are associated with retention. The varying associations between deductibles and different types of health service utilization by enrollment mechanism highlight the need for future research in this area. Insurance exchanges provide access to tax credits, a broader range of coverage levels, and information to assist in healthcare planning that might be less easily accessible through other sources of coverage, e.g., through employers. In our sample, Exchange enrollment was associated with a greater likelihood of remaining a member of KPNC, did not demonstrate an adverse association with routine care, and was associated with lower ED use.

However, primary care and psychiatric service use was similar across enrollment types, even within low and high deductible limits. Prior studies have found that health plans offered through the ACA Exchange are more likely to have narrow behavioral health networks compared to other non-Exchange plans and primary care networks, which raises concerns about treatment access. For this health system, that concern appears unfounded. Psychiatric comorbidity was associated with greater service use of all types. Several prior studies have also found that patients with psychiatric comorbidity use more health services than those with SUD alone. Similar to our results, a recent study based in California found that after controlling for patient-level characteristics, the strongest predictors of frequent ED use post-ACA included having a diagnosis of a psychiatric disorder or an SUD. While the ACA was not expected to alter this general pattern, the inclusion of mental health treatment as an essential benefit was intended to improve availability of care and to contribute to efforts to reduce unnecessary service utilization. Our investigation confirms the ongoing importance post-ACA of psychiatric comorbidity and suggests that future efforts in behavioral health reform must anticipate high demand for healthcare in this vulnerable clinical population. It is also worth noting that nonwhite patients were less likely to initiate SUD and psychiatry treatment. Race/ethnic disparities in access to care are a longstanding concern in the addiction field. Some expected these disparities to be mitigated post-ACA. Findings on race/ethnic differences are similar to what has been observed in other health systems, although few studies have examined associations post-ACA. One prior study among young adults with SUD and psychiatric conditions post-ACA found modest ethnic disparities in lack of coverage between whites and other ethnic groups, although another study of young people more broadly found larger gains in coverage among Hispanics and Blacks relative to whites. The race/ethnic disparities in SUD and psychiatry treatment initiation in this cohort, in which overall insurance coverage was not a barrier but specific mechanisms could be, highlight the importance of addressing this complicated challenge to health equity.

Alcohol use disorders are a major public health problem and constitute the most prevalent forms of addiction in veterans. Cognitive impairment is well-documented in individuals with alcohol use disorders, and alcohol-related clinical outcomes are moderated by a range of cognitive impairments. Cognition plays an important role in clinical outcomes, yet recognizing and screening for cognitive impairment in addiction populations remains uncertain and difficult. A comprehensive neurocognitive evaluation may not be routinely feasible in addiction settings, as these evaluations are often time-intensive and resource-consuming. When managing veterans with alcohol use disorders, quicker adjunctive tools that clinicians could use to screen for those individuals at higher risk of cognitive impairment are needed. One potential tool that may fulfill this role is the alcohol use biomarker. Alcohol use biomarkers are broadly divided into indirect and direct biomarkers. The indirect biomarkers include aspartate aminotransferase (AST), alanine aminotransferase (ALT), mean corpuscular volume (MCV), γ-glutamyltransferase (GGT), and carbohydrate-deficient transferrin (CDT).

The risk pathway from anhedonia to marijuana use may be incremental to risk of other drug use

A secondary aim was to test whether these putative risk pathways were amplified or suppressed among pertinent sub-populations and contexts. Associations of affective disturbance and other risk factors with adolescent substance use escalation have been reported to be amplified among girls, early-onset substance users, and those with substance-using peers. We therefore tested whether associations between anhedonia and marijuana use were moderated by gender, history of marijuana use prior to the study surveillance period at baseline, and peer marijuana use at baseline. To characterize trajectories of anhedonia and marijuana use across time, latent growth curve modeling was applied to estimate a baseline level and linear slope for both anhedonia and marijuana use. Univariate latent growth curve models were first fitted for marijuana use and anhedonia separately to determine the shape and variance of trajectories. A two-process parallel latent growth curve model was then fitted, which simultaneously included growth factors for anhedonia and marijuana use after adjusting for the covariates listed above and including within-construct level-to-slope associations. The parallel process model was constructed to test: (a) bidirectional longitudinal associations, by including directional paths from baseline anhedonia level to marijuana use slope as well as from baseline marijuana use level to anhedonia slope; and (b) non-directional correlations between baseline levels of anhedonia and marijuana use and between anhedonia slope and marijuana use slope. Significant directional longitudinal paths between anhedonia and marijuana use in the overall sample were subsequently tested in moderation analyses of differences in the strength of paths across sub-samples stratified by moderator status, using a multi-group analysis.
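The study fitted these models in Mplus (see below); purely as an illustration, an analogous parallel-process specification in R's lavaan might look like the following, with hypothetical variable names, an assumed four-wave layout, covariates omitted, and marijuana use treated as continuous for brevity:

```r
# Illustrative lavaan analogue of the parallel-process growth model described
# above; the study itself used Mplus. Variable names and the four-wave layout
# are assumptions, and covariates are omitted for brevity.
library(lavaan)
model <- '
  # growth factors: intercepts (i) and linear slopes (s) for each process
  i_anh =~ 1*anh1 + 1*anh2 + 1*anh3 + 1*anh4
  s_anh =~ 0*anh1 + 1*anh2 + 2*anh3 + 3*anh4
  i_mj  =~ 1*mj1  + 1*mj2  + 1*mj3  + 1*mj4
  s_mj  =~ 0*mj1  + 1*mj2  + 2*mj3  + 3*mj4
  # directional cross-process paths (baseline level -> other process slope)
  s_mj  ~ i_anh
  s_anh ~ i_mj
  # non-directional associations between baseline levels and between slopes
  i_anh ~~ i_mj
  s_anh ~~ s_mj
'
fit <- growth(model, data = dat, estimator = "MLR", missing = "fiml")
summary(fit, standardized = TRUE, ci = TRUE)
```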

Analyses were performed using Mplus with the complex analysis function to adjust parameter standard errors for clustering of the data by school. To address item- and wave-level missing data, full information maximum likelihood estimation with robust standard errors was applied. Continuous and categorical ordinal scaled outcomes were used for anhedonia and marijuana use, respectively. The Akaike information criterion and the Bayesian information criterion were used to gauge model fit, with lower values representing better-fitting models. For moderator analyses, χ2 differences were calculated using log-likelihood values and the number of free parameters, contrasting model fit with and without equality constraints on the anhedonia–marijuana use path of interest across groups stratified by the moderator variable. Standardized parameter estimates and 95% confidence intervals are reported. Significance was set at α = 0.05.

Youth with higher levels of anhedonia at baseline were at increased risk of marijuana use escalation during early adolescence in this study. In addition, levels of anhedonia and marijuana use reported at the beginning of high school were associated cross-sectionally with each other. To the best of our knowledge, the only prior study on this topic found higher levels of anhedonia in 32 treatment-seeking marijuana users than in 30 healthy controls in a cross-sectional analysis of French 14–20-year-olds that did not adjust for confounders. The current data provide new evidence elucidating the nature and direction of this association in a large community-based sample, which advances a literature that has addressed the role of anhedonia predominantly in adult samples. The association of baseline anhedonia with marijuana use escalation was observed after adjustment for numerous possible confounders, including demographic variables, symptom levels of three psychiatric syndromes linked previously with anhedonia, and alcohol and tobacco use. Consequently, it is unlikely that anhedonia is merely a marker of these other psychopathological sources of marijuana use risk or a non-specific proclivity to any type of substance use.

The temporal ordering of anhedonia relative to marijuana use was addressed by the overarching bidirectional modeling strategy, which showed evidence of one direction of association and not the other. Ordering was confirmed further in moderator tests showing that the association of anhedonia with subsequent marijuana use did not differ by baseline history of marijuana use. Thus, differences in risk of marijuana use between adolescents with higher anhedonia may be observed in cases when anhedonia precedes the onset of marijuana use. Why might anhedonia be associated uniquely with subsequent risk of marijuana use escalation in early adolescence? Anhedonic individuals require a higher threshold of reward stimulation to generate an affective response and therefore may be particularly motivated to seek out pharmacological rewards to satisfy the basic drive to experience pleasure, as evidenced by prior work linking anhedonia to subsequent tobacco smoking escalation. Among the three most commonly used drugs of abuse in youth, marijuana may possess the most robust mood-altering psychoactive effects in young adolescents. Consequently, marijuana may have unique appeal for anhedonic youth driven to experience pleasure that they may otherwise be unable to derive easily via typical non-drug rewards. The study results may open new opportunities for marijuana use prevention. Brief measures of anhedonia that have been validated in youth, such as the SHAPS scale used here, may be useful for identifying at-risk teens who may benefit from interventions. If anhedonia is ultimately deemed a causal risk factor, targeting anhedonia may prove useful in marijuana use prevention. Interventions promoting youth engagement in healthy alternative rewarding behaviors without resorting to drug use have shown promise in prevention, and could be useful for offsetting anhedonia-related risk of marijuana use uptake. Moderator results raise several potential scientific and practical implications.

The association was stronger among adolescents with friends who used marijuana, suggesting that expression of a proclivity to marijuana use may be amplified among teens in environments in which marijuana is easily accessible and socially normative. The association of anhedonia with marijuana use escalation did not differ by gender or baseline history of marijuana use. Thus, preventive interventions that address anhedonia may benefit both boys and girls, aid in disrupting risk of onset as well as progression of marijuana use following initiation, and be particularly valuable for teens in high-risk social environments. While anhedonia increased linearly over the first 2 years of high school on average, the rate of change in anhedonia was not associated with baseline marijuana use or changes in marijuana use across time. Given that anhedonia is a manifestation of deficient reward activity, this finding is discordant with pre-clinical evidence of THC-induced dampening of brain reward activity and with prior adult observational data showing that heavy or problematic marijuana use is associated with subsequent anhedonia and diminished brain reward region activity during reward anticipation. Perhaps the typical level and chronicity of exposure to marijuana use in this general sample of high school students was insufficient for detecting cannabinoid-induced manifestations of reward deficiency. Longer periods of follow-up may be needed to determine the extent of marijuana exposure at which cannabinoid-induced reward functioning impairment and resultant psychopathological sequelae may arise. Strengths of this study include the large and demographically diverse sample, repeated-measures follow-up over a key developmental period, modeling of multi-directional associations, rigorous adjustment for potential confounders, high participation and retention rates, and moderator tests to elucidate the generalizability of the associations. Future work in which inclusion of biomarkers and objective measures is feasible may prove useful. Prevalence of heavy marijuana use was low in this sample, which precluded examination of clinical outcomes such as marijuana use disorder. Students who completed the final follow-up had lower baseline marijuana use and anhedonia, which might affect representativeness. Further evaluation of the impact of family history of mental health or substance use problems, as well as use of other illicit substances, which was not addressed here, is warranted.

Disturbed sleep is increasingly investigated as one of the most promising modifiable risk markers for psychotic disorders. It is a widely reported symptom that already tends to manifest in individuals at clinical high risk (CHR). Clinician- and self-described sleep reports in CHR studies are congruent with data derived from objective measures such as polysomnography, actigraphy, magnetic resonance imaging, and sleep electroencephalograms, emphasizing disturbed sleep not only as a prominent phenotype of psychotic illness, but as a potentially important biomarker. Yet, in the existing literature exploring sleep disturbance prior to overt psychosis onset, several important issues have remained unaddressed. First, while abnormal sleep patterns are known to manifest prior to conversion to psychosis, there is a paucity of evidence regarding the extent to which disturbed sleep independently contributes to conversion risk.

Overall, studies have shown that psychosis-risk groups experience a considerable amount of sleep disturbance; however, the two notable attempts to use CHR sleep patterns to predict conversion were limited by the cross-sectional nature of their sleep data and did not find a relationship. Second, although it has been suggested that disturbed sleep is associated with CHR symptoms, the few studies that have explored the specificity of the subclinical psychotic symptoms most altered by sleep have been inconsistent. For example, in CHR youth, certain actigraphic measures of sleep were associated with positive symptoms but none were associated with negative symptoms, and in another study, the Structured Interview for Psychosis-risk Syndromes sleep disturbance score was associated with the discrete positive attenuated symptoms of suspiciousness/persecutory ideas, perceptual abnormalities/hallucinations, and disorganized communication. However, in a third CHR study, several sleep variables assessed by the Pittsburgh Sleep Quality Index (PSQI) were associated with more severe negative symptoms and none with positive symptoms. As such, investigations have been discrepant, and it is unclear whether the observed associations remain stable over time. Expanding upon studies that assessed sleep cross-sectionally, here we examine associations between sleep and CHR symptoms at multiple time points in at-risk cases and controls, a design used in only a few prior studies. Third, it has not yet been established which sleep characteristics are most implicated in CHR symptomatology. In contrast to the studies that used non-specific sleep disturbance severity scales, the few studies that have examined the individual components of disturbed sleep in relation to CHR symptom changes revealed more specific relationships. For example, polysomnography-measured sleep in CHR individuals showed longer sleep latency and longer REM-onset latency relative to controls. In another study, decreased bilateral thalamus volume was found in CHR youth compared to controls, which was associated with greater latency, reduced efficiency, decreased quality, and increased overall sleep dysfunction score on the PSQI. The multidimensional nature of sleep may in part explain the variety of sleep risk factors described in the literature. Still, replication in large samples is required to identify the sleep characteristics most strongly related to clinical symptomatology.
Fourth, it is unclear whether sleep affects symptom severity directly, or whether the association is influenced by factors such as cognitive deficits, stress, depression, and use of psychotropic medication, all of which have been associated with sleep disruption as well as with prodromal symptomatology. Cognitive deficits, a key aspect of psychotic disorders, are already evident in the prodromal period and can be exacerbated by sleep difficulties. Stress, which tends to be higher in individuals at CHR compared to controls, negatively impacts sleep quality, while restricted sleep can provoke stress, as shown by activity of neuroendocrine stress systems. Depression, prevalent in CHR and in early phases of psychosis, is also closely linked to sleep disturbances, such that both insomnia and hypersomnia are common symptoms as well as diagnostic criteria of the disorder. Some psychotropic medications may cause sedation or stimulation and thus were also explored. Finally, to ascertain the feasibility of investigating sleep as a target for symptom amelioration, it is critical to determine the direction of the association between sleep and CHR symptoms. Some promising evidence includes bidirectional relationships between poor sleep and paranoia, and poor sleep more strongly predicting hallucinations than the other way around, in samples with non-affective psychotic disorders and high psychosis proneness. Certain actigraphy-measured circadian disturbances have predicted greater positive symptoms at one-year follow-up in CHR youth, and in a general population study, 24 h of sleep deprivation induced psychotic-like experiences and showed a modest association with impaired performance on an oculomotor task commonly affected in schizophrenia. The current analysis expands upon existing work as the first to examine longitudinal bidirectional relationships between discrete sleep characteristics, CHR symptoms, and conversion status in a large sample of CHR participants and non-clinical controls. Data were leveraged from the North American Prodrome Longitudinal Study (NAPLS-3), in which participants were prospectively tracked for two years. Specifically, we assessed: whether baseline sleep patterns predict conversion to psychosis; group differences between converters, CHR non-converters, and controls in the associations between sleep trends and CHR symptoms; the specificity of the individual CHR symptom domains affected by sleep disturbance; which particular sleep items are most implicated in CHR symptom changes; cognitive impairment, daily life stressors, depression, and psychotropic medication as potential attenuating factors in the association between sleep and CHR symptoms; and the directionality of associations over time.

In this longitudinal study, individuals at CHR for developing psychosis reported ample sleep disturbances over the study period.

Participants reported relative stress during SIP compared to their own previous stress level

Gulbas and colleagues identify a series of factors relevant to both NSSI and suicide that correspond to features we found among the SWYEPT participants, including family fragmentation, conflict, physical and sexual abuse, and domestic violence. The relationships among these factors are complex and are found cross-culturally, though they tend to be more severe with suicide than with NSSI. Logistic regressions examined differences by mid-SIP PA in the likelihood of increased stress at mid-SIP and use of each stress management strategy at mid-SIP, adjusting for age, race, education, income, employment, and past-month alcohol use. Chi-square tests examined the association between mid-SIP stress and PA pattern, and between mid-SIP stress and mid-SIP use of stress management strategies. P-values < 0.05 were considered statistically significant. Managing stress while complying with the uniquely disruptive COVID-19 SIP restrictions may require a variety of stress management strategies. In a sample of adults mostly residing in Northern California, we examined relationships between stress, physical activity, and other stress management strategies during SIP. Participants who were physically active during SIP were less likely to feel increased stress during SIP and were more likely to report use of physically active stress management strategies. Additionally, physically active participants were less likely to report managing stress by sleeping more or eating more. Participants who reported managing stress using outdoor PA, indoor PA, and reading were less likely to feel increased stress during SIP. Those who managed stress by watching TV/movies, sleeping more, and eating more were more likely to feel increased stress. The association between greater PA and lower stress was consistent with hypotheses and with the extensive literature on the positive effects of PA on stress reduction in non-COVID contexts.
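As an illustration of the analytic approach described above, a minimal sketch of a covariate-adjusted logistic regression and an accompanying chi-square test is given below in Python. The data file and column names are hypothetical; the published analysis may have used different software and variable codings.

```python
# Hedged sketch: covariate-adjusted logistic regression of increased
# mid-SIP stress on physical activity, plus a chi-square test of the
# stress-by-PA-pattern association. All file and column names are
# hypothetical illustrations, not the study's actual variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

df = pd.read_csv("sip_survey.csv")  # hypothetical survey extract

# Outcome: increased_stress (1 = reported increased stress at mid-SIP).
# Exposure: active (1 = met PA guidelines at mid-SIP), with covariates.
model = smf.logit(
    "increased_stress ~ active + age + C(race) + C(education)"
    " + C(income) + C(employed) + past_month_alcohol",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)

# Chi-square test: mid-SIP stress vs. PA pattern (e.g., stayed active,
# became active, became less active, stayed less active).
table = pd.crosstab(df["increased_stress"], df["pa_pattern"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```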

Engaging in PA may have significantly reduced stress incurred by COVID-19. Alternatively, participants with fewer stressors may have found it easier to be physically active. In this study, participants meeting PA guidelines were older, more likely to be White and to drink alcohol, had greater educational attainment and higher household income, and were less likely to be employed. These participants may represent a subset of adults with greater resources and fewer demands on their time during SIP, leading to lower stress and increased ability to engage in PA. Nonetheless, the association between PA and stress remained statistically significant after accounting for age, race, past-month alcohol use, education, household income, and employment status. Engaging in PA may have contributed to stress management, even for participants who already had many advantages. This study suggests that the well-documented positive effects of PA on stress management persist even in the highly unusual circumstances of SIP. Active and less active participants also differed in the stress management strategies they employed. A majority of active participants reported that they used PA, especially outdoor PA, to manage stress. Active participants were four times more likely than less active participants to report managing stress using outdoor PA. Active participants were also more likely to report use of indoor PA; yoga, meditation, or prayer; gardening; and reading. Most of these activities involve a physical activity component. Additionally, physically active participants were less likely to cope with stress by eating more or sleeping more. Disruptions in diet are common during stressful times. Similar to the present study, a study of Belgian university students found students with more stress and less physical activity were at greatest risk for increased snacking during a stressful final exam period. COVID-19 SIP is a more widespread, disruptive, long-term stressful circumstance than a final exam period, yet similar results were found. Sleep disruptions have also been linked to stress during COVID-19 self-isolation. Indeed, in the current study, participants who managed stress by eating more, sleeping more, or watching TV/movies were more likely to report increased stress. Eating, sleeping, and watching TV/movies may have been used to manage stress by participants who were already experiencing a great deal of stress.

These activities require less energy to initiate than the more active strategies and may have felt more manageable. Concurrently, these less active strategies may have been less effective than strategies involving physical activity. Participants who coped with stress using PA or reading were less likely to report increased stress. Making PA, especially outdoor PA, more accessible during COVID-19 SIP may help ease stress. Recent changes in SIP policies in the San Francisco Bay Area have opened up local parks and activity areas. Overall level of PA during SIP, rather than change in PA, was associated with stress. Specifically, participants who became active or became less active during SIP did not significantly differ in likelihood of increased stress from those who were active throughout SIP. On the other hand, those who were less active both before and during SIP were more likely to experience increased stress. Low physical activity may be associated with other risk factors for stress, such as long work hours, that persisted during SIP. The study period was short and may not have been sufficient to show long-term associations. Other research has found that improvement in stress management over time is associated with increases in PA. As people adjust to COVID-19 and its associated restrictions, stress management and PA may improve. Although PA remained fairly consistent over the one-month study period, the proportion of participants reporting increased stress during SIP decreased substantially. Engaging in PA throughout SIP may further decrease stress. Stress management is crucial during COVID-19, as stress can increase susceptibility to viral infection. This study was observational and precludes causal conclusions about the role of PA in reducing stress. Analyses adjusted for numerous potential confounding factors; however, analyses were correlational. Generalizability of results is limited due to the non-representative sample. Most participants resided in Northern California, where the weather is generally conducive to outdoor PA year-round. The sample was predominantly middle-aged, female, White or Asian, and highly educated, with high household incomes. Although PA has near universal benefits, disparities in the ability to engage in PA during COVID-19 are likely. To our knowledge, such disparities have not yet been studied. Future research is needed to examine the role of PA in COVID-19 stress management among more socio-demographically and geographically diverse populations.

Participants were surveyed at the beginning of SIP and one month into SIP. Longer follow-up may show different patterns of results. The measure of stress used in this study was designed to capture changes in stress specific to SIP in a single item, with high face validity. Validated measures of stress, while less specific to SIP, should be used in future longitudinal research to expand upon the present study. Copper is toxic to life at levels that vary depending on the organism. Humans are mandated to not exceed 1–2 mg/L copper in their drinking water, while some freshwater animals and plants experience acute toxic effects at concentrations as low as 10 µg/L. Because the human food chain begins with plants, it is critical to understand how plants tolerate heavy metals including copper, which is frequently concentrated in soils as a result of pesticide application, sewage sludge deposition, mining, smelting, and industrial activities. This issue is also at the crux of applying phytoremediation approaches, which use green plants to decontaminate or contain polluted soils and sediments and to purify waste waters and landfill leachates. Metal-tolerant plants inhibit incorporation of excess metal into photosynthetic tissue by restricting transport across the root endodermis and by storage in the root cortex. In contrast, hyperaccumulating plants extract metals from soils and concentrate excess amounts in harvestable parts such as leaves. Copper detoxification seems to be linked to mechanisms that bind Cu to molecular thiol groups. Cysteine-rich peptides, such as phytochelatins, which transport copper to the shoot, increase in response to high cellular levels of Cu, and Cu-S binding occurs in roots and leaves of Larrea tridentata. However, an unidentified copper species, concentrated in electron-dense granules on cell walls and some vacuole membranes, appears to be the main morphological form of copper sequestered in Oryza sativa, Cannabis sativa, Allium sativum, and Astragalus sinicus. Plants take in and exclude elements largely at the soil-root interface within the rhizosphere, i.e. the volume of soil influenced by roots, mycorrhizal fungi, and bacterial communities. Deciphering processes that control the bio-availability of metals in the field is difficult because the rhizosphere is compositionally and structurally complex. Here we report on using synchrotron-based microanalytical and imaging tools to resolve processes by which metal-tolerant plants defend themselves against excess cationic copper. We have mapped the distribution of copper in self-standing thin sections of unperturbed soils using micro-X-ray fluorescence and identified structural forms of copper at points-of-interest (POIs) using micro-extended X-ray absorption fine structure (EXAFS) spectroscopy and X-ray diffraction. Because only a few small areas could be analyzed in reasonable times with microanalyses, the uniqueness of the microanalytical results was tested by recording the bulk EXAFS spectrum from a sample representing the entire rhizosphere and by simulating this spectrum by linear combination of copper species spectra from POIs. We investigated copper speciation in rhizospheres of Phragmites australis and Iris pseudoacorus, two widespread wetland species with high tolerances to heavy metals. P. australis is frequently used to treat waste waters because it can store heavy metals as weakly soluble or insoluble forms.
Its roots can be enriched in Cu 5–60 times relative to leaves, with large differences among ecotypes and between field-grown versus hydroponically grown plants. To take into account natural complexity, including any influence of bacteria, fungi, or climate variation, our experiment was conducted outdoors, rather than in a greenhouse on seedlings using ex-solum pots or hydroponic growth methods.
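The linear combination fitting step described above, in which the bulk EXAFS spectrum is simulated as a weighted sum of copper species spectra from POIs, amounts to a small constrained least-squares problem. The sketch below is a hypothetical illustration assuming the spectra have already been aligned on a common energy grid; the file names and array shapes are invented for the example.

```python
# Hedged sketch: fit a bulk EXAFS spectrum as a non-negative linear
# combination of copper reference spectra taken at points-of-interest.
# File names and data layout are hypothetical.
import numpy as np
from scipy.optimize import nnls

# Each row of refs: one reference spectrum sampled on a common k-grid.
refs = np.loadtxt("poi_reference_spectra.txt")   # shape (n_refs, n_points)
bulk = np.loadtxt("bulk_rhizosphere_exafs.txt")  # shape (n_points,)

# Solve min ||A w - b|| subject to w >= 0, columns of A = references.
weights, residual = nnls(refs.T, bulk)

# Normalize weights to fractional contributions of each copper species.
fractions = weights / weights.sum()
for i, f in enumerate(fractions):
    print(f"reference {i}: {100 * f:.1f}% of bulk signal")
print(f"fit residual (L2 norm): {residual:.4f}")
```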

The soil was from the Pierrelaye plain, a 1200 ha truck-farming area about 30 km northwest of Paris, France. From 1899 to 1999, regular irrigation of the Pierrelaye plain with untreated sewage water from Paris caused contamination with heavy metals, mainly Zn, Pb, and Cu. Such pollution is pervasive worldwide because increasing populations and associated economic growth are diminishing available freshwater, thus leading to increased irrigation of farmlands with waste waters. In the initial soil, copper occurs in two morphological forms. One form decorates coarse organic particles that have some recognizable structures from reticular tissue, and the other occurs in
The rhizospheres were oxidizing, as indicated by the presence of iron oxyhydroxide, the absence of sulfide minerals, and the fact that P. australis and I. pseudoacorus are typical wetland plants with aerenchyma that facilitate oxygen flow from leaves to roots. Thermodynamic calculations using compositions of soil solutions collected below the rhizosphere indicate that Cu+ and Cu2+ species should have been dominant. These points, along with the occurrences of nanocrystalline Cu0 in plant cortical cells and as stringer morphologies outside the roots, together suggest that copper was reduced biotically. Ecosystem ecology of the rhizosphere indicates synergistic or multiple reactions by three types of organisms: plants, endomycorrhizal fungi, and bacteria. Normally, organisms maintain copper homeostasis through cation binding to bioactive molecules such as proteins and peptides. When bound, the Cu2+/Cu+ redox couple has elevated half-cell potentials that facilitate reactions in the electron-transport chain. Even though average healthy cell environments are sufficiently reducing, there are enough binding sites to maintain copper in its two oxidized states. Copper is also important in controlling cell-damaging free radicals produced at the end of the electron-transport chain, for example in the superoxide dismutase enzyme Cu-Zn-SOD, which accelerates the disproportionation of superoxide to O2 and hydrogen peroxide. However, unbound copper ions can catalyze the decomposition of hydrogen peroxide to water and more free radical species. To combat toxic copper and free radicals, many organisms overproduce enzymes such as catalase, chelates such as glutathione, and antioxidants. Mineralization could also be a defense against toxic copper, but reports of Cu+ and Cu2+ biominerals are rare; only copper sulfide in yeast and copper oxalate in lichens and fungi are known. Atacamite (Cu2(OH)3Cl) in worms does not appear to result from a biochemical defense. Biomineralization of copper metal may have occurred by a mechanism analogous to processes for metallic nanoparticle synthesis that exploit ligand properties of organic molecules. In these processes, organic molecules are used as templates to control the shape and size of metallic nanoparticles formed by adding strong reductants to bound cations. For copper nanoparticles and nanowires, a milder reductant, ascorbic acid, has been used.
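For reference, the redox reasoning above can be anchored with the Nernst equation and the standard half-cell potentials of the copper couples. The potentials quoted in the comments below are common textbook values at 25 °C, not measurements from this study:

```latex
\[
E \;=\; E^{0} \;-\; \frac{RT}{nF}\,\ln Q
\]
% Standard potentials (textbook values, 25 degrees C):
%   Cu^{2+} + e^-   -> Cu^{+}    E^0 ~ +0.16 V
%   Cu^{+}  + e^-   -> Cu^{0}    E^0 ~ +0.52 V
%   Cu^{2+} + 2e^-  -> Cu^{0}    E^0 ~ +0.34 V
```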

Spinal inhibition of FAAH also potentiated SIA via a CB1-dependent mechanism

These lipids are synthesized and released on demand, activate cannabinoid CB1 receptors with high affinity, and are metabolized in vivo by distinct hydrolytic pathways. Within the CNS, 2-AG is present in quantities 170–1000 times greater than those of anandamide. 2-AG serves as a full agonist at CB1 and CB2, whereas anandamide is less efficacious. 2-AG is preferentially degraded by monoacylglycerol lipase (MGL), whereas anandamide is preferentially hydrolyzed by fatty-acid amide hydrolase (FAAH). Mutant mice lacking the FAAH gene are impaired in their ability to degrade anandamide and display reduced pain sensation that is reversed by the CB1 antagonist rimonabant. Immunocytochemical studies have revealed a heterogeneous distribution of FAAH throughout the brain and moderate staining of FAAH-positive cells in the superficial dorsal horn of the spinal cord. In brain, the regional distribution of MGL partially overlaps with that of the CB1 receptor. Inhibition of MGL in the midbrain PAG increases 2-AG accumulation and enhances SIA in a CB1-dependent manner, supporting a physiological role for 2-AG in neural signaling. Thus, the anatomical distributions of FAAH and MGL are consistent with a role for these enzymes in terminating the activity of endocannabinoids. The present studies were conducted to investigate the contribution of endocannabinoids acting at spinal CB1 receptors to nonopioid SIA. We used high-performance liquid chromatography/mass spectrometry to examine the contribution of 2-AG and anandamide at the spinal level to SIA. We examined the time course of changes in endocannabinoid levels in the lumbar spinal cord of control rats and rats subjected to foot shock stress. Moreover, we used selective inhibitors of MGL and FAAH to further elucidate the roles of these endocannabinoids in SIA. We additionally evaluated effects of intrathecal administration of arachidonoylserotonin, a FAAH inhibitor that is inactive at phospholipase A2 and CB1 receptors, on SIA. We hypothesized that intrathecal administration of these inhibitors would potentiate nonopioid SIA via a CB1 mechanism. Preliminary results have been reported.

To determine whether endocannabinoid release participates in non-opioid SIA, we analyzed 2-AG and anandamide levels in the lumbar spinal cords of rats killed before or at various times after foot shock using LC/MS. Example chromatograms show the coelution of endogenous 2-AG and anandamide with synthetic standards. ANOVA revealed time-dependent changes in 2-AG levels derived from lumbar spinal cord extracts of shocked rats relative to non-shocked rats [F = 21.106, P < 0.001]. Post hoc analyses confirmed that 2-AG levels were significantly increased in rats killed at 2 min post shock and decreased at subsequent time points. By contrast, anandamide levels in the lumbar spinal cords of rats subjected to foot shock did not differ reliably from controls. Planned comparisons failed to reveal a significant elevation in anandamide levels at either 2 min or 5 min post shock. Foot shock also failed to alter either 2-AG or anandamide levels in tissues derived from the occipital cortex, an area enriched in cannabinoid CB1 receptors but not implicated in SIA. A significant correlation was observed between SIA and post-shock levels of 2-AG, but not anandamide, in the lumbar spinal cord over the same interval. We have previously demonstrated that nonopioid SIA is mediated by mobilization of two endocannabinoids, 2-AG and anandamide, in the midbrain PAG. The present results extend those findings and suggest that endocannabinoid actions at spinal CB1 receptors modulate SIA. Foot shock stress induced time-dependent changes in spinal 2-AG levels; a key aspect of these findings is that inhibition of MGL or FAAH prevented the deactivation of spinal 2-AG and anandamide, respectively, magnifying nonopioid SIA. Thus, our findings suggest that spinal endocannabinoids regulate, but do not mediate, nonopioid SIA. It is possible that the placement of chronic indwelling intrathecal catheters elevated anandamide levels in the behavioral studies, and that this change in endocannabinoid tone was augmented by the FAAH inhibitors. However, changes in endocannabinoid tone due to catheter placement appear unlikely because intrathecal rimonabant administration did not alter tail-flick latencies in the presence or absence of foot shock. Moreover, Fos immunocytochemical studies suggest that spinal compression induced by catheter placement does not induce appreciable neuronal activation.
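For readers who want to mirror this style of analysis, the sketch below shows a one-way ANOVA across time points and a behavior-analyte correlation using scipy. All numbers are hypothetical placeholders, not the study's data.

```python
# Hedged sketch: time-course ANOVA and behavior-analyte correlation,
# mirroring the analysis described in the text. Values are placeholders.
import numpy as np
from scipy import stats

# 2-AG levels per group (arbitrary units); hypothetical numbers.
no_shock  = np.array([4.1, 3.8, 4.4, 4.0, 3.9])
post_2min = np.array([6.9, 7.4, 6.5, 7.1, 6.8])
post_5min = np.array([3.2, 3.5, 3.0, 3.4, 3.1])

# One-way ANOVA across time points.
f_stat, p_val = stats.f_oneway(no_shock, post_2min, post_5min)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_val:.4f}")

# Correlation between SIA magnitude and post-shock 2-AG levels.
sia = np.array([52.0, 61.0, 45.0, 58.0, 50.0])  # hypothetical % analgesia
r, p = stats.pearsonr(post_2min, sia)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```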

Electrophysiological studies have provided evidence both for and against the existence of a tonic endocannabinoid tone at the spinal level. However, in our study, intrathecal rimonabant administration failed to suppress either basal nociceptive thresholds or nonopioid SIA. Hyperalgesic effects of spinal rimonabant may reflect differential sensitivity of supraspinally organized relative to spinally organized pain behaviors in measuring lowered nociceptive thresholds. It is also possible that exposure to uncontrolled stress mobilizes endocannabinoids in behavioral studies, thereby resulting in a CB1-dependent apparent hyperalgesia. Although high doses of anandamide activate transient receptor potential vanilloid 1 (TRPV1), several observations suggest that this mechanism is not responsible for the enhancements of SIA observed here. First, systemic administration of the TRPV1 antagonist capsazepine, at a dose that completely blocks capsaicin-induced antinociception, does not alter nonopioid SIA in our paradigm. Second, CB1 and TRPV1 show minimal colocalization at the axonal level in the spinal cord, with CB1 localized predominantly to laminae I and II interneurons and TRPV1 localized primarily to nociceptive primary afferent terminals. Third, anandamide activation of CB1 and TRPV1 typically induces opposing effects with distinct time courses. However, we observed only enhancements, not attenuation, of SIA following intrathecal administration of FAAH inhibitors and no change in SIA following antagonist treatment. Fourth, rimonabant completely blocked the potentiation of SIA induced by intrathecal administration of inhibitors of MGL or FAAH, at doses that were insufficient to reverse SIA when administered alone. Together, these observations suggest that anandamide acts through CB1 rather than TRPV1 at the spinal level to modulate SIA. The foot shock-induced increases in 2-AG levels were smaller than those observed previously in the dorsal midbrain of the same subjects.

This observation is consistent with the inability of spinally administered rimonabant to block nonopioid SIA. By contrast, a tenfold lower dose of rimonabant produced a robust suppression of SIA when micro-injected into the dorsolateral PAG. Our results collectively suggest that supraspinal sites of action play a pivotal role in endocannabinoid-mediated SIA. Our data suggest that spinal inhibition of MGL prolongs the duration of action of 2-AG, thereby enhancing endocannabinoid tone at spinal CB1 receptors to magnify SIA. This enhancement occurred in the absence of reliable changes in spinal anandamide levels. Thus, the antinociceptive effects of MGL inhibitors are not dependent upon the concurrent elevations in anandamide that were induced in the midbrain PAG following exposure to the same stressor. However, foot shock did not reliably increase spinal anandamide levels. Our failure to observe enhancements in anandamide accumulation may reflect lower absolute levels of anandamide relative to 2-AG and consequently higher variability in these measurements. Our data are consistent with the ability of FAAH inhibitors to selectively enhance accumulation of anandamide, but not 2-AG. FAAH may regulate both the distance endocannabinoids diffuse to engage CB1 receptors and the duration of endocannabinoid actions to increase accumulation of tonically released anandamide. It is also possible that the extent of on-demand anandamide synthesis may be underestimated in the present work due to the rapid metabolism of this lipid by FAAH. Mapping the distribution of MGL, FAAH, and CB1 in the spinal cord could further elucidate the anatomical and functional relationship between cells that degrade 2-AG and those expressing CB1. Cannabinoid receptor agonists such as Δ9-tetrahydrocannabinol have limited therapeutic applications at present, mainly because of their undesirable psychoactive effects. However, pharmacological agents that protect endocannabinoids such as 2-AG and anandamide from inactivation may lead to a more circumscribed spectrum of physiological responses than those produced by direct cannabinoid agonists. Ideally, this strategy would enhance endocannabinoid activity only at sites with on-going biosynthesis and release, thereby averting undesirable side effects. The possible use of drugs that inhibit endocannabinoid hydrolysis to treat pain in humans has thus propagated both hope and concern. FAAH is widely distributed throughout the body and implicated in the metabolism of a variety of anandamide analogues. Our data demonstrate that local enhancements of endocannabinoid actions at the spinal level are sufficient to potentiate SIA. Additional experiments will be necessary to determine whether inhibitors of endocannabinoid degradation may find therapeutic applications in the treatment of pain and stress-related disorders. Self-cutting can be understood clinically as a symptomatic behavior, on the one hand, and as a bodily practice embedded in a cultural imaginary and identity on the other. It is present in a variety of ways including the 1993 memoir of Susanna Kaysen, “Girl, Interrupted”; the 1995 acknowledgment by Princess Diana that she identified herself as a “cutter”; and the 2011 video “F**kin’ Perfect” by the pop music performer Pink. The Internet has become a massively popular resource for cutters to share information, and one study identified more than 400 message boards about cutting generated via five search engines.
Youths may identify with “Emo” or “Goth” cultures, which lionize depression and cultivate self-cutting as a cultural practice. Popular concern about perceived dangers of self-cutting has at times been heightened to the point that one cultural historian suggested that “Cutting has become a new moral panic about the dangers confronting today’s youth”.

Anthropology has not been disposed toward addressing cutting as a problematic cultural or clinical phenomenon, given the disciplinary propensity to understand body mutilation and modification in terms of rituals and cultural practices. This is perhaps because ritual meaning is not so dependent on distinguishing whether harm is inflicted by others or by oneself, or on differentiating cultural practice from psychopathology. One other anthropological observation has been provided by Lester, who notes that current explanations of self-harm can be grouped into four categories: communicating emotional pain, emotional or physiological self-regulation, interpersonal strategy, and cultural trend. She notes that these categories share the idea that self-harm manifests individual pathology or dysfunction, with the cultural assumption of the individual as a rational actor. In contrast, an anthropological perspective emphasizes the “cultural actor who embodies and responds to cultural systems of meaning to internal psychological or physiological states”. Emphasizing the powerful symbolic significance and long cross-cultural record of self-harm and blood shedding as ritual and even therapeutic practices, she suggests that contemporary cutting may be seen as privatized and decontextualized social rituals effecting transformation parallel to collective initiation rituals that operate in a cycle of self-harm and repair, especially in the case of adolescent girls struggling with the aftermath of sexual abuse and/or with contradictory gender messages. The sociocultural characterization of the typical “self-cutter” that emerged in the 1960s, as a Euro-American, attractive, intelligent, and possibly sexually adventurous teenage girl, was, Brickman claimed, partially taken up in medical discourse in a manner that “pathologizes the female body, relying on the notion of ‘femininity as a disease’”. Gilman took exception to assumptions of pathology with the provocative claim that “self-cutting is a reasonable response to an irrational world”. From a clinical vantage point, self-cutting is often viewed as a type of injury or harm to the self. The historical backdrop to this development can be traced to Menninger’s attention to self-mutilation as distinguished from suicidality. The distinction between “delicate” and “coarse” self-cutting was made by Pao, with Weissman focusing on wrist-cutting syndrome and Pattison and Kahan proposing the existence of a deliberate self-harm syndrome. Favazza provided cases of extreme and highly unusual forms of self-mutilation in excruciating detail, with an attempt to classify types based on severity. With the provisional emergence of nonsuicidal self-injury disorder criteria in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), the distinction between self-harm as within a normative or pathological range remains equivocal. This is illustrative of the manner in which conceptualizations of self-cutting continue to be embedded in a complex cultural history of changes in the incidence, popular awareness, and social conditions in which such phenomena occur. While it is possible to find clinical, psychometric, survey, and historical approaches to the phenomenon of self-cutting, we lack an ethnographic account with a substantive locus in the interactions of individuals, grounded in the specificity of bodily experience and the immediacy of struggle in the face of existential precarity.
In this article, we take a step toward such an account with a discussion situated at the intersection of two anthropological concerns. First is the ethnographic understanding of experiential specificity through anthropological adaptation of phenomenological method.

Preliminary examinations of these policies suggest that many of them may also address drug use

Notably, our post hoc sensitivity analyses of race/ethnicity only utilized the years 1989-2013, indicating that the discrepancies between our findings and Cil's probably are not due to the different time frames.
Strengths and Limitations
This is the first study to examine all policies related to alcohol use in pregnancy simultaneously across all 50 states, using a time frame long enough to capture the period before any laws were enacted. Furthermore, for most of the time frame the data include the entire population of singleton births in the United States, and for the years 1972-1984 they include a 50% sample, which makes questions regarding inference and generalizability essentially irrelevant. Another major advantage of these data over, for example, survey data regarding alcohol use during pregnancy is that biases due to self-report are not present. Finally, our results were robust across various model specifications, further strengthening our conclusions. The main limitation of this study is that Vital Statistics birth certificate data are not collected for research purposes; therefore, we cannot adjust for maternal-level alcohol or tobacco use. Although maternal alcohol and tobacco use have been recorded on birth certificates since 1989, these data have been shown to be invalid. We adjusted for state-level alcohol and tobacco consumption instead. Another limitation is that race has been measured inconsistently on birth certificate data over time. Only in 1989 did states begin to document ethnicity as well as race, although this was phased in over the 1990s. Our primary analyses did not account for ethnicity; e.g., White Hispanic and White non-Hispanic women are in a single group. Such an approach is reasonable because birth outcomes are similar between White non-Hispanic and Hispanic births, both of which differ from Black birth outcomes.
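The excerpt does not spell out the study's exact estimating equation, but state-by-year policy analyses of this kind are commonly fit as two-way fixed-effects regressions with state-clustered standard errors. The sketch below is a hypothetical illustration in Python with statsmodels; the data file, variable names, and outcome are invented for the example, not taken from the study.

```python
# Hedged sketch: a generic two-way fixed-effects model relating a state
# pregnancy-alcohol policy indicator to a birth outcome. Variable names
# are invented; the study's actual specification is not given here.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_year_births.csv")  # hypothetical panel extract

model = smf.ols(
    "low_birth_weight_rate ~ policy_in_effect"
    " + state_alcohol_consumption + state_tobacco_consumption"
    " + C(state) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

print(model.params["policy_in_effect"])  # estimated policy effect
```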

Measurement of key outcome variables, particularly gestational age, changed over time as well. We applied approaches developed in later years to correct implausible gestational age values in earlier years of Vital Statistics data to improve consistency. Also, for these analyses, we focused specifically on policies targeting alcohol use during pregnancy. Future research is needed to explore whether the findings generalize to policies targeting drug use during pregnancy. Framing drug policy in the language of supply-side vs. demand-side programs reflects the increasing diffusion of economic thinking from the business place to other domains of American life. The idea is that some interventions involve supply while others involve demand, and that there is a drug control budget pie that can be sliced along these lines. But there are some drawbacks to this framing. As Murphy has documented, the notion that we can simply shift monies from one portion of a federal drug budget to another is naïve; there is no single allocating authority, and the "budget" is a mythical post-hoc construction assembled from a variety of conflicting sources and entities. And supply and demand factors are clearly interdependent and endogenous. The alternative idea of a "public health" framing of drug policy is refreshing, but in practice it tends to devolve to the "demand reduction" frame. Instead, we will try to keep the focus on strategies, rather than tactics; goals rather than programs. Our framework for doing so is sketched here and is developed in greater detail elsewhere. Our perspective will not appeal to everyone. In particular, our framework is irrelevant for people who hold that certain moral beliefs trump any consideration of consequences.

There are two such deontological positions. One is the libertarian belief that ingesting psychoactive substances is our birthright. At the other extreme is legal moralism, the belief that drug intoxication is intrinsically immoral. Based on an extensive analysis of drug policy rhetoric, we conclude that few people are strict libertarians or pure legal moralists with respect to drugs. Most people who argue that either drug use or drug prohibition is immoral usually cite empirical arguments in support of their positions. At this point, we bid pure libertarians and legal moralists adieu. For the consequentialists, we suggest three broad goals: prevalence reduction (fewer users), quantity reduction (less use per user), and micro harm reduction (less harm per unit of use). Practices and concepts most readily identified with prevalence reduction include abstinence, prevention, deterrence, and incapacitation. Practices and concepts most readily identified with harm reduction include safe-use and safe-sex educational materials, needle exchanges, and the free distribution of condoms to students. Traditional discussions of prevention, treatment, deterrence, and incapacitation focus almost exclusively on the first category, with the implicit assumption that the best way to eliminate harm is to eliminate prevalence, turning users into non-users. This is logically correct, but not very realistic. Prevalence reduction may be employed in the hope of reducing drug-related harms, but because it directly targets use, any influence on harm is indirect. Harm reduction directly targets harms; any influence on use is indirect. From an analytic standpoint, all three strategies contribute to a broader goal, macro harm reduction. For tangible harms, Macro Harm = Micro Harm x Prevalence x Quantity, summed across types of harm.
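Written out explicitly, with a subscript h indexing harm types (notation added here for clarity, not taken from the source), the identity reads:

```latex
\[
\text{Macro Harm} \;=\; \sum_{h}\;
\text{Micro Harm}_{h} \times \text{Prevalence} \times \text{Quantity}
\]
```

An intervention that lowers any one factor lowers total harm, holding the other two fixed; the tension discussed next arises when lowering one factor raises another.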

The strategies are potentially in tension, particularly if efforts to reduce prevalence increase harm, if efforts to reduce quantity discourage abstinence, or if efforts to reduce average harm encourage greater prevalence or quantity. Thus, any drug policy intervention should be evaluated with respect to all three criteria, prevalence reduction, quantity reduction, and harm reduction, because all three contribute to the reduction of total drug harm. Note that our use of "harm reduction" is unusual here, in that we are not referring to specific "harm reduction" programs like needle exchange, but rather to a goal that is served, well or poorly, by any intervention. For that reason, we will discuss harm reduction in the context of traditional interventions like policing, prevention, and treatment. Why is psychoactive drug use a crime? And is there a sensible answer that also explains why tobacco and alcohol are on one side of the legal threshold, while marijuana, cocaine, the opiates, and the psychedelics are on the other? One way of tackling this question is historical, and there are a number of outstanding histories of the roles played by race, class, and economic interests in the evolution of drug, tobacco, and alcohol control. Another approach is philosophical. If we were starting a society from scratch, which substances, if any, would we prohibit? The traditional first cut at this question uses John Stuart Mill’s harm principle: “That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant.” MacCoun, Reuter, and Schelling listed nearly 50 different categories of drug-related harm, falling into four clusters: health; social and economic functioning; safety and public order; and criminal justice. Many are quantifiable, at least in principle, but others are not. The authors attempted to categorize these harms with respect to two questions: Who is the primary bearer of the harm? And what is the primary source of the harm? Harms to others meet Mill’s criterion, but that hardly nails down the case for prohibition. MacCoun, Reuter, and Schelling argued that for over half of the harm categories, the primary source of the harm was either the illegal status of the drug or the enforcement of that law, at least under the current prohibition regime. The notion that prohibition and its enforcement are partially responsible for drug harms is perhaps best illustrated by examining the relationship between an offender’s illicit drug use and his or her involvement in other crimes. A considerable literature on this relationship suggests the following conclusions. Drug use can promote other crimes; criminality can promote drug use; and/or both can be promoted by environmental, situational, dispositional, and/or biological “third variables.” All three pathways have empirical support in at least some settings and populations. But these causal influences are probabilistic, not deterministic. Most drug users are not otherwise involved in serious crime. Finally, the drug-crime link varies across individuals, over time within an individual’s development, across situations, and possibly over time periods. Like many things in life that are bounded at zero, the frequency distribution of drug consumption has a positively skewed log-normal shape.
If one plots the proportion of all users as a function of quantity consumed, most users pile up on the low side of the quantity distribution, but the plot will have a long, narrow right tail representing a small proportion of users who consume very large quantities. As a result, the harmful consequences of substance use are not uniform, but are disproportionately concentrated among the heaviest users.
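A small simulation makes the point concrete. The sketch below, with log-normal parameters chosen only for illustration (not estimated from any drug survey), shows how the long right tail concentrates total consumption among the heaviest users:

```python
# Hedged sketch: illustrate how a log-normal consumption distribution
# concentrates total consumption in its right tail. Parameters are
# arbitrary illustrations, not estimates from any data source.
import numpy as np

rng = np.random.default_rng(0)
consumption = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

# Share of total consumption attributable to the top 10% of users.
top10_cutoff = np.percentile(consumption, 90)
top10_share = consumption[consumption >= top10_cutoff].sum() / consumption.sum()

print(f"Top 10% of users account for {100 * top10_share:.0f}% of consumption")
print(f"Median use: {np.median(consumption):.2f}; "
      f"mean use: {consumption.mean():.2f} (mean far above median)")
```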

Everingham and Rydell used these features to explain why cocaine-related harms remained high even as total prevalence was dropping; one sees a similar logic today in methamphetamine statistics. There is a sophisticated treatment of these distributional features and their implications for the targeting of interventions, but far less discussion in the illicit drug literature. Another distributional consideration is how drug use and drug harms are distributed across geographic, class, and ethnic lines. African Americans use illicit drugs at a rate similar to European Americans, but they bear a disproportionate share of the law enforcement risk and market-related violence. This is partly due to the fact that poorer neighborhoods lack the social capital needed to resist open-air drug markets. But it also reflects the deleterious effects of our mandatory minimum sentencing policies, discussed below. Finally, drug problems are distributed over time. Musto argues that drug epidemics are dampened by a generational learning process in which new cohorts observe the harmful results of a drug on older users. Building on this idea, Caulkins and his colleagues have developed sophisticated models of how interventions may provide less or greater leverage at different points in a drug epidemic. They argue, for example, that supply reduction measures will be more effective in the early stages of an epidemic but relatively ineffective in a large, mature, established market. Conversely, prevention and treatment may have limited effectiveness early in an epidemic: prevention because its effects are so lagged, and treatment because it interferes with generational learning about drug harms. This work is necessarily fairly speculative at present; we lack enough long-term time-series data to permit serious testing of such hypotheses. But their analyses are valuable in encouraging another dimension of more strategic thinking. Scholars rely heavily on counts of arrest rates and victimization reports to track trends in most categories of illicit behavior. In contrast, the literature on illicit drug use relies much more heavily on surveys of self-reported drug use and, to a lesser extent, drug-related medical events. This probably reflects the sheer prevalence of drug use in the population, as well as the more diffuse linkage between the criminal act and any harms to innocent victims. The 2007 Monitoring the Future annual school survey has the longest-running consistently measured time series for substance use in the US. Figure 1 shows trends in past-month prevalence for selected substances for 12th graders. Several patterns are apparent. First, alcohol remains the most common psychoactive substance among high school seniors. Second, in the most recent year, monthly alcohol and cigarette use each reached their lowest recorded levels. Third, past-month marijuana use reached its peak around 1979, hit a low in 1992, and has stabilized near 20 percent for the past decade. Finally, recent use of cocaine, MDMA, or methamphetamine is fairly rare among high school seniors. MDMA use seems to have peaked at 3.6 percent in the year 2000, cocaine use has remained fairly stable at around 2 percent, and methamphetamine has dropped from 1.9 percent in 2000 to 0.9 percent in 2006. Table 1 shows past-month prevalence of various substances by age category, from the household-based 2006 National Survey on Drug Use and Health. For each substance, young adults were the most frequent recent users, with one exception: heroin.
New heroin initiation is rare; the US heroin problem mostly involves an aging cohort of addicts who initiated use in their youth. The overall methamphetamine rates obscure the fact that prevalence remains considerably higher in the Western US than in the Midwest, the South, and the Northeast, and is higher among whites and Latinos than among African Americans.

The relevance of particular models for other disorders should also receive attention

Most animal working memory paradigms utilize delays, while human paradigms manipulate the number of information items, limiting their translational validity. Some rodent span tasks do exist, and models of BD should also be tested in these paradigms as well as in tasks that investigate other cognitive domains, e.g., executive functioning in the attentional set-shifting task. Another shortcoming of the presented work is the focus of studies on the manic phase of BD. For instance, the reduced DAT functioning models lack any depression-relevant behaviors, limiting their utility for BD individuals who experience both mania and depression. Future studies should conduct longitudinal and cross-sectional studies of BD patients across the mood state spectrum in translationally relevant paradigms to establish patient behavioral profiles as targets for animal models. For instance, although both depressed and manic patients exhibit impaired IGT performance, depressed subjects seem to be more sensitive to punishment, in contrast to the reward hypersensitivity exhibited by manic subjects. Such data could provide clues as to what behavior should be exhibited by the model animal and what mechanisms should be explored for each phase of the disorder. From a biological perspective, there is renewed evidence suggesting that impaired cholinergic mechanisms play a key role during bipolar depression, which may be restored during euthymic phases and switch to aberrant dopaminergic signaling during mania. Thus, ideally, future research should focus on the cyclical nature of BD when attempting to model the disease and discover novel treatments. It is possible that the molecular clock machinery in BD is susceptible to internal and external stimuli such as stress or daylight length, which subsequently can change the homeostasis of the dopamine (DA)/acetylcholine (ACh) systems, resulting in a shift to either mania or depression.

As reviewed elsewhere, it has been reported that: 1) mania onset appears to be linked to long periods of daylight, while depression onset is associated with short daylight durations; 2) concomitant neurotransmitter switching occurs, whereby elevated dopamine is apparent during long daylight lengths and corticotropin-releasing factor (CRF) during short daylight lengths; and 3) increased CRF expression elevates ACh levels in a manner that may predispose to depression. Accordingly, experiments that manipulate the durations of daylight periods in animal models prior to tests using cross-species paradigms relevant to both mania and depression could help to elucidate mechanisms related to the disparate phases of BD and the switching between states that is so characteristic of the disorder. DAT KD/KO mice have also been described as models for ADHD, and indeed hyperactivity, inattention, and impaired decision-making are present in both ADHD and BD populations. Moreover, reduced striatal DAT levels have also been observed in ADHD subjects. However, ADHD subjects tested to date display a different BPM profile from BD subjects and DAT KD mice. Furthermore, DAT KD mice are also hypersensitive to psychostimulants, similar to stimulant-induced mania. In contrast, the hyperactivity of DAT KO mice is attenuated by such stimulants, consistent with ADHD treatments. Because ADHD and BD present with similar clinical symptoms and comorbidity is high, elucidating differences between these disorders in order to avoid potentially harmful treatment of wrongly diagnosed ADHD with stimulants will be important future research. The overlap of behaviors affected in BD and ADHD that reduced DAT functioning recreates may relate more to the cognitive/behavioral domains affected than to a specific disease state. The National Institute of Mental Health has proposed and promoted the Research Domain Criteria (RDoC) initiative, which bypasses diagnostic categories and focuses instead on classifying psychopathology based on dimensions of functioning in patients.

Such an approach may show that reduced DAT activity affects these domains of functioning irrespective of disease state, and that patients with BD simply have lower DAT expression during a manic episode. The reduced DAT models we present here therefore address key aspects of increased dopaminergic transmission. Hence, examining DAT levels and the behavior of patients across diagnostic categories using these translationally relevant paradigms could be a useful RDoC approach. Another avenue of research could include investigating the highly prevalent cannabis usage among BD patients. Although substance abuse in general is high in psychiatric populations, there is evidence that suggests BD patients may be using cannabis as a means to self-medicate. Importantly, interactions exist between the cannabinoid and DA systems; hence, future research should investigate their relationship with BD. While the effects of cannabidiol on amphetamine-induced oxidative stress have been investigated, no attempts to measure a behavioral profile were made and, as described above, selective DAT inhibition may better recreate what occurs in BD patients. More research is therefore required in clinical and model behavioral studies. Finally, other neurotransmitter systems relevant to BD besides the catecholamines should be studied, e.g., the involvement of glutamate and NMDA receptors. Advances in scientific discoveries for mental health disorders have been vast in recent years and are increasing in pace. Unfortunately, the research-to-market success rate for drug development is merely 1 in 1000 compounds. To develop a BD-targeted treatment, it will be important to use a model animal for BD based on etiological and predictive validity. The application of translational approaches will likely reduce the risk of conducting expensive late-phase clinical trials for novel drugs that prove ineffective. A broad range of behaviors across species should be assessed, and the labeling of stimulant-induced hyperactivity as 'mania-like' should be reconsidered. Utilizing models with altered mechanisms that potentially underlie BD and exposing them to a set of translational tests used to characterize deficits in patients fills a crucial gap between preclinical and clinical research.

This work would then provide preliminary evidence for targeted mechanistic/cognitive studies in patients during illness states to link preclinical evidence to the clinical state. This translational approach will benefit both the understanding of mechanisms implicated in BD pathology and the opportunity to test novel therapeutics. Importantly, despite having been neglected for BD research in the past, this approach includes assessing neurocognitive deficits in patients. Combining relevant environmental manipulations with genetic susceptibility models that contribute to switching may improve our knowledge about the oscillatory nature of BD. Although few models of the switching between states in BD exist, some have been proposed and would benefit from translational assessment, as has previously been thoroughly reviewed. Ultimately, using translational approaches as described in this review may eventually yield novel therapies that specifically target the biological circuitry of BD, thus improving the lives of patients. Obtaining informed consent for human participation in research is a regulatory obligation; however, it is also an ethical priority. Every effort should be made to ensure all aspects of study participation are clearly presented and easy to understand. Institutional review boards (IRBs) are the administrative bodies that enforce the federal regulations for protection of human research subjects, with a goal of evaluating the probability and magnitude of potential risks of harm against potential benefits of knowledge gained, recommending strategies for managing risks, and generally promoting rigorous and ethical research. Although IRBs are responsible for reviewing and approving the information presented about a study, the actual process of obtaining informed consent from a potential research participant can vary both within and across studies. As a result, informed consent delivery processes can range from simply letting prospective participants review the informed consent information alone to actively engaging with individuals in a face-to-face discussion to increase the likelihood that study information is understood well enough for the prospective volunteer to make a decision about participation. Whereas informed consent is a cornerstone of behavioral and biomedical research ethics, there are no requirements to assess whether the informed consent process used in a particular study is effective in facilitating authentic informed consent. Research on this topic, however, suggests as many as 50% of participants do not understand some or all components of informed consent across surgical and clinical trials. There are likely several reasons for this: the inclusion of research-oriented language with which many participants may not be familiar; the inclusion of complex, IRB-required language addressing liability; and the extensive length of consent documents, which often makes them difficult to navigate and comprehend. Individual IRBs typically require informed consent documents to be at or below a certain grade reading level, often ranging from 5th-grade to 10th-grade reading levels; however, many informed consent documents have reading levels well above those standards. For example, one study examined the readability of 124 HIV clinical trial consent forms and found a median readability of 9.2 grade level, with confidentiality sections at a median of 12.6 grade level and overall document length of almost 30 pages on average.
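Readability grades like those cited above are typically computed with formulas such as the Flesch-Kincaid grade level, which depends only on average sentence length and average syllables per word. Below is a rough sketch; the syllable counter is a crude heuristic, and the excerpt does not say which readability formula the cited study used:

```python
# Hedged sketch: approximate Flesch-Kincaid grade level of a consent text.
# FK grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
# The syllable count below is a crude vowel-group heuristic.
import re

def count_syllables(word: str) -> int:
    # Count runs of vowels as syllables; floor at one per word.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

sample = ("You may withdraw from this study at any time. "
          "Your decision will not affect the care you receive.")
print(f"Approximate FK grade level: {fk_grade(sample):.1f}")
```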
Ensuring participants are satisfied with the informed consent process is particularly important at research facilities that have long-term relationships with participants, and/or among vulnerable patient populations that may have particularly strong concerns about privacy protections and data confidentiality. For example, HIV is an acquired and potentially transmissible disease that is stigmatized in both social and medical settings.

Although the body of research on people living with HIV is vast, there is only one study that specifically evaluated the thoughts of PWH on the informed consent process. This study, conducted 25 years ago, assessed the informed consent process for an HIV drug trial and found approximately 56% of participants reported understanding all information on informed consent forms and 21% thought too much information was included. Unfortunately, the authors did not evaluate participant thoughts about what information was the most or least helpful, which is necessary for improving the informed consent process. The lack of informed consent research among PWH highlights the need for greater understanding of how PWH perceive the informed consent process. This research on the consent process is essential if we are to improve the likelihood that consent is truly “informed,” and, moreover, ensure individuals have the information needed to provide their voluntary agreement to participate in research. There are validated assessment tools used to evaluate informed consent comprehension (e.g., Miller et al.). These tools have been useful particularly with populations with diminished cognitive capacity. While knowledge assessment tools can provide some indication that relevant information has been conveyed and, potentially, understood, more research is needed to identify what influences meaningful and authentic informed consent. The purpose of this study was to evaluate the informed consent process used at a large U.S. HIV research center among participants with and without HIV. An overarching goal was to better understand participant perceptions to inform improvements of the informed consent content and process. Specifically, we assessed aspects of the informed consent content that participants found most and least informative, and explored whether this differed by HIV serostatus. We also examined the efficacy of our informed consent process by assessing whether participants thought the information presented was consistent with what they experienced during the study. Participants enrolled in ongoing studies at the HIV Neurobehavioral Research Program completed a questionnaire regarding their experience with the informed consent process. The questionnaire was administered to participants immediately after enrolling as well as after completing their study visit. All participants were taken through the informed consent process on the same day as their study visit. All participants who came to the HNRP between May 5, 2017 and July 11, 2017 were invited to complete the consent questionnaire. One participant declined due to the paperwork burden of the primary study in which they were enrolled. The participants who chose to enroll in the present study were enrolled in a wide variety of both cross-sectional and longitudinal studies, ranging in complexity and length of assessment, that involved completion of self-report measures, neuropsychological testing, and participation in clinical trials designed to test the efficacy of new drugs for improving cognition and physical outcomes among PWH. A recruiter from the HNRP was responsible for presenting the informed consent information for this study to our existing HNRP participants. On average, recruitment staff have been with the HNRP for 9.8 years and have an established professional relationship with study participants. Our recruitment staff adhere to a standardized informed consent process as follows.

We observed non-significant decreases in daily marijuana use across time points

Contrary to expert predictions, binge alcohol drinking and recreational drug use declined from pre-pandemic data collection to early-pandemic follow-up. Though we cannot identify specific factors that inform these trends, findings from research with PLHIV suggest a potential association between perceived susceptibility to SARS-CoV-2 acquisition and symptoms and changes in recreational drug use behaviors. The pandemic ushered in widespread fear, anxiety, and uncertainty, as no one knew whether HIV might increase the risk for SARS-CoV-2 acquisition or for more severe COVID-19. Given these fears and perceptions, the psychosocial and behavioral strengths that influenced the decline of alcohol and recreational drug use behaviors warrant further scrutiny. Alternatively, alcohol and recreational drug use may have decreased because of structural COVID-19 mitigation practices implemented nationwide. Local mandates limited social gatherings and shuttered businesses, practices that may have stymied alcohol and recreational drug use, especially among persons who consume substances primarily within social and sexual contexts. PLHIV's adherence levels to COVID-19 mitigation efforts, like social distancing, may better inform changes in substance use behaviors. When observing between-group differences, depressive symptoms were consistently associated with each substance use trajectory. These findings are consistent with prior studies. Though researchers have previously linked social support and loneliness to alcohol and substance use behaviors among PLHIV, our analyses yielded inconsistent findings. In bivariate tests, loneliness levels early in the pandemic were positively associated with binge drinking and substance use trajectories. In multivariable models, functional social support was inversely associated with membership in the 'any recreational drug use' trajectory group, reinforcing the health-promotive qualities of social relationships in PLHIV's lives.

Our non-significant findings may be attributed to the limited variability of psychosocial factors within our sample. Overall, PLHIV in the MWCCS reported low levels of loneliness and high levels of functional support when the pandemic began. Over 75% reported having two to three people that they can rely on for support. Consistent with prior studies, male sex was positively linked to substance use trajectories, which may reflect sex-related disparities in psychosocial and environmental risk factors. Although we examined group-based trajectories in alcohol, marijuana, and recreational drug use, we did not analyze other consequences of COVID-19, such as changes in livelihood and the impact of the pandemic on loved ones. Therefore, we cannot pinpoint specific stressors or strengths that shaped our participants’ substance use trajectories. Our results may not apply to other samples of men and women living with HIV. MACS and WIHS participants are part of a longstanding HIV survivor cohort and are predominantly middle-aged, although the mean age of our sample aligns with the statistic that over half of PLHIV in the United States are above the age of 50. In prior studies, pre-COVID substance use trajectories in PLHIV have exhibited age-related differences in the specific substances consumed; therefore, cohorts of younger individuals may report different COVID-19 substance use patterns. Our sample also represents people who are largely linked to health care and accustomed to the requirements involved in cohort study participation. Furthermore, the right-skewed distribution of some of our study’s psychosocial variables may account for findings that are inconsistent with, or differ from, those observed in more psychosocially diverse samples. Our sample’s low rates of substance use, particularly with recreational drugs, limited our capacity to explore more nuanced ways of looking at group-based trajectories, such as the frequency and specific types of substances that participants used. Furthermore, we were unable to distinguish participants who use marijuana for medical versus recreational purposes; many study sites are in states where HIV seropositivity is an eligible indicator for medical cannabis use. Finally, our study is susceptible to the inherent challenges of conducting survey research, including limitations linked to self-report, subjectivity, social desirability, and the proclivity of participants to under-report behaviors on sensitive topics, like mental health and substance use.

Future studies should investigate the ongoing impact of COVID-19 on the psychosocial well-being of PLHIV. This study examined substance use trajectories only in the short term. Future MWCCS analyses may improve precision in identifying substance use trajectories among PLHIV and assess the long-term psychosocial consequences of the pandemic beyond the three time points included in our analysis. Because depressive symptoms, loneliness, and social support variables were collected at the start of the COVID-19 pandemic, future analyses should examine how changes in co-occurring psychosocial profiles alter PLHIV’s substance use trajectories. Additional studies may benefit from alternative conceptualizations of substance use to inform pandemic-related trajectories. Furthermore, the Centers for Disease Control and Prevention reported that overdose deaths accelerated during the COVID-19 pandemic. Though our study focuses on substance use frequency, future studies should evaluate how pandemic stressors have affected the amount of alcohol and drugs PLHIV consumed at times of use. Finally, researchers should implement mixed-methods approaches to better identify health-promotive factors that interrupt problematic trajectories in substance use. The COVID-19 pandemic has emphasized the importance of monitoring and evaluating alcohol and substance use in PLHIV. Since the start of the pandemic, health providers have adapted their service provision to maximize patients’ retention in healthcare. As mitigation recommendations continue to be updated, consistent evaluations of PLHIV’s substance use trajectories amidst the COVID-19 pandemic may better equip health systems to respond effectively to complex and co-occurring challenges as new public health emergencies arise. Furthermore, researchers predicted that mitigation strategies elevated people’s risks for adverse conditions, like depression and loneliness. These conditions already disproportionately affect PLHIV and are linked to decreased engagement in HIV care. Yet, we found that binge alcohol consumption and recreational drug use decreased at the beginning of the pandemic among substance-using PLHIV.

Furthermore, depressive symptoms distinguished PLHIV with substance use trajectories from those with non-use trajectories. Ongoing surveillance must continue in order to substantiate our findings, elucidate the COVID-19 pandemic’s complex and co-occurring contributions to psychosocial conditions, and identify the supportive factors that interrupt negative behavioral health trajectories experienced by PLHIV.

The exact pathophysiology and aetiology of the severe mental disorders schizophrenia and bipolar disorder remain unknown. They have been hypothesized to be part of the same psychosis continuum since, in addition to overlapping symptoms, they share some genetic underpinnings, cognitive impairments, and brain anatomical abnormalities. Whereas pre- and perinatal complications have been established as risk factors for schizophrenia, the evidence for an association between pre- and perinatal adversities and the risk for bipolar disorder is less consistent. Some authors have argued that, in genetically susceptible individuals, the absence of pre- and perinatal complications favours the development of bipolar disorder whereas their presence favours the development of schizophrenia. Nevertheless, some epidemiological studies suggest that pre- and perinatal factors may increase the risk for bipolar disorder and affective psychosis. Hultman et al. demonstrated an association between specific obstetric complications and affective psychosis; increasing birth weight was found to associate linearly with decreased risk for affective disorders; and, recently, an increased risk for bipolar disorder in children born pre-term [odds ratio 2.7, 95% confidence interval 1.6–4.5] was reported. Accordingly, neurodevelopmental disturbances and/or pre- and perinatal trauma may also be of importance for the development of bipolar disorder. Magnetic resonance imaging studies have demonstrated the existence of neuroanatomical abnormalities in bipolar disorder, the most consistent finding being enlarged ventricular volumes. The results for other brain structures differ among studies, possibly due to low sample sizes and confounding factors, such as lithium medication. Recent meta-analyses report that lithium-naive patients with bipolar disorder have smaller hippocampal and amygdala volumes compared with patients who receive lithium medication and with healthy controls. There is also some evidence supporting more pronounced brain abnormalities in patients with psychotic bipolar disorder than in patients with non-psychotic disorder. The mechanisms underlying the structural brain abnormalities observed in bipolar disorder are not completely known. Post-mortem studies have demonstrated reduced neural somal size and neuron numbers in the amygdala, and reduced numbers of parvalbumin- and somatostatin-expressing interneurons and reduced pyramidal cell size in the hippocampus, of bipolar disorder patients. These neuronal changes may have a developmental origin, given that animal models have demonstrated long-term neuronal loss in the amygdala and reduced pyramidal cell size in the hippocampus following pre- and perinatal hypoxia. Moreover, smaller hippocampal volumes and larger ventricular volumes have been demonstrated in schizophrenia patients with a history of pre- and perinatal hypoxia. Enlarged ventricles have also been observed in schizophrenia patients who suffered prolonged birth.
In schizophrenia, smaller hippocampal volumes have been reported following OCs in general, and severe OCs have been reported to interact with the hypoxia-regulated GRM3 gene to affect hippocampal volume. Smaller hippocampal volume and reduced grey matter have been observed in otherwise healthy subjects born very preterm. Smaller hippocampal volume has also been reported in healthy adolescents following perinatal asphyxia, and long-term reductions of grey matter in the amygdala have been observed in children with neonatal hypoxia–ischaemia. Hence, it is plausible that pre- and perinatal complications affect the brain structure abnormalities associated with bipolar disorder.

The aim of the current study was to investigate the relationship between pre- and perinatal trauma and brain structure volumes in patients with bipolar disorder. Specifically, we studied the relationship between two measures of pre-/perinatal trauma [i.e. an established composite severe OCs score comprising complications occurring throughout the whole pre- and perinatal period, and a diagnosis of perinatal asphyxia, a distinct complication shown by animal models to cause long-term brain abnormalities] and three brain volumes either previously reported to be associated with OCs in schizophrenia or associated with bipolar disorder. We hypothesized that perinatal asphyxia and severe OCs would be associated with smaller hippocampus and amygdala volumes, and with larger ventricular volumes, in patients with bipolar disorder, and that the associations would be stronger in those with psychotic than in those with non-psychotic disorder. This latter prediction is based on the findings of more pronounced brain abnormalities and cognitive impairments in the psychotic than the non-psychotic form of bipolar disorder. In addition, psychotic bipolar disorder may be more similar to schizophrenia, a disorder in which patients with OCs show more pronounced brain abnormalities than patients without such complications. To our knowledge, this is the first study to explore the association between hypoxia-related OCs, in particular perinatal asphyxia, and neuroanatomy in bipolar disorder.

We found that perinatal asphyxia and severe OCs were related to smaller amygdala and hippocampal volumes in patients with bipolar disorder. Whereas patients with psychotic bipolar disorder showed reduced amygdala volume following perinatal asphyxia, patients with non-psychotic bipolar disorder showed reduced hippocampal volume following perinatal asphyxia and severe OCs, after adjustment for multiple comparisons and controlling for the effects of age, sex, ICV, and medication use. To the best of our knowledge, this is the first study to investigate associations between hypoxia-related pre- and perinatal complications and brain MRI morphometry in bipolar disorder. Our findings indicate that perinatal hypoxic brain trauma is of importance for adult brain morphology in bipolar disorder, and may thus be a neurodevelopmental factor of importance to disease development. This concurs to some extent with large-scale epidemiological studies that report lower birth weight, specific OCs, and premature birth to increase the risk for bipolar disorder or affective psychosis. Indeed, in a subject sample overlapping with the current study, we have previously demonstrated lower birth weight to correlate with smaller brain cortical surface area in patients across the psychosis spectrum as well as healthy controls. The results from the current study expand on this by demonstrating distinct associations between specific hypoxia-related pre- and perinatal complications and sub-cortical structures known to be vulnerable to perinatal hypoxia in patients with bipolar disorder. As such, the current findings to some extent support the speculation by Nosarti et al. that there may exist a neurodevelopmental subtype of bipolar disorder. Within the whole group of bipolar disorder patients, we found perinatal asphyxia to be significantly associated with smaller left amygdala volume. The amygdala is involved in emotion processing and regulation, disturbances of which are core features of bipolar disorder.
Altered amygdala function related to emotion-processing tasks has repeatedly been reported in functional MRI studies of patients with bipolar disorder.
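As a rough illustration of the kind of adjusted model implied above, the following sketch regresses a subcortical volume on perinatal asphyxia while controlling for age, sex, ICV, and lithium use. It is not the authors’ analysis; the data file, variable names, and coding are assumptions.

```python
# Illustrative sketch only: linear model of left amygdala volume on
# perinatal asphyxia, adjusted for age, sex, intracranial volume (ICV),
# and lithium use. All variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bipolar_mri.csv")  # hypothetical file

# asphyxia and lithium are assumed 0/1 indicators; volumes in mm^3.
fit = smf.ols(
    "left_amygdala_mm3 ~ asphyxia + age + C(sex) + icv + C(lithium)",
    data=df,
).fit()
print(fit.summary())

# A negative asphyxia coefficient would correspond to the smaller volumes
# reported in the text; p-values would then be corrected for multiple
# comparisons across the structures tested.
```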

Drug screening was conducted in conjunction with client-centered risk reduction counseling

Women were asked about the type of ATS used; frequency and route of use since their last visit; number of days of use in the past month; and use in the past five days, with specific questions including “today,” “yesterday,” and “two,” “three,” “four,” and “five” days ago. Urine toxicology testing was conducted to qualitatively screen for recent ATS, opiate, and cannabis use. Women were asked to void into prelabeled sterile collection cups in a private lavatory; the specimens were passed through a private window to the on-site laboratory for testing. The test included four strips, which yielded positive results for amphetamine and/or methamphetamine if either exceeded 1000 ng/mL; for opiates if morphine in urine exceeded 2000 ng/mL; and for cannabis if the concentration of 11-nor-Δ9-tetrahydrocannabinol-9-carboxylic acid exceeded 50 ng/mL. A positive amphetamine or methamphetamine screen was considered indicative of ATS use in the past 48 hours.

Overall, results suggest high validity of self-reported ATS use among FSW when compared with urine toxicology screening. In almost all cases where women reported no ATS use in the past two days, negative urinalysis corroborated self-report. The majority of participants with positive urine tests reported ATS use during the same detection period. However, only 81% of participants who reported ATS use had positive urine tests. One possible explanation of the low positive predictive value is that women in the study actually used ATS but in such small quantities that the urine tests failed to detect it. Since ATS is illegal and its purity is unknown, some women could have used less pure forms of ATS, which may not have been potent enough to be detected by urine testing. The NACD has reported that, among 151 pill samples of ATS tested, 25% of the samples had purities below 10%. Although the proportion of women self-reporting ATS use was slightly higher than the urine test results, these rates are not inconsistent, and concordance was nearly perfect.
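The strip cutoffs above amount to a simple thresholding rule, sketched below. The thresholds are taken from the text; the function and field names are hypothetical.

```python
# Minimal sketch of the cutoff logic described above; cutoffs from the
# text, all identifiers invented for illustration.

CUTOFFS_NG_ML = {
    "amphetamine": 1000,      # ATS strip: positive if amphetamine or
    "methamphetamine": 1000,  # methamphetamine exceeds 1000 ng/mL
    "morphine": 2000,         # opiate strip
    "thc_cooh": 50,           # 11-nor-Delta-9-THC-9-carboxylic acid
}

def screen(concentrations_ng_ml: dict) -> dict:
    """Return a positive/negative call per analyte for one specimen."""
    return {
        analyte: concentrations_ng_ml.get(analyte, 0) > cutoff
        for analyte, cutoff in CUTOFFS_NG_ML.items()
    }

specimen = {"methamphetamine": 1450, "morphine": 120, "thc_cooh": 62}
result = screen(specimen)
# ATS use in the past ~48 hours is inferred from either ATS analyte.
ats_positive = result["amphetamine"] or result["methamphetamine"]
print(result, ats_positive)
```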

Other studies have documented higher self-reported use compared with urinalysis results, leading to recommendations that multiple methods be used to assess drug use exposures. The high concordance between self-report and test results is suggestive of high internal validity of self-reported ATS use in our study population. Some differences were seen in the performance of self-report compared with urinalysis when examined by age, HIV status, and sex-work setting. Most notably, there was lower precision between positive self-report and urinalysis tests among younger women and among women working in entertainment or service settings. The lower PPV may relate to the lower prevalence of ATS use among these subgroups. We have previously shown that women working in entertainment and service sectors in Cambodia are less likely to use ATS than women working in brothels. Prevalence of ATS use among younger women is slightly lower, but not significantly so. Importantly, specificity was high overall, with subgroup analyses showing valid self-report of no ATS use in our sample. This is important for further studies of ATS exposure in this population, for public health surveillance, and potentially for intervention and implementation of drug prevention programs. The high validity of self-report may be associated with several factors. The women in this study were not reluctant to answer the survey questions or to take the test, as indicated by the high participation rate. This could be due, at least in part, to the fact that the participants were recruited by a known and trusted community-based agent, our collaborating partner, and were comfortable with the staff involved in data collection. Moreover, the women in the study knew that providing truthful responses about their drug use would not result in negative consequences or punitive action. This study had several limitations. Due to the small sample size and non-systematic sampling, our estimates lack precision and results may not be representative of all young women engaged in sex work in Phnom Penh or Cambodia. This is particularly true for the stratified analyses, where cell sizes were very small in some cases and prevalence of ATS use was lower.
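The validity figures discussed here (PPV, specificity, and related quantities) come from a standard 2 × 2 comparison of self-report against urinalysis. The sketch below shows the arithmetic; the counts are placeholders chosen only to reproduce a PPV near the 81% reported above, not the study’s actual data.

```python
# Hedged sketch: validity of self-report against urinalysis as the
# reference standard, from a 2x2 table. Counts are placeholders.

def validity_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """tp/fp/fn/tn count self-reports against urine test results."""
    return {
        "sensitivity": tp / (tp + fn),  # urine-positives who reported use
        "specificity": tn / (tn + fp),  # urine-negatives who denied use
        "ppv": tp / (tp + fp),          # reported use confirmed by urine
        "npv": tn / (tn + fn),          # reported non-use confirmed
    }

# Example: a PPV of 0.81 would arise from, e.g., 81 reports of use with
# positive urine (tp) and 19 with negative urine (fp).
print(validity_metrics(tp=81, fp=19, fn=5, tn=200))
```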

Poor recall may have contributed to some discordance, including the relatively low PPV found overall. Approximately one in five women incorrectly reported recent ATS use. Recall of ATS use could be affected by recent ATS use and its side effects, including sleep deprivation and confusion; it is unknown whether this would result in over- or under-reporting. Since women were all informed about the testing as part of the informed consent process and ongoing study-procedure education, some women may have over-reported use for the periods about which they were queried. Moreover, urine toxicology tests are not perfectly accurate. Although urinalysis is widely accepted as a “gold standard” for substance use validation, exclusive reliance on such results does not necessarily improve validity because of problems with false negatives. Many studies comparing self-report, urine, and hair testing suggest that hair analyses detect higher rates of recent drug use than either urine tests or self-reports. Various authors suggest multi-modal testing for the most accurate results. Despite these limitations, our results suggest a high level of concordance between self-reported ATS use and urine toxicology results in this group of women. Results indicate a high prevalence of ATS use among FSW, who are also at elevated risk of HIV and other sexually transmitted infections. There are few, if any, community-based options for ATS users in Cambodia. The finding that self-report, especially specificity, is valid among young FSW is important because of its potential utility in surveillance as well as drug prevention and intervention programs in this population. There is a significant need for evidence-based prevention and drug treatment resources in Cambodia, potentially including cognitive behavioral therapy, contingency management, and possibly new pharmacotherapies to reduce ATS use. The forthright self-reporting of drug use by women participating in this study shows that, in a safe and nonpunitive setting, accurate disclosure of drug use is possible. These findings, which are consistent with other studies showing high validity of self-reported drug use, may also be relevant to other vulnerable populations in Cambodia reported to have high rates of ATS use and who may also be in need of interventions, including children, young adults, and men who have sex with men.

Indeed, with escalating manufacture and use of ATS throughout Southeast and East Asia, and in consideration of the need for expanded surveillance of drug use to more accurately inform public health and policy responses, self-reported use may be a reliable data collection method. For surveillance, research, and health-care settings, it is important that providers and others address drug-related health issues in a nondiscriminatory manner and without punitive consequences in order to accurately assess and effectively address health and safety issues in high-risk populations.

Brain imaging research reveals that developing human brain systems undergo continuous biological and functional change, at multiple scales, well into adulthood. This long arc of development reflects a unique genome apparently selected to equip us with the capacity to develop a complex model of the world and revise it continuously through intelligent adaptation to the changing environment. It follows that variability in human phenotypes will emerge during development in part because of variation in individual genomes, but also, disproportionately, due to variation in physical and social environments and gene–environment interactions. Previous studies of the course of mental and substance use disorders, and also of academic and workforce disengagement, for example, have highlighted the pivotal role of adolescence in the trajectories toward these functional outcomes. In studies of this important stage of development, multiple factors have been associated with adverse outcomes in youth, including genetic variation, attributes of the environment, individual experiences, and behavioral traits of the youth themselves. However, when these outcomes emerge gradually through dynamic interaction between gene and environment, retrospective reconstruction of the causal events is extremely challenging and can be misleading. Prospective, longitudinal studies of individuals developing in different environments have the potential to reveal the dynamics that lead to diverging trajectories; but only recently has it become possible to access non-invasively some of the factors known to play important roles in these outcomes, such as genomic and epigenetic variation, biological development of the brain, and individual experiences and exposures. Now, however, with new noninvasive technologies in hand, and considering the gravity of the educational, health, and mental health problems we face as a society, human developmental scientists are obliged to create the data resources from which evidence-based models of emerging behavioral phenotypes can be constructed. Effective interventions to prevent or mitigate negative outcomes depend critically on such large-scale data and new models.

Although it is not yet possible to predict risk for adverse behavioral and mental health outcomes in individuals accurately, the existing evidence implicates a number of plausible modifiers of risk, and leaves little doubt that the environmental factors contributing to the disparities will be many and varied. Similarly, although there is strong evidence that genetic factors play a role, it is likely that the outcomes will be influenced by multiple genetic variants, most of which are likely to be of individually modest effect. In other words, the genetic architecture of a risk phenotype is likely to be very complex.
Finally, it is increasingly evident that the impact of environmental and experiential factors on specific developmental outcomes will differ as a function of the genetic variables, and may also differ as a function of culture and family structure. With this study, and others like it, we enter a new era in human behavioral neuroscience, which has been labeled population neuroscience. Since some outcomes of interest are relatively uncommon in the general population, it is necessary to observe a large number of developing youth in order to identify those factors and interactions that best distinguish these outcomes.

For these reasons, large-scale, high-dimensional, longitudinal data resources are urgently needed by the scientific community, and these cannot be acquired without broad collaboration and careful harmonization of key data elements. The ABCD Study represents a major commitment by a consortium of researchers to create such a broad collaboration, focusing on the behaviorally critical, and biologically complex, period of development surrounding adolescence.

The ABCD Study shares the entire evolving data resource with the research community as soon as is practicable, as a means of accelerating progress in the field. Unprocessed brain imaging data are shared almost continuously through the National Institute of Mental Health Data Archive, as part of the “fast-track” sharing arm of the study. In addition, updated, cumulative sets of curated data, along with the workflows used to produce the derived data, are shared in annual versioned releases. ABCD is therefore a significant contributor to the data resources that support the new era of “big data” in biomedical research, and it complements most other studies by increasing the depth of phenotyping and adding important prospective developmental data. To ensure that the accumulated data are of acceptable quality, the ABCD consortium must establish robust data review procedures and closely monitor the quality of all types of data, as well as ensure that the protocols provide both stable construct validity over time and the modifications necessary as the cohort matures. Furthermore, the consortium attempts to adapt to emerging improvements in behavioral phenotyping methods that could enhance the study, and to identify other relevant data streams, for example, of environmental factors, that can be integrated temporally and geographically with ongoing assessments of the participants. Thus, new data types and new computational workflows are expected to appear in future data releases. The ABCD Study, more than most similar studies, will push the envelope with an aggressive timeline of data sharing and reduced barriers to access, with the remaining restrictions designed primarily to protect the privacy of participants and maintain records of the use of the data. Moreover, the sharing of associated workflows and specific algorithms will provide additional value to the larger scientific community beyond that of the data themselves. This new model will inevitably create challenges for conventional practices of scientists, reviewers, and editors, as multiple attempts to answer similar scientific questions with the same data will be under way almost simultaneously by independent investigators and groups.