
Acceptability ratings were collected within the app and within the post-treatment survey

The BCI crime lab data, while imperfect, is to the best of our knowledge the best available indicator of the content of the illicit drug market in Ohio. However, the primary limitation of our analysis, and of research investigating illicit drug markets in general, is that the data does not provide a random sample of available and consumed illicit drugs, and the demand for illicit drugs is not directly observable. In addition, the crime lab data only indicates whether a sample tests positive for particular drugs, and many samples test positive for more than one drug. There is no information about the amount of each detected substance or the proportion of the sample made up of each substance. One may be concerned that the estimates are biased by the BCI labs changing what they test for based on why people are dying. If this were the case, then the detection of a drug may be confounded by the timing of testing for that particular drug. For example, regarding the finding that the detection of carfentanil is positively correlated with overdose deaths, one may worry that the drug could have been around for some time, just not tested for, and that the labs only started testing for carfentanil after people started dying from it. However, the specific substances that the BCI labs tested for did not change over the time period, so the positive tests should be interpreted as finding the actual substance, not finding it conditional on the timing of the lab testing for it. That is, the only fentanyl analog found in 2015 was acetyl fentanyl, but this was not because the many other fentanyl analogs were not being tested for at that time. Additionally, even though we control for county and month fixed effects, county-specific linear time trends, and several relevant time-varying county-level variables, there is the possibility of bias from county-specific unobserved omitted variables that are time-varying in a non-linear way, such as a sudden county-specific increase in demand for opioids.
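Although the paper's exact specification is not reproduced in this excerpt, the controls listed above suggest a fixed-effects regression of roughly the following form (a schematic sketch; the variable names are illustrative assumptions, not the authors' notation):

\[
\mathrm{Deaths}_{ct} = \beta\,\mathrm{Positive}_{ct} + \gamma^{\top} X_{ct} + \alpha_c + \delta_t + \lambda_c \cdot t + \varepsilon_{ct}
\]

where $\mathrm{Deaths}_{ct}$ is overdose deaths in county $c$ and month $t$, $\mathrm{Positive}_{ct}$ measures BCI samples testing positive for a given synthetic opioid, $X_{ct}$ collects the time-varying county-level controls, $\alpha_c$ and $\delta_t$ are county and month fixed effects, and $\lambda_c \cdot t$ are the county-specific linear trends. The concern raised above is precisely that a non-linear county-specific shock (e.g., a sudden local surge in opioid demand) is absorbed into $\varepsilon_{ct}$ and can bias $\hat{\beta}$.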

Another potential problem in interpreting the estimates is reverse causality: changes in the composition of the drug market as measured by changes in crime lab positive tests may be affected by contemporaneous overdose deaths even after including all of the controls. It does not seem plausible that an increase in overdose deaths would cause an increase in the adulteration of heroin with deadlier synthetic opioids. However, it could be the case, for example, that law enforcement purposefully targets certain types of drug crimes and not others based on the number of overdose deaths. That is, law enforcement intensity, rather than changes in the underlying illicit drug market, could be the unobserved variable driving the positive correlation between fentanyl/carfentanil/fentanyl analog crime lab tests and overdose deaths. Another possibility is that courts may have become more lenient towards specific types of drug crimes, e.g., via an expansion of drug courts that reduced the need for labs to test specific drugs. Given that we cannot directly observe county-specific changes in law enforcement priorities or the criminal justice system, these unobservables may be an underlying cause of the estimated relationship between BCI lab tests and overdose deaths. Other unobservables, such as a sudden increase in the population of opioid users, are also potential reasons for the positive correlation between synthetic opioid tests and overdose deaths, so we are hesitant to conclude that the changing illicit opioid market as measured by crime lab tests is directly causing an increase in overdose deaths. Having said that, as mentioned above, the strong correlation of overdose deaths with BCI crime lab tests finding synthetic opioids makes the lab tests a valuable resource for an early warning system regardless of the true causal relationship. The rise in fentanyl-adulterated or fentanyl-substituted heroin represents a significant shift in the risk environment for people who inject drugs (PWID).

The best evidence we have had of the increasing danger of fentanyl and other synthetic opioids is the increasing number of deaths related to these drugs. We fill some of the gap in our understanding of how synthetic opioids contribute to a worsening risk environment for heroin users by providing new evidence that the illicit drug market has been changing rapidly in Ohio, which is likely a major factor in the recent surge in overdose deaths. The strong statistical relationship between overdose deaths and crime lab tests for synthetic opioids provides evidence for making this information publicly available as quickly as possible on an ongoing basis. Surveillance of the drug supply, at both an intimate and a mass scale, may provide a way to alter the risk environment. As heroin has become increasingly contaminated with synthetic opioids, people change their behavior to reduce risk. For example, there is evidence that if PWID are provided with fentanyl test strips, they take a variety of measures to decrease their risk of overdose. Providing timely information about the contents of seized drugs at the local city or county level, a relatively low-cost intervention, so that people can respond to location-specific changes in the risk of encountering evolving synthetic opioids, is another promising way to help PWID take steps to reduce their chances of dying. Furthermore, the data can be used by harm reduction services, first responders, and law enforcement to respond more quickly to emerging spikes in overdose deaths. In addition, the data can alert us to new and evolving trends in the illicit drug market, such as changes in polydrug use and the recent substantial shift in the crime lab data towards methamphetamines. Our findings are critically important from a policy perspective. Ohio's experience with carfentanil in particular, with a surge in deaths followed by a quick disappearance, should cause alarm for other states. Current public detection systems are not up to the task of responding in a timely manner. Rhode Island, for example, set up a website showing where overdoses are occurring to help provide the public with more information about the opioid crisis.

However, the data is only updated biannually, with a significant time lag and little information that could help change behavior or efficiently redirect harm reduction services to counter the sudden appearance of a new, deadlier opioid. Providing crime lab data quickly could provide a real opportunity for a rapid response to a rapidly changing risk environment.

Misuse of substances is common, can be serious and costly to society, and often goes untreated due to barriers to accessing care. Globally, 3.5 million people die from alcohol and illicit drug use each year. The disease burden of alcohol and illicit drug addiction is highest in the United States. Over 20 million Americans had a substance use disorder (SUD) in 2018: 73% had an alcohol use disorder, 40% had an illicit drug use disorder, and 13% had both. Approximately half of Americans with an SUD had a co-occurring mental illness. Treatment of depression and anxiety, the most common psychiatric comorbidities among patients with SUDs, may reduce craving and substance use and enhance overall outcomes. In 2018, less than 1 in 5 individuals with an SUD received addiction treatment. Alcohol and illicit drug misuse and addiction cost the United States over US $440 billion annually in lost workplace productivity, health care expenses, and crime-related costs. Potential effects on individuals include an array of physical and mental health problems, overdose, trauma, and violence. Web-based interventions and digital health apps may reduce or eliminate common, significant barriers to traditional SUD treatment. Preliminary evidence suggests that digital SUD interventions affect substance use behavior and have the potential to reduce the population burden of SUDs. To date, most digital SUD interventions have been delivered on a web platform rather than via mobile apps. The widespread use of smartphones makes app-based intervention delivery a viable and scalable medium. In 2019, about 8 out of 10 White, Black, and Latinx adults owned a smartphone. Although lower-income adults were less likely to own a smartphone than higher-income adults, they were more likely to rely on smartphones for internet access. In a 2015 survey, 58% of mobile phone owners reported downloading a health app. Texting is the most widely and frequently used app on a smartphone, with 97% of Americans texting at least once a day. Automated conversational agents can deliver a coach-like or sponsor-like experience, yet they do not require human implementation assistance for in-the-moment treatment delivery. As recent meta-analytic work suggests, conversational text-based agents may increase engagement and enjoyment in digitized mental health care, whereas most general mental health care apps face difficulty sustaining engagement and have high dropout.

Conversational agents can provide real-time support to address substance use urges, unlike traditional in-person frameworks of weekly visits. The scale potential of conversational agents is unconstrained, immediate, and available to users in an instant. Being nonhuman based also reduces perceived stigma. A study found that people were significantly more likely to disclose personal information to artificial intelligence when they believed it was computer- rather than human-monitored. Users can develop a strong therapeutic alliance in the absence of face-to-face contact, even with a nonhuman app. Digital environments can promote honest disclosure due to greater ease of processing thoughts and reduced risk of embarrassment. Finally, although conversational agents can present in different modalities, including text, verbal, and animation, preliminary research on modality for psychoeducation delivery specifically found that text-based presentation resulted in higher program adherence than verbal presentation. Evidence for conversational agent interventions for addressing mental health problems is growing quickly and appears promising with regard to acceptability and efficacy. Developed as a mental health digital app, Woebot is a text-based conversational agent available to check in with users whenever they have smartphone access. Using conversational tones, Woebot is designed to encourage mood tracking and to deliver general psychoeducation as well as tailored empathy, cognitive behavioral therapy-based behavior change tools, and behavioral pattern insight. Among a sample of adults randomly assigned to Woebot or an information-only control group, Woebot users had statistically and clinically significant reductions in depressive symptoms after 2 weeks of use, whereas those in the control group did not. Engagement with the app was high. However, the efficacy of conversational agents for treating SUDs remains unknown. Woebot's app-based platform and user-centered design philosophy make it a promising modality for SUD treatment delivery; it offers immediate, evidence-based, tailored support in the peak moment of craving. An informal poll of Woebot users indicated that 63% had interest in content addressing SUDs; 22% of surveyed users reported having 5 or more alcoholic drinks in a row within a couple of hours, and 5% endorsed using nonprescription drugs. Although the efficacy of automated conversational agent digital therapeutics for SUDs is still untested, such products are commercially available, and few consumers are aware that the products lack evidence. This study aims to adapt the original Woebot for the treatment of SUDs and to test its feasibility, acceptability, and preliminary efficacy in a single-group pre-/post-treatment design. In a single-group design, we examined within-subject changes in self-reported substance use behavior, cravings, confidence to resist urges to use substances, mood symptoms, and pain from pre- to post-treatment. Intervention engagement data were collected from the Woebot app during the 8-week treatment period. The study procedures were approved by the Institutional Review Board of Stanford Medicine. Participants were recruited via the Woebot app, social media, Craigslist, and Stanford staff and student wellness listservs. In addition, study flyers were posted in the San Francisco Bay Area, and email invitations were sent to participants from previous studies.
Recruitment materials included the URL of a web page describing the study for people with substance use concerns. Informed consent was required to screen for eligibility. Those who screened as eligible were asked to provide informed consent for participation in the study. Inclusion criteria were: any gender; aged 18 to 65 years; residing in the United States; screening positive on the 4-item Cut down, Annoyed, Guilty, Eye-opener-Adapted to Include Drugs (CAGE-AID); owning a smartphone for accessing Woebot; availability for the 8-week study; willingness to provide an email address; and English literacy.

Co-use of multiple substances may influence the relationship between alcohol use and neural integrity

Frontal and temporal cortical thinning may predict increased vulnerability to the development of adolescent depression. In the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA) sample of 692 adolescents without a history of depression, the 101 youth who transitioned into depression were found at study baseline to have thinner cortices in the superior frontal cortex, precentral and postcentral regions, and superior temporal cortex, beyond effects attributable to age and sex. Childhood trauma and post-traumatic stress symptoms have been shown to confer increased risk for adolescent and adulthood alcohol use disorder (AUD), mental illness, and physical health problems. Youth with trauma exposure showed thinner frontal cortices, and those with chronic post-traumatic stress disorder had smaller orbital frontal cortices and less superior posterior cortical and cerebellar gray matter volume. These observations indicate that trauma may be associated with structural brain aberrations. NCANDA has also examined the relationship between childhood trauma and subsequent adolescent alcohol use. In a sample of 392 NCANDA participants, adverse childhood event history was linked to greater self-reported executive dysfunction spanning four annual follow-ups. Greater childhood trauma also was linked to less connectivity in sensorimotor and cognitive control networks at baseline. This reduced connectivity explained the relationship between executive dyscontrol and subsequent increased frequency of adolescent binge drinking. Sleep patterns change substantially during adolescence and emerging adulthood. Lack of sleep, going to sleep relatively late, and large weekend-weekday sleep differences are all risk factors for alcohol use in adolescents and young adults. Similarly, in the NCANDA sample, sleep difficulties in adolescence predicted later substance use problems.

The reverse has also been seen, with acute and chronic alcohol intake altering sleep structure and electroencephalography patterns in older adolescents and adults. NCANDA will continue to longitudinally examine whether these changes remain evident into adulthood and how alcohol use influences sleep neurobiology. For example, during a spatial working memory task, adolescents with co-occurring AUD and cannabis use disorder showed less inferior frontal and temporal neural activation but a greater medial frontal response compared to adolescents with AUD alone. Co-use of alcohol with cannabis also may adversely influence executive functioning. Given the high rates of co-occurring alcohol and other substance use during adolescence, future well-powered studies will benefit from detailed analyses of various combinations of substances of abuse on neural and neurocognitive outcomes. In adults with AUD, improvements in attention and concentration, reaction time, and memory are generally seen after 2 to 8 weeks of abstinence; however, executive functioning, processing speed, and visuospatial and verbal skills appear more resistant to recovery, and spatial processing deficits may persist for years. Younger adults tend to recover more quickly and completely than older adults. As mentioned previously, preliminary evidence suggested that adolescent heavy drinkers showed greater response to alcohol cues, more emotional reactivity and poorer distress tolerance, and poorer visuospatial performance compared with adults. These effects remitted after a month of abstinence, indicating that some deficits are linked to alcohol intake and may be transitory. However, executive dysfunction and negative mood states did not remit within 4 weeks of abstinence, suggesting that these differences may have predated the onset of heavy drinking or may take more time to recover. As reported by Infante et al., cortical gray matter volume decreases were greater in proximity to reported drinking episodes in a dose-response manner, suggesting a causal effect and raising the possibility that normal growth trajectories may recover with alcohol abstinence.

However, other studies have suggested that impaired visuospatial functioning following adolescent AUD persisted even after reduced levels of use. Longitudinal studies with large, diverse, representative samples of youth and a range of detailed measures are key to helping understand the behaviors that convey disadvantages to adolescent and young adult development and outcomes. To date, a handful of large-scale multisite studies are being conducted to gain insight into the consequences of adolescents transitioning into and out of substance use. These include the largest long-term study of brain development in the United States, the Adolescent Brain Cognitive Development Study, which is currently underway; NCANDA; the IMAGEN study in Europe; the Pediatric Imaging, Neurocognition, and Genetics study; and the Lifespan Human Connectome Project study. NCANDA has already been able to confirm impressions from prior smaller studies that adolescent heavy drinking appears linked to accelerated gray matter decline, disrupted functional connectivity, and reduced cognitive performance. Determining the degree to which these effects remit or persist with alcohol abstinence or reduced use will be a key next step in this line of work.

The exact pathophysiology and aetiology of the severe mental disorders schizophrenia and bipolar disorder remain unknown. They have been hypothesized to be part of the same psychosis continuum, since, in addition to overlapping symptoms, they share some genetic underpinnings, cognitive impairments, and brain anatomical abnormalities. Whereas pre- and perinatal complications have been established as risk factors for schizophrenia, the evidence for an association between pre- and perinatal adversities and the risk for bipolar disorder is less consistent. Some authors have argued that in genetically susceptible individuals, the absence of pre- and perinatal complications favours the development of bipolar disorder whereas their presence favours the development of schizophrenia. Nevertheless, some epidemiological studies suggest that pre- and perinatal factors may increase the risk for bipolar disorder and affective psychosis.

Hultman et al. have demonstrated an association between specific obstetric complications (OCs) and affective psychosis; increasing birth weight was found to associate linearly with decreased risk for affective disorders; recently, an increased risk for bipolar disorder in children born pre-term [odds ratio 2.7, 95% confidence interval 1.6–4.5] was reported. Accordingly, neurodevelopmental disturbances and/or pre- and perinatal trauma may also be of importance for the development of bipolar disorder. Magnetic resonance imaging studies have demonstrated the existence of neuroanatomical abnormalities in bipolar disorder, the most consistent finding being enlarged ventricular volumes. The results for other brain structures differ among studies, possibly due to low sample sizes and confounding factors, such as lithium medication. Recent meta-analyses report that lithium-naive patients with bipolar disorder have smaller hippocampal and amygdala volumes as compared with patients who receive lithium medication and with healthy controls. There is also some evidence supporting more pronounced brain abnormalities in patients with psychotic bipolar disorder than in patients with non-psychotic disorder. The mechanisms underlying the structural brain abnormalities observed in bipolar disorder are not completely known. Post-mortem studies have demonstrated reduced neural somal size and neuron numbers in the amygdala, and reduced numbers of parvalbumin- and somatostatin-expressing interneurons and reduced pyramidal cell size in the hippocampus of bipolar disorder patients. These neuronal changes may have a developmental origin, given that animal models have demonstrated long-term neuronal loss in the amygdala and reduced pyramidal cell size in the hippocampus following pre- and perinatal hypoxia. Moreover, smaller hippocampal volumes and larger ventricular volumes have been demonstrated in schizophrenia patients with a history of pre- and perinatal hypoxia. Enlarged ventricles have also been observed in schizophrenia patients who suffered prolonged birth. In schizophrenia, smaller hippocampal volumes have been reported following OCs in general, and severe OCs have been reported to interact with the hypoxia-regulated GRM3 gene to affect hippocampal volume. Smaller hippocampal volume and reduced grey matter have been observed in otherwise healthy subjects born very preterm. Smaller hippocampal volume has also been reported in healthy adolescents following perinatal asphyxia, and long-term reductions of grey matter in the amygdala have been observed in children with neonatal hypoxia–ischaemia.

Hence, it is plausible that pre- and perinatal complications affect the brain structure abnormalities associated with bipolar disorder. The aim of the current study was to investigate the relationship between pre- and perinatal trauma and brain structure volumes in patients with bipolar disorder. Specifically, we studied the relationship between two measures of pre-/perinatal trauma [i.e., an established composite severe OCs score comprising complications occurring throughout the whole pre- and perinatal period, and a diagnosis of perinatal asphyxia, a distinct complication shown by animal models to cause long-term brain abnormalities], and three brain volumes either previously reported to be associated with OCs in schizophrenia or associated with bipolar disorder. We hypothesized that perinatal asphyxia and severe OCs would be associated with smaller hippocampus and amygdala volumes, and with larger ventricular volumes, in patients with bipolar disorder, and that the associations would be stronger in those with psychotic than in those with non-psychotic disorder. This latter prediction is based on the findings of more pronounced brain abnormalities and cognitive impairments in the psychotic than the non-psychotic form of bipolar disorder. In addition, psychotic bipolar disorder may be more similar to schizophrenia, a disorder in which patients with OCs show more pronounced brain abnormalities than patients without such complications. To our knowledge, this is the first study to explore the association between hypoxia-related OCs, in particular perinatal asphyxia, and neuroanatomy in bipolar disorder. We found that perinatal asphyxia and severe OCs were related to smaller amygdala and hippocampal volume in patients with bipolar disorder. Whereas patients with psychotic bipolar disorder showed reduced amygdala volume following perinatal asphyxia, patients with non-psychotic bipolar disorder showed reduced hippocampal volume following perinatal asphyxia and severe OCs, after adjustment for multiple comparisons and controlling for the effects of age, sex, intracranial volume (ICV), and medication use. To the best of our knowledge, this is the first study to investigate associations between hypoxia-related pre- and perinatal complications and brain MRI morphometry in bipolar disorder. Our findings indicate that perinatal hypoxic brain trauma is of importance for adult brain morphology in bipolar disorder, and may thus be a neurodevelopmental factor of importance to disease development. This concurs to some extent with large-scale epidemiological studies that report lower birth weight, specific OCs, and premature birth to increase the risk for bipolar disorder or affective psychosis. Indeed, in a subject sample overlapping with the current study, we have previously demonstrated lower birth weight to correlate with smaller brain cortical surface area in patients across the psychosis spectrum as well as healthy controls. The results from the current study expand on this by demonstrating distinct associations between specific hypoxia-related pre- and perinatal complications and subcortical structures known to be vulnerable to perinatal hypoxia, in patients with bipolar disorder. As such, the current findings to some extent support the speculation by Nosarti et al. that there may exist a neurodevelopmental subtype of bipolar disorder. Within the whole group of bipolar disorder patients, we found perinatal asphyxia to be significantly associated with smaller left amygdala volume.
The amygdala is involved in emotion processing and regulation, disturbances of which are core features of bipolar disorder. Altered amygdala function related to emotion-processing tasks has repeatedly been reported from functional MRI studies in patients with bipolar disorder. Emotional dysregulation and impaired stress response, other important features of bipolar disorder, may be caused by disturbances in corticotropin metabolism and dysfunction in the hypothalamic–pituitary–adrenal axis. Interestingly, a recent rat model study demonstrated significant long-term loss, shrinkage of cell soma size, and axonal degeneration of corticotropin-releasing factor-positive neurons in the amygdala following neonatal hypoxia–ischaemia. These changes were associated with increased locomotor activity and exploratory behaviour, behavioural abnormalities that are also observed in patients with bipolar disorder. Moreover, increased anxiety has been reported following perinatal asphyxia and is associated with dopamine-innervated neurocircuitries in the amygdala, among other structures. Dopaminergic pathways are particularly vulnerable to perinatal asphyxia, and are also involved in the pathophysiology of psychotic disorders. Taken together, it seems biologically plausible that perinatal asphyxia is associated with long-term alterations in the structure of the amygdala, as our results suggest. Such alterations may be functionally associated with the distinct behavioural abnormalities observed in bipolar disorder. Instead of confirming our initial hypothesis that the associations between perinatal asphyxia/severe OCs and brain volumes would be stronger in patients with psychotic than non-psychotic bipolar disorder, the results indicate different patterns of associations in psychotic versus non-psychotic bipolar disorder. Within a psychosis continuum, psychotic bipolar disorder would be considered closer than non-psychotic bipolar disorder to schizophrenia. In schizophrenia, smaller hippocampal volumes have been associated with pre- and perinatal trauma. Surprisingly, we found no associations between severe OCs or perinatal asphyxia and smaller hippocampal volume in patients with psychotic bipolar disorder, but we did find such associations in patients with non-psychotic bipolar disorder. The biological validity of this association is, nevertheless, supported by the literature. First, animal models have demonstrated the pyramidal neurons within the hippocampus to be sensitive to prenatal hypoxia. Second, in the human neonate, hippocampal neurocircuitries are reported to be particularly vulnerable to hypoxia, and, third, healthy adolescents who have suffered perinatal asphyxia exhibit reduced hippocampal volumes.
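The adjusted analyses described above imply a covariate-adjusted model of roughly this form (a schematic restatement of the adjustments listed above, not the authors' reported equation):

\[
\mathrm{Volume}_{i} = \beta_0 + \beta_1\,\mathrm{Asphyxia}_{i} + \beta_2\,\mathrm{Age}_{i} + \beta_3\,\mathrm{Sex}_{i} + \beta_4\,\mathrm{ICV}_{i} + \beta_5\,\mathrm{Medication}_{i} + \varepsilon_{i}
\]

where $\mathrm{Volume}_{i}$ is patient $i$'s hippocampal or amygdala volume, $\mathrm{Asphyxia}_{i}$ indicates perinatal asphyxia (or is replaced by the severe OCs score), and the remaining terms are the age, sex, ICV, and medication-use adjustments.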

The data utilized in this study come from early waves of the Add Health study

Through the use of the FAAH inhibitor URB, the action of AEA is prolonged, interfering with the expression of the conditioned gaping response to a context paired with illness, as a model of anticipatory nausea in rats. As URB administration modulates the EC system, it may be preferable as a therapeutic to exogenously administered cannabinoids in the alleviation of conditioned nausea.

The concurrent or sequential usage of multiple drugs during adolescence is a critical public health problem, spawning a large literature focusing on whether usage of one substance leads to usage of others. The study of interdependence in adolescent substance use yields insight into potential patterns regarding which drugs are used sequentially or concurrently. As these risk behaviors co-occur and accumulate over time for certain individuals and social groups, there is potential for risk and negative sequelae to concentrate among concurrent users, making them a high-risk population that may be in need of prioritized and targeted intervention. In addition, to the extent that use of one substance affects the usage of another among adolescents, accounting for this interdependence in substance use is important, as it can minimize the possibility of obtaining spurious relationships and possibly biased model estimates. While some studies indicate that cigarette smoking is a strong predictor of the concurrent or subsequent usage of alcohol and marijuana, other studies find that alcohol use increases the likelihood of cigarette smoking. In other research, a mutually reinforcing relationship was detected between adolescent alcohol use and smoking, and between cigarette smoking and marijuana use during adolescence and young adulthood.

In contrast, other research found that previous use of alcohol did not predict the initiation of marijuana use. Existing studies also indicate that adolescent users of marijuana frequently smoke cigarettes, either as a substitute when marijuana is scarce, or as a means of counteracting the sedating effects of marijuana. The complementary usage of tobacco and marijuana in adolescence may contribute to eventual dependence on nicotine. The complementary usage of these two substances might be the greatest public health consequence of marijuana use in adolescence. There may also be complementarity in the usage of marijuana and alcohol. The observed high correlation of alcohol and marijuana use may be due to a shared genetic risk for drug use. On the other hand, this observed correlation may be due to substituting one substance for another as a means of minimizing marijuana withdrawal symptoms. In one study, daily marijuana users who underwent a period of abstinence drank more often if they had a previous diagnosis of alcohol abuse or dependence. The importance of peers in the transmission of substance use behavior within adolescent friendship networks has given rise to a body of literature which focuses on how social networks can spread substance use behavior. These studies focus on how the co-evolution of adolescent friendship networks and substance use gives rise to peer influence and selection effects within adolescent networks regarding a specific substance use behavior. Peer influence is a type of social influence, and the latter has been theorized from numerous points of view, including the Dynamic Social Impact Theory, which states that individuals will become more like those who are socially proximal and, as a result, their attributes will be correlated. Given the importance of concurrent or sequential usage of cigarettes, alcohol, and marijuana, the purpose of this study is to examine the co-evolution of use of these substances within the dynamic landscape of adolescent friendship networks, which are a primary socialization context for adolescent substance use.

A key methodological and theoretical challenge herein is that the context of peer networks must be taken into consideration when studying adolescents' interdependent substance use, because interpersonal association via peer influence or friendship selection likely shapes the concurrent or sequential use of substances. Not taking such peer network effects into account can result in biased estimates of the interdependence of substance use behaviors, or likewise, of the effects of network influence. Fig 1 displays a hypothetical, simple 2-person world in which the "true" model is given by the solid lines and the dashed lines show possibly spurious intra-personal effects of one substance's use on another. This figure is informed by the Dynamic Social Impact Theory, as we posit that person 1 will become more like his or her peer, person 2, because of peer influence via modeling, shared opportunities, social proximity, and the consolidation of attitudes and behaviors that may take place in adolescent friendship networks. This social process is captured in pathways from person 1 smoking to person 2 smoking, and likewise from person 1 drinking to person 2 drinking. Note that certain person-specific covariates affect an adolescent's usage of each substance: this will therefore lead to a correlation in usage across substances for the person. Not accounting for the across-person effects, shown as dotted lines (for example, how the smoking behavior of person 1 affects the smoking behavior of person 2 through a social influence or a selection effect), will result in these correlations being inappropriately captured by the dashed paths. Such correlations would be spurious, thus highlighting why it is critical to account for these network effects when studying concurrent substance use behavior (see the simulation sketch following this paragraph). Although we do not show them here, this figure could also represent pathways linking person 1 smoking and person 2 marijuana use, and analogously, person 1 drinking and person 2 marijuana use. These pathways may be a result of normative processes. The subjective norms construct from the Theory of Planned Behavior, which is the composite of the belief about whether most people approve or disapprove of a behavior and the corresponding motivation to comply with those important referents in one's social environment, informs these normative pathways in our model.
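To make the spuriousness argument concrete, the following minimal simulation (hypothetical, not from the study; it reduces the network to a single exogenous peer and uses a shared person-specific factor to stand in for the unmodeled pathways described above) shows how a cross-substance association can appear where no true intra-personal path exists, and largely disappears once the shared factor is accounted for:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # hypothetical adolescents

# A person-specific risk factor drives BOTH of the adolescent's behaviors.
risk = rng.normal(size=n)

# Peer behaviors (treated as exogenous in this toy setup).
peer_smoke = rng.normal(size=n)
peer_drink = rng.normal(size=n)

# "True" model: within-substance peer influence plus the shared risk factor;
# there is NO direct smoking -> drinking path for the focal adolescent.
smoke = 0.5 * peer_smoke + 0.7 * risk + rng.normal(size=n)
drink = 0.5 * peer_drink + 0.7 * risk + rng.normal(size=n)

# Naive cross-substance regression (ignoring the shared factor) finds a
# sizeable "interdependence" slope...
naive = np.polyfit(smoke, drink, 1)[0]

# ...which vanishes after residualizing both behaviors on the shared factor.
smoke_r = smoke - np.polyfit(risk, smoke, 1)[0] * risk
drink_r = drink - np.polyfit(risk, drink, 1)[0] * risk
adjusted = np.polyfit(smoke_r, drink_r, 1)[0]

print(f"naive slope:    {naive:.2f}")     # roughly 0.28, spurious
print(f"adjusted slope: {adjusted:.2f}")  # roughly 0.00
```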

Adolescents who smoke cigarettes may, through such normative beliefs, reinforce the use of other substances among their friends: as they display pro-substance-use norms, their friends may be motivated to comply with their attitudes and behavior, which would indirectly increase the friends' acceptability of using marijuana or alcohol. We are aware of just two existing studies that have simultaneously studied adolescent social networks and the use of more than two substances. One longitudinal study focused on smoking, drinking, and marijuana use in a sample of 129 Scottish youth, finding statistically significant peer influence effects on alcohol and marijuana use but not on smoking behavior; marijuana users did, however, smoke cigarettes more over time. The other study detected neither interdependent association effects nor peer influence effects on cigarette, alcohol, and marijuana use behaviors in a longitudinal study of a sample of US school students. Note that the first study utilized a relatively small sample and the second study is limited to two waves of data, thus diminishing the statistical power of each study. Finally, social influence may not necessarily have symmetric effects on the initiation and cessation of substance use. For example, Haas and Schaefer explored this idea in the context of smoking and found some evidence that influence may have a stronger effect on starting smoking behavior, but a weaker effect on stopping it. Although we do not have specific hypotheses regarding how such influence effects might operate for other substances such as alcohol or marijuana, we nonetheless test for this possible asymmetry in our analyses. Building on past studies focusing on concurrent or sequential substance use in adolescence, we examine the co-evolution of adolescent friendship network ties and whether there was interdependence in usage of cigarettes, alcohol, and marijuana among 3,128 adolescents in two large schools. We utilize three waves of social network data from the National Longitudinal Study of Adolescent to Adult Health (Add Health). Ecological models of human development informed the conceptualization of this study by situating adolescents in key social contexts exerting primary socialization forces, including peer selection, peer influence, and parental influences. This study is also informed by the Dynamic Social Impact Theory, which forms the basis for why youths' behaviors will be correlated. Lastly, normative constructs from the Theory of Planned Behavior guide our examination of the normative model pathways under study. The respondent record/information was anonymized and de-identified prior to analysis. This study was reviewed and granted approval under exempt review by the Institutional Review Board at the University of California, Irvine. This study does not employ human subjects directly, as our analyses utilize secondary data, which are de-identified. Written informed consent was given by participants for their answers to be used in this study. We construct separate samples for the two large saturation-sample schools: one suburban Northeast public high school referred to as "Sunshine High", and one rural Midwest public high school referred to as "Jefferson High".

Our data come from the Add Health In-School Survey, the wave 1 In-Home Survey, and the wave 2 In-Home Survey. The average time spans between wave 1 and wave 2 are 8.8 and 7.7 months for students in Sunshine High and Jefferson High, respectively. The average time spans between wave 2 and wave 3 are 10.9 and 11.1 months for students in Sunshine High and Jefferson High, respectively. We utilize the R-based Simulation Investigation for Empirical Network Analysis (RSiena) software package to estimate Stochastic Actor-Based (SAB) models. We specify each model with three behavior equations, in which we focus on how usage of one substance is influenced by the usage of the other two substances, along with one network equation, in which we model the network evolution in tie formation and dissolution among adolescents in the school. We estimated the model separately for each school. Besides the key mechanisms illustrated in Fig 1, we adopt a forward selection approach for each parameter via score-type tests. In the behavior equations, the linear and quadratic effects capture the time trend of each substance use behavior; peer influence effects are measured as the sum of the negative absolute differences between ego's and alters' behavior, averaged by ego's out-degree. Additional covariates such as in-degree, parental support, parental monitoring, race, and depressive symptoms are added given that they have been shown to be important covariates in the existing literature, and given that the results from score-type tests reject the null hypothesis that their parameters are zero. In-degree is important to test, given the debate in the existing literature about the importance of network centrality, or popularity, for explaining substance use. We also control for the effects of how ego's use of one substance is influenced by alters' use of the two other substances. In the network equation, we include endogenous network effects and homophily selection effects for each substance use behavior, as well as additional covariates such as race, gender, grade, and parental education, as the results from score-type tests suggest. 501 students in Sunshine High and 166 students in Jefferson High were 12th-graders at t1 and t2 and graduated at t3. These 667 students were treated as structural zeroes in the networks during the last wave. Due to a survey implementation error in Add Health, some adolescents could only nominate one female and one male friend at t2 and t3. We account for this with a limited-nomination variable in the network equation. Method of Moments estimation is used to estimate the behavior and network parameters in each model so that the target statistics in behaviors and networks can be most accurately calculated. We assess satisfactory model convergence with criteria of t statistics for deviations from targets and the overall maximum convergence ratio. A post hoc time heterogeneity test for the models found no evidence that the co-evolution of substance use behaviors and friendship networks was significantly different across the two time periods, providing no indication of estimation or specification problems. We also perform goodness-of-fit testing for key network statistics in both schools, and display the results in the S1 File. Besides the main SAB model for each school sample, we estimate ancillary models that test whether the interdependent effects are symmetric in increasing and decreasing substance use.
This is accomplished by differentiating the "creation" function and the "endowment" function in RSiena. This technique has been applied to explore the asymmetric peer influence effect on adolescent smoking initiation and cessation. The dependent network variable, friendship tie choice, is based on the question asking adolescents to nominate up to five female and up to five male best friends.
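The peer influence statistic described above, the sum of negative absolute differences between ego's and alters' behavior averaged by ego's out-degree, can be written as follows (one plausible rendering in standard SAB notation; not quoted from the authors):

\[
s_i(x, z) = \frac{1}{x_{i+}} \sum_{j} x_{ij}\left(-\left|z_i - z_j\right|\right)
\]

where $x_{ij} = 1$ if adolescent $i$ names $j$ as a friend (0 otherwise), $x_{i+} = \sum_j x_{ij}$ is $i$'s out-degree, and $z_i$ is $i$'s level of the substance use behavior; higher values of $s_i$ correspond to ego being more similar to his or her friends.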

The majority of participants had Medicaid coverage for NRT without a copay

The 1.5-hour training included a refresher on smoking cessation counseling and in-depth information on cessation medication options, how to use each product, and potential side effects. Cessation Champions provided onsite support for clients, including referring potentially interested clients to our study, and liaised with the study staff on behalf of clientele. They received a $50 gift card after completing the training and a $75 gift card upon study completion. The Phase 3 medication assistance program took place at the two pilot shelters between August 2020 and June 2021; intervention roll-out at the two sites was staggered by 6 months. At each site, Cessation Champions completed training prior to the roll-out. We partnered with a community pharmacy that had pharmacists who could counsel for smoking cessation, prescribe nicotine replacement therapy (NRT), and deliver medications on-site to the shelters. We recruited participants via word of mouth, flyers, and targeted outreach to known smokers by the Cessation Champion at that site. Participants were eligible if they were 1) at least 18 years old, 2) residents at one of the sites, 3) currently smoking at least 5 cigarettes per day, 4) interested in quitting within the next month, and 5) willing to use medications for smoking cessation. Upon enrollment, study staff placed a referral to the pharmacy using a secure online portal. Once the pharmacy received the referral, study staff facilitated an interaction between the pharmacist and the participant in which the pharmacist assessed smoking history, provided counseling, and determined appropriate NRT dosing. In most cases, pharmacists offered combination NRT with the long-acting patch and short-acting gum/lozenge, unless the participant requested a specific form of NRT.

Pharmacists then prescribed the NRT and arranged for delivery of NRT to the site within one week. Pharmacists had follow-up phone calls with some participants who wanted their prescription NRT dose tapered. We evaluated Phase 1 using process measures, including the number of shelters participating and the number of staff trained. In Phase 2, we evaluated the number of Cessation Champions trained. We did not ask shelter staff to complete questionnaires after their trainings in Phase 1 or Phase 2. In Phase 3, study staff administered an online questionnaire to resident participants at baseline, during 11 weekly follow-up visits, and at 3-month follow-up. Additionally, study staff kept informal notes during the implementation process in Phase 3 on barriers to and facilitators of obtaining medications; we report these as process measures. Participants received a $15 gift card for completing the baseline questionnaire, a $5 gift card for each weekly follow-up questionnaire, and a $20 gift card for the questionnaire at 3-month follow-up. We described sample characteristics and tobacco use at baseline using proportions for categorical variables and medians for continuous variables. We estimated cumulative proportions of quit attempts, encounters with study staff on smoking, use of NRT, type of NRT used, median number of days used, and reasons for not using. We used mixed effects Poisson and logistic regression models, accounting for repeated measures within participants, to examine factors associated with weekly cigarette consumption and quit attempts, respectively. We adjusted for age, gender, baseline time to first cigarette after waking, and baseline cigarette consumption as fixed effects in the model, and for past-week cigarette consumption, encounters with shelter staff on smoking, and use of NRT as random effects. Intra-subject correlation of repeated observations was accommodated using a random intercept for each subject.
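Schematically, the weekly consumption model described above takes a form like the following (illustrative notation, not the authors' exact specification):

\[
\log E\left[Y_{it}\right] = \beta_0 + \beta^{\top} x_{it} + u_i, \qquad u_i \sim N\left(0, \sigma_u^2\right)
\]

where $Y_{it}$ is participant $i$'s cigarette consumption in week $t$, $x_{it}$ collects the covariates listed above, and $u_i$ is the subject-specific random intercept that accommodates intra-subject correlation. The quit-attempt model is analogous, with the log link replaced by a logit link on the probability of a past-week quit attempt.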

Statistical analysis results are described using predicted counts and probabilities for interpretability. We conducted analyses in Stata 16. In this uncontrolled pilot study, we explored the feasibility of implementing a community pharmacy-linked smoking cessation program to improve access for people experiencing homelessness (PEH). We found that the program was feasible to implement and reduced cigarettes smoked per day. Onsite access to medications and encounters with staff about smoking were the primary factors associated with reduction in tobacco use. These findings highlight a role for interventions that increase shelter capacity to offer cessation services linked with community pharmacist-delivered interventions. Clinical practice guidelines for smoking cessation recommend behavioral counseling combined with pharmacotherapy, given that NRT effectiveness without smoking cessation counseling is limited. More frequent counseling encounters, regardless of clinician type, are positively associated with cessation. Consistent with these guidelines, we used a phased approach of building capacity to provide cessation counseling among shelter staff, followed by medication assistance for participants provided by pharmacists. In a previous capacity-building intervention for shelter staff, we found that training staff to provide cessation counseling was associated with fewer barriers and increased efficacy in delivering counseling. About half of the participants reported conversations with staff about smoking, and interactions with staff were associated with a 40% reduction in consumption. These findings suggest that fostering shelter staff counseling support for clients interested in smoking cessation is feasible and effective. Future research will query staff on their experience with counseling clientele on smoking cessation, which may influence counseling quality. Medication use in the past week was significantly associated with both a reduction in consumption and an increase in quit attempts. Most quit attempts among PEH are unassisted, highlighting the need to improve medication access.

On average, study participants used NRT four days per week, and over 50% reported using medications for more than 7 days in at least one week. Consistent with previous studies, the most common reasons for not using medications were concerns about access, side effects, and lack of efficacy. The primary barrier to receiving on-time delivery of medications for cessation was connecting community pharmacists with participants by phone. Despite study staff facilitating these interactions, pharmacists were often unable to provide point-of-care counseling, or participants had competing priorities at the time of referral, leading to delays in initiating treatment. Over 70% of PEH report having cell phones; however, inconsistent service limits their use. Future studies could consider providing participants with cell phones to facilitate communication with members of their healthcare and social services teams. Our study had limitations. The sample size was small and involved two shelters in a single city, limiting generalizability. Only one participant had quit at the end of the study; however, point prevalence abstinence was not a primary outcome in this study. We assessed tobacco use, quit attempts, staff encounters, and medication use using self-report. Future studies could verify these findings by assessing biochemically verified abstinence, as well as considering longer-term outcomes. To our knowledge, this study is the first to explore a smoking cessation care model for PEH in which community pharmacists partnered with shelters. The findings have implications for expanding access to cessation services at different service sites for PEH and suggest that a community pharmacy-linked model of cessation care can increase access to services and medications. Expanding access to cessation services is the first step to reducing tobacco use among PEH, a population disproportionately impacted by tobacco use that faces substantial structural barriers to receiving healthcare services.

Health professional shortage areas (HPSAs) are communities identified by the U.S. Health Resources and Services Administration in which there is a shortage of primary care health professionals. These shortages are accompanied by the absence of a consistent source of care, difficulty accessing care when needed, and a lack of outpatient preventative care, leading to increased hospitalizations. Multiple interventions have been attempted to increase access to care in HPSAs, including increased use of nonphysician providers. During the opioid epidemic, increasing access to naloxone furnishing has been viewed as critical in rural areas where opioid misuse is disproportionately high, including California's Central Valley. In the United States, pharmacists at community pharmacies are one of the most accessible points of care, with 90% of Americans living within 5 miles of a pharmacy. People seeking care have expressed interest in services at pharmacies not only because of ease of accessibility but also because of the availability of multilingual staff and extended hours that make it possible to access care on evenings and weekends. Previous studies have also shown that pharmacy-based care can extend services for patients in medically underserved rural areas to reduce inappropriate prescribing, improve disease management, and enhance medication adherence and knowledge.

In 2013, the California legislature passed SB 493, known as the Pharmacy Practice Bill, which expanded the role of pharmacists by giving them authority to furnish naloxone, hormonal contraception, nicotine replacement therapy, and travel medications, specifically prescription drugs and immunizations that are recommended by the Centers for Disease Control and Prevention to prevent or treat disease when travelling outside of the United States. California uses the term "furnish" to describe pharmacist-initiated prescription of medications. Expansion of pharmacist furnishing capabilities provides access to those in need, including people who use opioids. Past studies have sought to determine rates of pharmacist furnishing given its potential impact on access to care. However, these studies have focused on urban areas: 2 previous studies of pharmacist furnishing of naloxone in California sampled primarily urban pharmacies, and 3 previous studies on naloxone, hormonal contraception, and postexposure prophylaxis/preexposure prophylaxis furnishing were conducted in the San Francisco Bay Area only. As of the date of this study, there had been no prior research assessing furnishing rates in California's Central Valley, a largely rural area with a shortage of primary care physicians. However, understanding furnishing in these communities and those like them, particularly for naloxone, is critical given the disproportionate impact of the opioid epidemic in rural communities. For example, the age-adjusted rate of opioid-related overdose deaths in Fresno, one of the Central Valley's largest counties, increased by 46%, from 48.6 per 100,000 residents in 2019 to 71 per 100,000 residents in 2020. This study sought to address this gap in research by assessing the extent of pharmacist furnishing, with a focus on naloxone, in the Central Valley. Research focused on the Central Valley due to the high potential impact of furnishing to increase access to care. It first assessed the extent of naloxone furnishing through a phone survey, then identified barriers and facilitators to implementation through interviews with a subset of furnishing pharmacists identified in the phone survey. We expected that rates of naloxone furnishing would be lower in disproportionately rural Central Valley pharmacies than in the urban pharmacies evaluated in previous research, given the effects of high out-of-pocket costs in an area where people have lower incomes, and social stigma surrounding opioid use disorders in more politically conservative communities. The first step of data collection was a telephone survey of all pharmacies with the potential to furnish naloxone in the Central Valley, to identify overall furnishing rates. Four authors who were PharmD students, in collaboration with undergraduate researchers at the University of California Merced Nicotine & Cannabis Policy Center, first contacted all pharmacies that met inclusion criteria using the telephone number listed in the Board of Pharmacy license database. Using an existing screening question from previously published research on naloxone furnishing, upon initial contact an interviewer posed the question, "I heard that you can get naloxone from a pharmacy without a prescription from your doctor. Can I do that at your pharmacy?"19,20 Contact with each pharmacy was attempted up to 3 times.
To identify potential interview contacts for the second step of data collection, interviews of furnishing pharmacists, each person at a pharmacy who indicated that it furnished naloxone was asked whether a furnishing pharmacist at the store would be interested in being interviewed for the study. If a pharmacist expressed interest, they received a cover letter, consent forms to sign by email or fax, and a list of interview questions. Researchers scheduled a time to interview after receiving this written consent. Pharmacies that did not furnish naloxone were not asked for interviews, on the grounds that they would be unable to identify facilitators to furnishing naloxone at their store. Participants were interviewed in a semistructured manner using an interview instrument used in previously published research to study furnishing of other medications, modified to address naloxone. This instrument included a list of questions; however, each interview was conducted in a semistructured format that allowed for a natural flow of discussion and gave participants opportunities to provide additional information that may not have been specifically addressed in the prepared questions. Topics included the following: characteristics of the pharmacy and staff; description of the furnishing process; perceptions regarding the effectiveness, advantages, disadvantages, facilitators, and barriers to furnishing; whether respondents also furnished other medications; and recommendations for reproducibility or improvement.

A cross-index of identified information will be kept in a separate locked location

An independent Data and Safety Monitoring Board of external advisors will also meet prior to the start of the study, annually during enrolment and follow-up, and at trial end to review safety data. The risk of a breach of confidentiality will be handled by emphasizing that information obtained during assessments and laboratory sessions is confidential and will be used solely for research purposes. All records will be kept in a locked file cabinet and will be available only to research personnel who have been trained in human subjects' protection guidelines. In addition, all data will contain only a numeric code, all assessment procedures will be closely supervised by the faculty sponsor, and staff will be trained and reminded of the need to keep all information confidential. No names will be used in presenting data in lectures, seminars, and papers. Individual study participants will not be identified in any way in any presentation or publication of the study results, and analysis of the results will be based on aggregate data only. Medical information will be released only with the expressed written consent of the subject. Lastly, the PI has obtained a Certificate of Confidentiality in order to further protect participant confidentiality during the study. As outlined in exploratory aim 2, serum samples will be collected from all participants at randomization and at 4-, 8-, and 12-week follow-ups. Two lavender EDTA tubes will be collected for plasma to identify markers including innate immune receptors, cytokines, chemokines, and other inflammatory signaling molecules. In addition, one RNA PAXgene tube will be collected at randomization and week 12 to identify transcription factors that code for cytokines.

As outlined in exploratory aim 4, salivary cortisol samples will be collected from participants who choose to participate in the neuroimaging session at 3 points during the week 4 neuroimaging visit: at the beginning of the visit and immediately before and after the fMRI session. Saliva will be collected via Salivette swab. Biological samples will be stored in freezers. Serum samples will be assayed by a laboratory technician with extensive expertise in processing biological samples for analyses of inflammatory markers. Salivary cortisol samples will be shipped to an outside lab for processing.

Data analysis will utilize an intention-to-treat population that includes all randomized patients who took at least one dose of medication and provided valid post-randomization outcome data. The primary tests of hypotheses will use percent heavy drinking days, measured by the TLFB at weeks 2, 4, 6, 8, 10, and 12, as the a priori primary efficacy endpoint. Other outcomes will also be analyzed as described in the secondary and exploratory aims. Prior to statistical analyses, the data will be inspected to determine the advisability of scale transformations and to identify missing data, outliers, or other unusual features that may be influential. Preliminary analyses will also be performed to compare treatment groups on descriptive and clinical characteristics at baseline to ensure that randomization has succeeded. If confounding variables are found, they will be included as covariates in follow-up analyses. The a priori primary efficacy endpoint will be percent heavy drinking days, defined as 4+ drinks per day for women and 5+ drinks per day for men, measured bi-weekly during the maintenance phase of the study. Patients who discontinue medication will be allowed to remain in the study and participate in study assessments.
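The endpoint definition above reduces to a simple computation. Below is a minimal sketch of that calculation, not the trial's actual code; the function and example data are invented for illustration:

```python
# Minimal sketch of the endpoint computation (not the trial's code; the
# function and example data are invented for illustration).

def percent_heavy_drinking_days(daily_drinks, sex):
    """Percent of TLFB days that are heavy: 4+ drinks (women) / 5+ (men)."""
    threshold = 4 if sex == "F" else 5
    heavy = sum(1 for d in daily_drinks if d >= threshold)
    return 100.0 * heavy / len(daily_drinks)

# Example: a hypothetical 14-day TLFB window for a male participant.
window = [0, 2, 6, 0, 0, 5, 3, 0, 7, 0, 1, 0, 5, 0]
print(percent_heavy_drinking_days(window, "M"))  # 4 heavy days / 14 -> ~28.6
```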

The primary efficacy analysis will be performed using a repeated-measures mixed effects model that includes treatment, time, the treatment × time interaction, a random intercept, and a random slope, and adjusts for other covariates such as demographic and baseline variables as appropriate. The mixed effects model approach permits testing of between-group differences, within-group changes, and performance trends over time. It also uses all observed repeated-measures data, treating the missing data mechanism as ignorable. In addition to testing the treatment effects, a summary of least-squares means, standard errors, and 95% confidence intervals will be presented for each treatment and will be derived from fully adjusted models on untransformed outcomes averaged across the maintenance period.

In this aim, we plan traditional analyses of the effects during the maintenance phase of the study on the following secondary alcohol consumption endpoints: drinks per day, drinks per drinking day, percent days abstinent, percent of subjects with no heavy drinking days, and percent of subjects abstinent. The analytical plan for the secondary outcomes with repeated measures is similar to that discussed above for the primary efficacy endpoint in aim 1. For the dichotomous outcomes, logistic regression models will be used. Further, in light of recent research on AUD endpoints, we will examine secondary outcomes when allowing a grace period of the first 4 weeks and will evaluate the efficacy of IBUD over the maintenance period.

Explanatory variables for the alcohol cues task will be created by convolving delta functions representing the onset of experimental events with a double-gamma hemodynamic response function in FEAT. Temporal derivatives will be included as covariates. Second-level group analyses will then be conducted. The main contrast of interest will be activation during alcohol vs. non-alcoholic beverage blocks. Explanatory variables for the MIST will be created by convolving delta functions representing the onset of experimental conditions with a double-gamma hemodynamic response function in FEAT. Temporal derivatives will be included as covariates.
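Stepping back to the primary efficacy analysis specified above: here is a hedged sketch of such a repeated-measures mixed model using Python's statsmodels. The protocol does not name its software, and all column and file names (phdd, treatment, week, subject_id, baseline_phdd, tlfb_long.csv) are hypothetical placeholders:

```python
# Hedged sketch of the repeated-measures mixed model for the primary endpoint,
# using statsmodels; all column and file names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tlfb_long.csv")  # long format: one row per subject per assessment week

# Fixed effects: treatment, time, treatment x time, plus a baseline covariate;
# re_formula="~week" adds a random slope for time to the random intercept.
model = smf.mixedlm("phdd ~ treatment * week + baseline_phdd",
                    data=df, groups=df["subject_id"], re_formula="~week")
result = model.fit(reml=True)
print(result.summary())  # adjusted means/CIs can then be derived from the fixed effects
```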

Second-level analyses averaging over the three task runs will be conducted on the contrast images transformed into standard space. Third-level group analyses will then be conducted on the second-level images. The main contrast of interest will be activation during the stress vs. control blocks. For both tasks, Z-statistic images will be thresholded with cluster-based corrections for multiple comparisons based on the theory of Gaussian Random Fields, with a cluster-forming threshold of Z > 2.3 and a cluster-probability threshold of p < 0.05.

We will examine whether the effects of IBUD on the efficacy outcomes are moderated by depressive symptomatology. A moderator identifies for whom or under what conditions a treatment works. It may suggest which participants will respond most to treatment or identify subgroups with possibly different causal pathways. We will study moderators based upon the criteria given in Kraemer et al. For the repeated-measures efficacy outcomes, we will include depressive symptomatology in the analyses, using the mixed effects analysis designs described above, and testing the interactions of depressive symptomatology × treatment as well as depressive symptomatology × treatment × time. For the dichotomous outcomes, we will include depressive symptomatology in the logistic regression analyses and test the interactions of depressive symptomatology × treatment. To reduce confounding of main effects with these interaction terms and increase the interpretability of the regression coefficients, the variables will be centered as recommended by Kraemer and Blasey. If interactions are significant, we will estimate treatment effects at low, middle, and high values of the moderator. Further, we will test whether physiological dependence serves as a moderator of medication effects in this trial.

An annual summary of adverse events will be submitted to the FDA, the IRB, the DSMB, and NIAAA. The analysis of all adverse events accumulated to date will include a listing of all adverse events. Participants’ descriptions of adverse events from AE forms will be grouped by type, counted, and compared by treatment group. A designation of “more common and drug-related” will be given to events occurring at an incidence of at least 5% in subjects assigned to active drug, and for which the active drug incidence is at least twice the placebo incidence. Other significant adverse events that will be reported include the following: marked abnormalities in laboratory values, vital signs, electrocardiograms, or other parameters; adverse dropouts; and adverse events that lead to dose adjustments or to the addition of concomitant therapy. The study physician will be available to participants for the entire duration of the study. Participants will have access to her 24-h pager and will report on adverse events at each monthly visit. The study physician will call every participant at the end of the first week on the study medication to discuss and manage any adverse events. Study staff will notify the study physician of any adverse events recorded during the follow-up visits. Side effects will be collected through an open-ended question asking participants to report any adverse events they may be experiencing and through a questionnaire-based assessment, the Systematic Assessment for Treatment Emergent Events, which will be administered at the 4-, 8-, and 12-week follow-up visits.
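Returning to the moderator analysis described above, a minimal sketch of the centering-plus-interaction approach, again in statsmodels with hypothetical column names (bdi_total stands in for the depressive symptomatology measure):

```python
# Hedged sketch of the moderator analysis: center the moderator (per Kraemer
# & Blasey) and test its two- and three-way interactions with treatment and
# time. "bdi_total" is a hypothetical stand-in for the depression measure.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tlfb_long.csv")
df["dep_c"] = df["bdi_total"] - df["bdi_total"].mean()  # mean-centered moderator

# treatment * week * dep_c expands to all main effects plus the
# dep_c x treatment and dep_c x treatment x week interactions of interest.
mod = smf.mixedlm("phdd ~ treatment * week * dep_c", data=df,
                  groups=df["subject_id"], re_formula="~week").fit()
print(mod.summary())
# Significant interactions would then be probed at low/middle/high
# values of the centered moderator.
```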

To continuously monitor safety, clinical labs will be repeated at each in-person follow-up visit, and abnormal results will be discussed with the study physician. The PI will designate appropriately qualified personnel to periodically perform quality assurance checks at mutually convenient times during and after the study. These monitoring visits provide the opportunity to evaluate the progress of the study and obtain information about potential problems. The monitor will assure that data are accurate and in agreement with any paper source documentation used, verify that subjects’ consent for study participation has been properly obtained and documented, confirm that research subjects entered into the study meet inclusion and exclusion criteria, verify that study procedures are being conducted according to the protocol guidelines, review AEs and SAEs, perform drug accountability, and assure that all essential documentation required by Good Clinical Practice guidelines is appropriately filed. At the end of the study, the monitor will confirm that the site has the appropriate essential documents on file, advise on storage of study records, and inspect the return and destruction records for unused study medication.

An independent Data and Safety Monitoring Board of external advisors will meet prior to the start of the study, bi-annually during enrollment and follow-up, and at trial end to review safety data. In addition to bi-annual meetings, the DSMB will meet after half the subjects have been randomized to review safety data and the integrity of the study and to make a formal recommendation to the PI on the continuation or early stopping of the study due to safety concerns. The DSMB will provide periodic review of the protocol, consistent with current practices at the UCLA CTRC, where this study will take place. The DSMB will provide comprehensive and regular input into whether there are appreciable changes to subjects’ risks of participation while the study is ongoing. The DSMB will monitor the following six aspects of study execution: Administrative/Regulatory Updates, Study Updates, Quality Assurance and Safety Monitoring Procedures, Study Accrual, Protocol Violations/Deviations, and Safety and Outcomes. After reviewing all of these elements of the study, the DSMB will provide recommendations regarding safety/ethical concerns, study continuation, and protocol modifications. In addition, the DSMB will assist the PI and study physician in evaluating whether an active subject should be discontinued from further participation in the study for safety reasons.

The PI will promptly inform the NIAAA Program Officer of any changes in recruitment or in the protocol that are relevant to safety, as well as any actions taken by the IRB as a result of its continuing review of the study. All necessary protocol changes will be submitted in writing as protocol amendments to the IRB by the PI for approval prior to implementation. In the event of any major changes in the status of any ongoing protocol, the PI will inform the NIAAA Program Officer, DSMB, IRB, CTRC, etc., immediately. Such changes would include amendments to the protocol, temporary suspension of patient accrual or of the protocol, any changes in informed consent or IRB approval status, and termination of patient accrual or of the protocol.

IBUD is a promising treatment for AUD as a neuroimmune modulator that has shown robust safety and early efficacy.
In a preliminary study conducted by our lab, IBUD was generally safe and well-tolerated, with no study dropouts or dose reductions over the course of the protocol, in a population with mild-to-severe AUD. The current study is supported by these early clinical results, as well as by compelling preclinical data validating its molecular targets and effects on alcohol phenotypes in animal models. To the best of our knowledge, based on a search of ClinicalTrials.gov as of May 16, 2020, the only registered trials of ibudilast for the treatment of alcohol use disorder have come from our lab, and the current study is the only large-scale interventional randomized clinical trial, testing IBUD in 132 treatment-seeking participants with AUD. This single-center trial is double-blinded vs. placebo, with 1:1 randomization.

MGL is the major degradative enzyme of 2-AG in the mouse brain

What synaptic function might require principal neurons to target DGL-α so precisely to dendritic spine heads? DGL-α synthesizes 2-AG from diacylglycerol, the common second messenger produced upon Gq/11-coupled receptor activation and phospholipase C-β activity. The most abundant Gq/11-coupled receptor and PLC-β-type enzyme in hippocampal dendritic spine heads are the metabotropic glutamate receptor type 5 and PLC-β1, respectively, and indeed, activation of mGluR5 leads to endocannabinoid-mediated retrograde synaptic suppression and to elevation of 2-AG levels through PLC-β1 and DGL-α activity. Because 2-AG inhibits glutamate release from excitatory nerve terminals in hippocampal neurons, this negative feedback pathway, operating as a “synaptic circuit-breaker,” may have pivotal functional significance in controlling network excitability during neuronal insults leading to excitotoxicity. Although the medical importance of such a protective messenger system is obvious, it is not yet clear whether the same molecular machinery is functional at excitatory synapses of human neurons as well. The postsynaptic accumulation of DGL-α at excitatory synapses in human hippocampal samples described in the present study, along with evidence that mGluR5 is also present postsynaptically at excitatory synapses both in the human hippocampus and in primate cortical areas, supports this notion, though similar data are not yet available for PLC-βs in humans. Interestingly, independent structural support derives from the striking similarity in the density of DGL-α- and mGluR5-immunostaining in relation to given hippocampal layers. For example, in the dentate gyrus, a higher concentration of DGL-α was found in the inner third of the molecular layer than in the outer two-thirds of stratum moleculare both in rodents and in humans, in line with the observation that the excitatory inputs granule cells receive from mossy cells may be more tightly controlled by endocannabinoids than afferents from the entorhinal cortex.

Similar to the DGL-α distribution, the density of mGluR5-immunostaining is more pronounced in the inner molecular layer both in rodents and in humans. Finally, among glutamatergic terminal types, the concentration of CB1 receptors is also highest in those arborizing in the inner molecular layer. Whether this intensity difference reflects the higher density of excitatory synapses in the inner molecular layer, both in rodents and in humans, or is due to synapse-specific variations in the regulation of synaptic 2-AG signaling needs to be established in further experiments. Nevertheless, the weaker density of MGL-immunostaining in the inner molecular layer, indicating a reduced capacity for 2-AG inactivation by MGL at mossy cell synapses, gives some indirect support for the latter possibility and emphasizes that the termination of 2-AG signaling may be specifically regulated in the human hippocampus as well.

We provide anatomical evidence that MGL, the main degrading enzyme of 2-AG, has a widespread distribution in the human hippocampal formation. The nature of MGL-immunoreactivity was comparable to DGL-α-immunostaining at the light microscopic level, with profuse punctate labeling covering the neuropil and outlining the laminar structure of the hippocampus. Electron microscopic analysis uncovered that this compartmentalized staining pattern is due to the accumulation of immunolabeling at excitatory synapses. However, in striking contrast to the postsynaptically localized DGL-α enzyme, MGL was present presynaptically in glutamatergic axon terminals. This anatomical observation is in agreement with previous findings obtained in the rodent hippocampus, and it is also supported by recent physiological experiments demonstrating that MGL limits the duration of synaptic depression at hippocampal excitatory synapses. In addition, although further immunohistochemical studies using antibodies with higher sensitivity may reveal that MGL is not fully restricted to glutamatergic synapses, the present findings indicate that the highest concentration of MGL protein is likely located in excitatory boutons in the human hippocampus. It is estimated that approximately 85% of the brain’s 2-AG hydrolysis activity is accounted for by this serine hydrolase. Its widespread distribution in excitatory axon terminals in the human hippocampus suggests that MGL may play a similarly important role in 2-AG hydrolysis in the human brain.

Together with our previous findings demonstrating the ubiquitous presence of CB1 cannabinoid receptors on the same excitatory terminals, these data collectively corroborate that MGL is the key enzyme terminating synaptic 2-AG signaling after activation of presynaptic CB1 receptors in the human hippocampal formation. Given that acute in vivo administration of JZL184, the most potent selective inhibitor of MGL currently available, replicates nearly all of the characteristic behavioral effects of Δ9-tetrahydrocannabinol by protecting endogenously released 2-AG from degradation, it is conceivable that JZL184 may have a similar effect on the human brain, given the similar neuroanatomical localization of MGL in rodents and humans. Therefore, although MGL inhibitors hold great therapeutic potential for several medical applications, their predicted psychoactive side effects, based on their cannabimimetic properties in animals, should be taken into consideration when pondering the use of these compounds in humans.

Cannabis and alcohol are two of the oldest drugs used by humans. Together with nicotine, they represent a relevant health problem because of the clinical consequences of their abuse. Their psychotropic effects are well known, and recent research has shown that there is a close link between cannabis and alcohol. The endogenous cannabinoid system [a functional set of lipid transmitters and receptors that is the target of both natural and synthetic cannabinoids] has been shown to mediate some of the pharmacological and behavioral aspects of alcohol. Both cannabinoids and alcohol activate the same reward pathways, and the cannabinoid CB1 receptor plays an important role in regulating the positive reinforcing effects of alcohol as well as alcohol relapse. Several studies have documented that endocannabinoid transmission becomes hyperactive in reward-related areas during chronic ethanol administration. This hypothesis is based on two findings: first, the increase in the levels of both anandamide and 2-arachidonoylglycerol, the two main endocannabinoids, observed in animals chronically exposed to ethanol; second, the down-regulation of CB1 receptors induced by endocannabinoid-mediated over-stimulation. Consistent with this rationale, cannabinoid CB1 receptor knockout mice show reduced alcohol preference and self-administration. In this experiment, rats were tested under a progressive ratio schedule of reinforcement to measure the break point for ethanol.

For this purpose, animals were first trained to self-administer 10% alcohol under a fixed ratio 1 schedule of reinforcement. Following acquisition of a stable baseline of responding for 10% ethanol, rats were tested under the progressive ratio condition, in which the response requirement was increased as follows: for each of the first four ethanol deliveries the ratio was increased by 1; for the next four deliveries the ratio was increased by 2; and for all of the following deliveries the ratio was increased by 4. Each ethanol-reinforced response resulted in a 1.0 s illumination of the house light, and sessions were terminated when more than 30 min had elapsed since the last reinforced response. Drug testing was carried out once a week as follows: the progressive ratio baseline was established on days 1 and 2, and progressive ratio drug testing took place on day 3. For the next 2 days, animals were placed in the chambers under the fixed ratio 1 condition to re-establish the ethanol self-administration baseline, whereas on days 6 and 7 they remained confined to their home cages. AM404 or its vehicle was given 30 min before the progressive ratio session. The experiment was repeated for the following 2 weeks, counterbalancing the treatment.

At completion of the fading procedure, animals were trained to discriminate between 10% ethanol and water in 30 min daily sessions. Beginning with self-administration training at the 10% ethanol concentration, discriminative stimuli predictive of ethanol vs. water availability were presented during the ethanol and water self-administration sessions, respectively. The discriminative stimulus for ethanol consisted of the odour of an orange extract, whereas water availability was signaled by an anise extract. The olfactory stimuli were generated by depositing six to eight drops of the respective extract into the bedding of the operant chamber. In addition, each lever-press resulting in delivery of ethanol was paired with illumination of the chamber’s house light for 5 s. The corresponding cue during water sessions was a 5 s tone. Concurrently with the presentation of these stimuli, a 5 s time-out period was in effect, during which responses were recorded but not reinforced. The olfactory stimuli serving as S+ or S– for ethanol availability were introduced 1 min before extension of the levers and remained present throughout the 30 min sessions. The bedding of the chamber was changed and bedding trays were cleaned between sessions. The rats were given only ethanol sessions during the first 3 days of the conditioning phase. Subsequently, ethanol and water sessions were conducted in random order across training days, with the constraint that all rats received a total of 10 ethanol and 10 water sessions.

Reinstatement tests began the day after the last extinction session. These tests lasted 30 min under conditions identical to those during the conditioning phase, except that alcohol and water were not made available.
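The progressive ratio schedule described above implies an explicit response-requirement sequence. The sketch below encodes one plausible reading of it (steps of 1 from the first delivery; the exact starting requirement is not stated in the text):

```python
# One plausible reading of the schedule above: the requirement grows by 1 for
# deliveries 1-4, by 2 for deliveries 5-8, and by 4 for every delivery after.
def pr_requirement(n):
    """Lever presses required for the nth ethanol delivery (n >= 1)."""
    ratio = 0
    for i in range(1, n + 1):
        if i <= 4:
            ratio += 1   # deliveries 1-4
        elif i <= 8:
            ratio += 2   # deliveries 5-8
        else:
            ratio += 4   # delivery 9 onward
    return ratio

print([pr_requirement(n) for n in range(1, 11)])
# -> [1, 2, 3, 4, 6, 8, 10, 12, 16, 20]
```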

Sessions were initiated by the extension of both levers and presentation of either the ethanol S+ or the water S– paired stimuli. The respective discriminative stimulus remained present during the entire session, and responses at the previously active lever were followed by activation of the delivery mechanism and a 5 s presentation of the CS+ in the S+ condition or the CS– in the S– condition. Animals were tested under the S+/CS+ condition on day 1 and under the S–/CS– condition on day 2. Subsequently, reinstatement experiments were conducted every fourth day, in which AM404 was administered 30 min prior to the sessions. Responding at the inactive lever was recorded throughout to monitor possible non-specific behavioral effects.

In a subsequent experiment, we tested the efficacy of AM404 as a modulator not only of the operant responses for ethanol but also of the operant responses elicited by the contextual stimuli associated with alcohol. As the highest dose tested resulted in significant inhibition of locomotion, we did not administer it in this context. Once a stable extinction baseline was observed, we induced relapse by presenting cues associated with ethanol delivery during training. Ethanol-related contextual stimuli elicited ethanol-seeking behavior, as operant responses induced by ethanol-associated stimuli were more intense and significantly higher than those observed on the last day of extinction. When AM404 was injected 30 min prior to cue presentation, it failed to alter the responses for ethanol seeking, indicating that anandamide uptake inhibition was not effective in preventing cue-induced relapse.

The major finding of the present study is the demonstration that acute administration of the anandamide transport inhibitor AM404 reduces ethanol self-administration under an operant conditioning schedule. This compound does not affect the relapse induced by contextual cues associated with ethanol. The effects of AM404 seem to be selective for ethanol, as it was unable to suppress responding for other reinforcers, such as saccharin or food, suggesting that this effect is not related to a decrease in a general motivational state. This is confirmed by the lack of action of AM404 on the motivational properties of ethanol, as measured in the progressive ratio paradigm. This suppressive effect of AM404 on ethanol self-administration seems to be independent of the known anandamide-induced motor impairment, as the lowest effective dose tested did not alter motor behavior in the open field. Moreover, the actions of AM404 were found to be independent of a potentiation of the sedative effects of ethanol. Finally, neither experiments with cannabinoid CB1 receptor agonists nor experiments with cannabinoid CB1 and CB2 receptor antagonists allowed us to obtain direct pharmacological confirmation of the role of known cannabinoid receptors in the effects of AM404. The finding of a similar profile of effects using ACEA, a selective cannabinoid CB1 receptor ligand that shares the arachidonoyl moiety with both anandamide and AM404, suggests a common unknown target responsible for the effects of AM404 on ethanol self-administration. The lack of effects of WIN 55,212-2 and HU-210 at doses devoid of motor side-effects suggests that AM404 does not exert its actions through a CB1 receptor-mediated mechanism. AM404 was the first synthetic inhibitor of anandamide uptake, and it has been shown to potentiate many effects elicited by anandamide in vitro and in vivo.

The design of the Island simulation is that it should run in real time

Given that historical experiences of violence and trauma denote significant risk for suicide, there is an urgent need to provide integrated, trauma-informed intervention services for sex workers and other marginalised populations. Currently available interventions and pharmacological treatments for suicidality show limited efficacy, and concerted efforts should be made to increase access to evidence-based treatments and to explore alternative approaches to improving mental health and well-being. Emerging research shows positive outcomes with psychedelic-assisted treatments, which have demonstrated an excellent record of safety with few to no serious adverse effects reported. This study suggests psychedelic substances may hold promise as useful tools in addressing mental health issues and remediating risks for psychological distress and suicide.

Gut-brain signaling plays an integral role in food intake, energy homeostasis, and possibly reward. Our understanding of the biochemical and molecular pathways involved in these processes and their dysregulation in obesity, however, remains incomplete. Several signals, including gut-derived peptides, have been identified that control neurotransmission from peripheral organs to the brain. These include cholecystokinin (CCK), which is released from sub-populations of enteroendocrine cells in the upper small-intestinal epithelium in response to the presence of nutrients in the lumen and controls food intake and meal size by activating the afferent vagus nerve. Recent studies in mice suggest that specialized enteroendocrine cells in the intestinal epithelium, termed “neuropods”, form functional synapses with gastric afferent vagal fibers and participate in the transduction of signals from food into neural signals carried by vagal afferent neurons to the brain.

Neuropods sense nutrients on their luminal side and, in turn, release glutamate and CCK in a coordinated manner that induces rapid or prolonged firing of vagal afferent neurons, respectively. These results highlight neuropods as a key cellular mechanism in nutrient sensing and associated gut-brain signaling. Other studies suggest that vagal afferent neurotransmission recruits brain reward circuits and may participate in food reward. For example, optogenetic activation of right gastric vagal afferent neurons increased dopamine release in central reward pathways, operant responses associated with self-stimulation of brain reward neurons, and conditioned flavor and place preferences. The specific biochemical and molecular signaling pathways that control these functions, however, remain unclear.

The endocannabinoid (eCB) system is a lipid-derived signaling pathway that controls food intake, energy homeostasis, and reward, and is hijacked by chemicals in the cannabis plant. In general, activating the eCB system increases food intake and inhibiting its activity reduces food intake. The eCB system is located throughout the brain and plays an important role in these functions; however, mounting evidence also suggests that the eCB system in peripheral organs, including the small-intestinal epithelium, serves an integral role. Indeed, pharmacological blockade of peripheral cannabinoid subtype-1 receptors (CB1Rs) reduces food intake and improves metabolic dysfunction associated with obesity in rodents similarly to brain-penetrant CB1R antagonists. These studies highlight the peripheral eCB system as a possible target for safe anti-obesity agents that are devoid of the psychiatric side-effects associated with drugs that access CB1Rs in the brain. The eCB system in the rodent small-intestinal epithelium becomes activated during oral exposure to dietary fats, during a fast, and after chronic exposure to obesogenic diets. Moreover, pharmacological inhibition of peripheral CB1Rs blocked cephalic-phase consumption of dietary fats in rats, refeeding after a fast in rats, and hyperphagia associated with western diet-induced obesity in mice, and restored nutrient-induced secretion of satiation peptides in western diet-induced obese mice.

These studies suggest a critical role for eCB signaling in the gut in the intake of palatable foods. We will review recent experiments that expand our understanding of roles for the eCB system in the gut in gut-brain neurotransmission associated with food intake, energy homeostasis, and reward. An emphasis will be on studies that reveal both indirect and direct mechanisms of control for CB1Rs over gut-brain signaling and the dysregulation of these pathways in rodent models of diet-induced obesity. The eCB system is expressed in cells throughout all organs in the body and is comprised of lipid-derived signaling molecules including the primary eCBs, 2-arachidonoyl-sn-glycerol (2-AG) and arachidonoyl ethanolamide (anandamide), their metabolic enzymes, cannabinoid receptor sub-type 1, cannabinoid receptor sub-type 2, and possibly others. The eCB system in the brain is extensively studied for its roles in controlling the intake and reward value of palatable food. In addition to central sites, recent evidence suggests that the eCB system located in cells lining the intestinal epithelium is an integral component of a gut-brain axis that controls the intake of palatable foods.

For example, a sham-feeding protocol in rats was utilized to test whether eCB signaling in the gut is associated with the positive reinforcement that drives intake of food based on its orosensory properties. During sham feeding, rats are allowed to freely consume a liquid diet that drains from a surgically implanted, reversible cannula in the stomach before it reaches the small intestine. Therefore, sham feeding isolates the cephalic phase of food intake and effectively eliminates the post-ingestive consequences of food intake. Separate groups of rats were given access for 30 min to a fixed amount of dietary fats, sucrose, or protein, and levels of 2-AG and anandamide were measured in the upper small-intestinal epithelium by liquid chromatography/mass spectrometry. Tasting dietary fats—but not sucrose or protein—triggered production of eCBs in the upper small-intestinal epithelium, but not in other peripheral organs tested or in micro-punches obtained from brain regions associated with food intake and reward. This effect was also specific for mono- and di-unsaturated fats, but not saturated or polyunsaturated fats. Moreover, production of eCBs in the small-intestinal epithelium was absent in sham-feeding rats that received full sub-diaphragmatic vagotomy, which suggests that efferent vagal signaling participates in the biosynthesis of eCBs.

Furthermore, intra-duodenal administration of a low-dose cannabinoid receptor subtype-1 inverse agonist or a peripherally-restricted CB1R antagonist blocked sham feeding of fats. Collectively, these studies suggest that tasting dietary fats recruits an eCB mechanism in the gut that provides positive feedback to the brain and promotes intake of fatty foods. The aforementioned studies utilized pharmacological, biochemical, and behavioral approaches to identify roles for peripheral CB1Rs in the intake of palatable food. At the time of these studies, however, appropriate tools were not available to directly ask whether CB1Rs in the intestinal epithelium are required in these processes. To test the necessity of CB1Rs in the intestinal epithelium for the intake of palatable foods, we developed transgenic mice that are conditionally deficient in CB1Rs in the intestinal epithelium. Mice were maintained on standard rodent chow low in fats and sugars, then given access for the first time to a palatable western-style diet high in fats and sugars, and preferences for western diet were measured. This specific western diet was chosen because its macro-nutrient composition more closely matches the human diet than other obesogenic diets routinely used in rodent studies. Control mice with functional CB1Rs in the intestinal epithelium displayed large preferences for western diet when compared to chow, with over 90% of total kilocalories consumed from western diet over the testing period. In contrast to controls, preferences for western diet were reduced for up to 12 h in IntCB1-/- mice. These results provide direct evidence that CB1Rs in the murine intestinal epithelium are required for acute preferences for palatable foods.

Similar to rodents, humans prefer fatty and sweet foods when given a choice, and their consumption is associated with elevated levels of eCBs in blood. Moreover, levels of eCBs are increased in blood in both human and rodent obesity; however, the impact that circulating eCBs may have on gut-brain function associated with food intake, dietary preferences, and obesity is unknown. Nonetheless, it is plausible that circulating eCBs act as a humoral signal that interacts with cannabinoid receptors along the gut-brain axis to facilitate these processes. Mounting evidence suggests that eCB signaling in the periphery controls food intake by mechanisms that include both indirect and direct interactions with the afferent vagus nerve. We will first review evidence of an indirect mechanism for CB1Rs in the control of gut-brain signaling and its possible dysregulation in diet-induced obesity. Recent studies in mice suggest that CB1Rs in cells lining the small-intestinal epithelium control food intake by blocking nutrient-induced secretion of the satiation peptide cholecystokinin, which leads to increased caloric intake and meal size under conditions of heightened local eCB tone. Upon arrival of nutrients in the small-intestinal lumen, CCK is released from sub-populations of enteroendocrine cells and controls meal size and satiation by directly activating CCKA receptors on vagal afferent neurons and possibly in the brain.

Immunoreactivity for CB1Rs was found on CCK-containing cells in the upper small-intestinal epithelium in a CCK-reporter mouse that expresses eGFP selectively in these cells [C57BL/6-Tg2Mirn/J]. CCK-eGFP cells were then isolated by fluorescence-activated cell sorting, and expression of messenger RNA for components of the eCB system, including CB1Rs, was analyzed. CCK-eGFP-positive cells were enriched with mRNA for CB1Rs when compared to CCK-eGFP-negative cells, which confirms earlier reports of expression of mRNA for CB1Rs in I cells in another CCK-reporter mouse line. We next asked whether pharmacological activation of CB1Rs with the general cannabinoid receptor agonist, WIN 55,212-2, impacts nutrient-induced release of the bioactive form of CCK, CCK-8. Circulating levels of CCK-8 were increased within 30 min following oral gavage of corn oil, an effect that was completely reversed by pretreatment with WIN 55,212-2. The inhibitory effects of WIN 55,212-2 on corn oil-induced elevations of CCK-8 in blood were blocked by the peripherally-restricted neutral CB1R antagonist, AM6545, which highlights a role for peripheral CB1Rs in this response.

The study described above was performed in lean mice fed a low-fat and low-sugar diet, which express low levels of eCBs in the small-intestinal epithelium. Diet-induced obesity is associated with high levels of eCBs in the small-intestinal epithelium, and pharmacological inhibition of this heightened eCB activity at peripheral CB1Rs blocked overeating resulting from increased meal size and daily caloric intake. These experiments suggest that elevated eCB tone in the small-intestinal epithelium drives the overconsumption of high-energy foods and promotes obesity; however, the mechanisms underlying this response were unclear. Therefore, we tested the hypothesis that heightened eCB signaling at CB1Rs in the small-intestinal epithelium in our mouse model of western diet-induced obesity drives overeating by blocking nutrient-induced release of CCK-8. Mice were maintained for 60 days on western diet, a time when levels of eCBs are elevated in the intestinal epithelium. Oral gavage of corn oil increased levels of CCK-8 in blood in lean mice with low levels of eCBs in the intestinal epithelium. In contrast to lean mice, corn oil failed to increase levels of CCK-8 in blood in mice fed a western diet for 60 days; however, pretreatment with the peripherally-restricted CB1R antagonist, AM6545, restored the ability of nutrients to increase levels of CCK-8 in blood. These results suggest that under conditions of heightened eCB activity at CB1Rs in the small-intestinal epithelium, CCK-8 release is inhibited, which leads to delayed satiation and overeating. Indeed, inhibition of peripheral CB1Rs with AM6545 in obese mice attenuated overeating associated with increased meal size and total caloric intake. Moreover, the hypophagic effects of AM6545 were reversed by pretreatment with a low dose of the CCKA receptor antagonist, devazepide, which suggests that the acute hypophagic effect of AM6545 occurs by a mechanism that includes restoring nutrient-induced satiation signaling. Collectively, these studies indicate a key inhibitory role for CB1Rs in the small-intestinal epithelium in nutrient-induced secretion of satiation peptides. Thus, CB1Rs in the intestinal epithelium are thought to indirectly control gut-brain neurotransmission by regulating the release of gut-derived peptides that directly interact with vagal afferent neurons.
Furthermore, these processes become dysregulated in diet-induced obesity, which leads to overeating and possibly obesity. Future studies will be important to elucidate the specific intracellular signaling pathways in enteroendocrine cells that link eCB signaling at local CB1Rs with blockade of secretion of satiation peptides, the impact of eCB activity at CB1Rs in the intestinal epithelium on the activity of gastric vagal afferent neurons, and the impact that this signaling has on recruitment of brain circuits associated with food reward. Recent studies suggest that CB1Rs in stomach cells influence alcohol intake and preference in mice by controlling local formation of the bioactive appetite-stimulating hormone, ghrelin, which directly interacts with the growth hormone secretagogue receptor on vagal afferent neurons and in the brain.

MWH and these differences persisted after adjusting for HIV RNA and CD4 counts

Clinical factors included functional status, as indicated by the number of daily activities with decreased independence on the Instrumental Activities of Daily Living questionnaire from the modified version of the Lawton and Brody Activities of Daily Living Questionnaire; reading level, based on the Wide Range Achievement Test-4 (WRAT-4) Reading subtest; self-reported depressive symptoms on the Beck Depression Inventory versions I or II; and diagnosis of lifetime and current major depressive disorder as well as lifetime alcohol, cannabis, or other substance use disorder based on the Composite International Diagnostic Interview using DSM–IV criteria. Biological factors included HIV disease variables such as current CD4+ T-cell count, lowest CD4+ T-cell count ever recorded (nadir), plasma HIV viral load, estimated duration of HIV disease, current use of ART, current use of anticholinergic-based medications, Hepatitis C co-infection, and the cardiovascular comorbid conditions of hypertension, hyperlipidemia, and diabetes.

All 13 NP tests were used to find groups of similar cognitive profiles within each participant subset and in the total sample, using a pipeline that consisted of dimension reduction with Kohonen self-organizing maps (SOM) followed by clustering to identify profiles based on those reduced dimensions. SOM was implemented using the Kohonen package in R. SOM is an unsupervised machine learning technique used to identify patterns in high-dimensional data by producing a two-dimensional representation consisting of multiple nodes, where each node is a group of one or more individuals with similar cognitive profiles and the location of the nodes within the 2-D representation is itself a metric of similarity. Unlike probabilistic models, each individual can only be assigned to one node.

The SOM grid consisted of a 10 × 10 hexagonal grid of nodes, and the number of clusters for the final profiling was selected by looping over models created from 3 to 20 clusters and selecting the number that had the best fit based on entropy. Similar nodes were then clustered using the MClust package. MClust is an R software package for model-based clustering using finite normal mixture modeling that provides functions for parameter estimation via the Expectation-Maximization algorithm with an assortment of covariance structures which vary in distribution, volume, shape, and orientation. This program identifies the best model based on entropy. Once the clustering of the nodes was completed, cluster profiles were assigned to the individuals associated with each node. By using SOM and MClust in sequence, we were able to achieve fine-tuned clustering based on patterns of performance in cognitive testing (a schematic sketch of this two-stage pipeline appears below).

Factors predicting profile membership between each impaired and unimpaired profile in the overall sample and within each group were explored by creating a predictive Random Forest (RF) model using the Caret package in R and then extracting variable importance. RF is an ensemble machine learning model based on classification trees that yields powerful prediction models based on non-linear combinations of subsets of input variables. Prior to model creation, the Synthetic Minority Over-sampling Technique (SMOTE), via the DMwR package, was used to control for bias due to any imbalance in the number of cases. RF models were created with internal validation using a 10-fold resampling method repeated 5 times. Pre-processing before RF creation involved removing variables as predictors if they had low variance or if they had >50% missing data. Any missing data in the remaining variables were imputed using the Multivariate Imputation by Chained Equations package in R using random forest imputations.
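The promised schematic sketch of the two-stage profiling pipeline follows. The study itself used the R packages kohonen and mclust; minisom and scikit-learn serve here as rough Python analogues, BIC stands in for the entropy-based criterion, and the input file is hypothetical:

```python
# Rough Python analogue of the SOM -> model-based clustering pipeline (the
# study used kohonen and mclust in R; this is a sketch, not the authors' code).
import numpy as np
from minisom import MiniSom
from sklearn.mixture import GaussianMixture

X = np.loadtxt("np_tscores.csv", delimiter=",")  # rows: participants; cols: 13 NP test scores

som = MiniSom(10, 10, X.shape[1], topology="hexagonal", random_seed=1)
som.train_random(X, 5000)

# Cluster the 100 node codebook vectors, scanning 3-20 mixture components.
codes = som.get_weights().reshape(100, X.shape[1])
fits = [GaussianMixture(n_components=k, random_state=1).fit(codes)
        for k in range(3, 21)]
best = min(fits, key=lambda m: m.bic(codes))  # BIC in lieu of the entropy criterion

# Map each participant to a winning node, then to that node's cluster profile.
node_cluster = best.predict(codes).reshape(10, 10)
profiles = [node_cluster[som.winner(x)] for x in X]
```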

ROC confidence intervals were calculated using the pROC package in R with 2,000 stratified bootstrap replicates. Variable importance of all variables included in the RF models was used as the outcome metric of the predictive power of each variable. Variable importance is a scaled number [0–100] that indicates how important that variable is to the final predicted outcome in that model. For each tree in the RF model, prediction accuracy on the out-of-bag portion of the data is recorded and then recomputed after permuting each predictor variable. The difference between the accuracy with and without permutation of each variable is averaged over all trees and then normalized by the standard error. For visualization, all variables were plotted by relative variable importance, and attention was given to the top 10 variables in each profile. Variable importance indicates how much a variable contributes to overall prediction accuracy, but because RF is a non-linear model it does not indicate directionality. While the analysis pipeline and packages used, along with the parameter inputs, are stated above, we have added our code to the Supplementary Material to facilitate rigor and reproducibility.

In this large-scale study using a novel pipeline combination of machine learning methods, we provide further evidence in support of heterogeneity in cognitive function among PWH. Our results do not negate the heterogeneity in cognitive function in HIV-uninfected individuals but rather highlight the heterogeneity among PWH that can often be masked by a dichotomous HAND categorization. In the total sample, we identified an unimpaired profile, a profile of relatively weak auditory attention and episodic memory, and a global weakness profile. As expected, given the relative sample sizes, the cognitive patterns in the total sample were in greater alignment with those found among MWH than among WWH. Similar to results in the overall sample, we identified an unimpaired profile and a global weakness profile in MWH; however, unlike the overall sample, and inconsistent with hypotheses of domain-specific cognitive impairment profiles in both MWH and WWH, MWH demonstrated a profile with relative strengths in attention and processing speed.
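To complement the description of the prediction step above, here is a hedged Python analogue of the RF workflow (SMOTE balancing, 10-fold resampling repeated 5 times, and permutation-based variable importance). The study used caret, DMwR, and pROC in R, so this is a sketch under stated assumptions, not the authors' code; variable names and the data file are hypothetical:

```python
# Hedged Python analogue of the RF prediction workflow described above.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

df = pd.read_csv("predictors.csv")
X, y = df.drop(columns="profile"), df["profile"]  # binary: impaired vs unimpaired

X_res, y_res = SMOTE(random_state=1).fit_resample(X, y)  # balance the classes

rf = RandomForestClassifier(n_estimators=500, random_state=1)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=1)
print(cross_val_score(rf, X_res, y_res, cv=cv, scoring="roc_auc").mean())

# Permutation importance as an analogue of caret's scaled variable importance.
rf.fit(X_res, y_res)
imp = permutation_importance(rf, X_res, y_res, n_repeats=20, random_state=1)
print(pd.Series(imp.importances_mean, index=X.columns).nlargest(10))
```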

Conversely, there were no unimpaired or cognitive strength profiles among WWH. Rather, as hypothesized, WWH demonstrated cognitive profiles reflecting a global weakness and domain-specific impairment, including a weakness in learning and memory and in motor skills. These findings suggest that sex, and the sociodemographic factors associated with female sex within the HIV-infected population, contribute to the heterogeneity in cognitive function among PWH. Studies examining cognitive function in combined samples of men and women may mask important sex differences in cognitive functioning among PWH, particularly in male-dominant samples such as the current sample. These sex differences in cognitive profiles among PWH may result from biological sex differences and/or the psychosocial factors that tend to characterize WWH more than MWH. Biological sex differences include those seen in the general population, such as sex steroid hormones, female-specific reproductive events, and genetic factors, or previously reported sex differences specifically in HIV disease characteristics unmeasured herein. Regardless of the underlying mechanism, characterizing these sex differences in cognitive functioning among PWH can provide inroads to identifying mechanisms of cognitive dysfunction and optimizing risk assessments and diagnostic and therapeutic strategies for each sex.

A notable sex difference in profiles was the lack of the unimpaired or cognitive strength profile among WWH that was observed among MWH. Our cognitive profile analyses are in line with prior studies suggesting that WWH are often, but not always, more likely to demonstrate cognitive deficits than MWH. Our analysis suggests that the impairment manifests more often as domain-specific impairment in women than in men, which may not be revealed in a more cross-domain summary measure like the GDS or global T scores. This female vulnerability to cognitive deficits is thought to reflect sociodemographic differences whereby low education and socioeconomic status, and their associated psychosocial risk factors, are more prevalent among WWH vs. MWH. These psychosocial risk factors can have adverse effects on the brain that lower cognitive reserve, suggesting that interventions geared toward addressing these psychosocial factors should be a priority for WWH and/or for women who are at increased risk of HIV. In support of these studies, Sundermann et al. found that the higher rates of cognitive impairment in WWH vs. MWH were eliminated after adjusting for the lower reading level that characterized WWH compared to MWH. Biological differences may also contribute to sex differences in the pattern and magnitude of cognitive impairment in PWH, including disease characteristics, brain structure/function, sex steroid hormones, and female-specific hormonal milieus. There is also evidence to suggest that WWH may be more cognitively susceptible than MWH to the effects of mental health factors. As mentioned, only women demonstrated more domain-specific cognitive profiles, including weakness in motor functioning and relative weakness in learning and memory. Similarly, previous studies report that learning, memory, and motor functioning are among the domains in which cognitive impairment is more common among WWH vs. MWH.

These sex differences in domain-specific impairment may reflect psychosocial factors, biological factors, or interactions among them. Although women in general demonstrate relative advantages in verbal memory and fine motor function compared with men, likely due at least in part to the effects of estrogen on the developing brain and the neuroprotective effects of circulating estradiol, the menopause transition has been associated with declines in verbal memory and motor function. The mean age of women in our study was 41, suggesting that a portion of women may be experiencing cognitive deficits associated with reproductive aging. Germane to the learning/memory impairment in WWH, women are more vulnerable than men to the negative effects of stress hormones on hippocampal-dependent tests. This finding may be particularly relevant to the current sample considering the high prevalence of psychosocial stressors among WWH, including childhood trauma and domestic violence. Unlike MWH, WWH demonstrated a global impairment profile with spared verbal recognition. Consistent with this, previous findings regarding memory impairment among PWH found this impairment to be more dependent on frontal and subcortical structures, with relatively normal memory retention but impaired memory retrieval. Even in the female-specific profile of relative weakness in learning and memory, recognition was less impaired compared to learning and recall. We can only speculate as to why the sparing of recognition in the global impairment profile was specific to WWH and to verbal vs. visual memory. It is possible that, in the context of cognitive impairment in HIV, the female advantage in verbal memory may be most salient for the least cognitively taxing memory component, recognition performance, and that this advantage is not fully adjusted for in our demographically corrected T-scores.

Despite the heterogeneity in cognitive profiles by sex, the sociodemographic, clinical, and biological factors associated with these cognitive profiles were similar for MWH and WWH, suggesting that, although the same factors confer increased vulnerability to cognitive dysfunction, the adverse effects of these factors impact brain function differently in men and women. In both MWH and WWH, the WRAT-4 had the greatest discriminative value for profile class, followed by HIV disease variables, depressive symptoms, age, race/ethnicity, and years of education. WRAT-4 scores have been consistently identified as an important determinant of cognitive function among PWH, with lower WRAT-4 scores conferring risk for cognitive impairment. WRAT-4 performance may be particularly salient in this population, given that reading level may reflect education quality, above and beyond years of education, especially in lower socioeconomic populations, because of the many factors impacting education quality. Additionally, reading level is associated with health outcomes, including hospitalizations and outpatient doctor visits, and thus may be a proxy for biopsychosocial factors underlying general health. HIV disease variables were also strong determinants of cognitive profiles in both men and women. Aside from some instances of a shorter duration of HIV disease relating to more cognitive impairment in WWH and in the total sample, the more biologically based HIV disease variables were associated with cognitive impairment in the expected direction: higher current and nadir CD4 counts and lower viral load were protective against cognitive impairment.
It is curious that the global weakness with spared verbal recognition profile in women was associated with more severe HIV-related variables yet with a shorter duration of HIV infection. We speculate that the shorter HIV infection in WWH may reflect CNS effects of untreated and/or early-course HIV infection. Alternatively, the self-reported shorter duration of infection may not have been accurate, to the extent that WWH lived longer with untested/undetected infections. Findings are consistent with a wealth of literature relating proxies of HIV disease burden and severity to cognitive function and suggest that, even in the era of effective ART when viral suppression is common, HIV disease burden can have adverse effects on the brain, possibly due to poor penetration of ARTs into the CNS, ART resistance, poor medication adherence, and/or the establishment of viral reservoirs in the CNS. In line with hypotheses of mental health factors relating to cognitive impairment profiles more strongly in women, a current diagnosis of MDD was a predictor of cognitive profiles only among WWH.

Proponents of use testing see both use reduction and harm reduction benefits of testing

In the courts, the harm reduction rationale has generally trumped the use reduction rationale. For example, in Vernonia, the Court held that the importance of deterring drug use among schoolchildren “can hardly be doubted.” But the Court also focused on the harm reduction benefits of use testing: “[I]t must not be lost sight of that this program is directed more narrowly to drug use by school athletes, where the risk of immediate physical harm to the drug user or those with whom he is playing his sport is particularly high.” The D.C. Circuit has ruled that random testing is an unreasonable invasion of employee privacy except for safety-sensitive positions. Based on its reading of three Supreme Court decisions, the Substance Abuse and Mental Health Services Administration has identified four classes of presumptive testing—employees who carry firearms, motor vehicle operators carrying passengers, aviation flight crew members and air traffic controllers, and railroad operating crews—“that are to be included in every plan if such positions exist in the agency.”

The National Research Council (NRC) took a comprehensive look at the evidence for a safety-promoting benefit of drug testing in the workplace. A potential objection to this causal chain model is that drug use might have an additional indirect association with accident risk through some common cause, such as poor self-control skills. The NRC committee noted that any observed link between drug use and accidents or work behavior could be spurious, due to common causation by a third variable. The committee offered this hypothesis: “[D]eviance may be a better explanation than impairment of the links between alcohol and other drug use and undesirable work behavior.

If so, confronting deviant behaviors and attitudes may be a more effective strategy than narrow antidrug programs for both preventing workplace decrements and treating poorly performing workers.” From a prediction standpoint, one might argue that drug tests can serve as a double proxy for both drug use and low self-control. But psychometrically, a better strategy would be to directly assess low self-control and psychomotor functioning, as illustrated in Figure 2. Psychologists and ergonomic specialists have developed a wide variety of valid psychomotor tests, and many are already in use in the military and other “mission-critical” organizations. The private sector has also begun to recognize the potential advantages of directly testing impaired psychomotor performance. There are a variety of psychometrically reliable and valid measures of impulsivity, sensation-seeking, and self-control. More controversially, there are paper-and-pencil “integrity tests” that allow corporations to assess drug and alcohol use, honesty, and other behavioral factors. Psychomotor testing and integrity testing need not replace drug testing; they can complement it. They may be less intrusive81 and, in the case of psychomotor testing, more diagnostic of accident risk. Of course, psychomotor testing will pick up impaired performance due to factors other than drug use—alcohol consumption, age, lack of sleep, depression, and so on. Some of these other factors are preventable. Hence, a program of random psychomotor testing may well be an effective deterrent against drug use, but also against alcohol use, sleep deprivation, and other factors that impair safety. At the same time, not everyone who is using a psychoactive drug will show impaired performance on such tests. The NRC notes that “many employees who do work under the influence may be able to compensate for their impairment, and there is a substantial amount of variation across individuals as to how a specific drug at a given dose affects performance.”

As discussed below, this illustrates the tension between the “criminal deviance” and “safety regulation” framings of the problem. It also calls into question the relative importance of the stated motives for use testing: deterring drug use and preventing accidents. A preference for drug testing over psychomotor testing suggests that use testing is really about drug control rather than safety. This is also shown by the fact that drug testing is more common than alcohol testing, even though the link between alcohol and accidents is better established. Granted, it may be easier to consume alcohol without becoming intoxicated than it is with cannabis or other drugs. And alcohol is far more prevalent, meaning far more positive test results—though from a safety perspective that is not much of an argument at all. Use testing may also have some unintended consequences. Theoretically, it could encourage users to substitute less detectable intoxicants. In 1995, 20% of worksites tested for illicit drugs but not for alcohol. The most commonly tested substances are the NIDA-5: marijuana, cocaine, PCP, opiates, and amphetamines. Thus, users might shift from the NIDA-5 to other illicit drugs like MDMA and barbiturates, and from illicit drugs to alcohol. I am unaware of studies examining such substitution effects, but similar effects have been linked to other policies. There is some evidence that users substitute marijuana for hard drugs when marijuana is decriminalized, and that users substitute marijuana for alcohol when the legal drinking age is raised or beer prices increase. Because marijuana has the longest window of detectability in urinalysis, one might see a shift toward less readily detectable substances like MDMA, amphetamines, and barbiturates.

A related concern is that use testing will drive users away from testing organizations—workplaces, schools, sports teams, and the military. This might make those organizations safer, but it displaces the harm to other settings where use might even escalate. A similar argument is suggested by the “labeling theory” tradition in criminology. Labeling theory predicts that legal controls can actually enhance the likelihood of future offenses if the stigma associated with criminal sanctioning alienates the individual from conventional society. Alienation encourages contact with criminally involved referent groups and weakens the reputational costs that may restrain deviance—thus creating a self-fulfilling prophecy. Several lines of evidence support this prediction, but the results are not conclusive. Neither Mehay and Pacula nor Bachman found any evidence linking past drug use to self-selection into the military. On the other hand, using the 1994 NHSDA survey, John Hoffmann and Cindy Larison of the National Opinion Research Center found that those using marijuana or cocaine at least weekly were more likely to work for companies that had no testing program. And the NHSDA 1994/1997 workplace analysis suggested that current users were more likely than nonusers to say they might avoid working for an employer who conducts pre-employment screening, random drug testing, or “for cause” testing. Even in its heyday, safety testing of illicit drugs was extremely rare. Between roughly 1972 and 1984, safety testing was done by numerous independent local laboratories run by universities, nonprofits, and health centers. These laboratories tested anonymous samples dropped off at street locations or submitted through the mail. The samples involved a wide variety of alleged substances, including cannabis products, amphetamines, barbiturates, opiates, and various psychedelic drugs. Although they were scattered around the country, relatively few communities had such a center, and the utilization of national services was sparse. For example, PharmChem’s national testing program—the largest such program in the 1970s—analyzed a total of 10,778 samples alleged to be cocaine between 1973 and 1983. In 1982, PharmChem’s busiest year of cocaine testing, they received 1,385 samples. But there were at least 3 million U.S. cocaine users in 1973, and at least 12 million in 1983. Under the most optimistic assumption that each sample came from a different user, only 0.012% of all users participated in their testing. Even if PharmChem accounted for only 1% of the national market for street testing—almost certainly far below their actual share—that would still imply that only about 1% of all users had samples tested that year.
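The coverage arithmetic is worth making explicit. The sketch below simply recomputes the back-of-envelope figures above; the counts come from the text, and the 1% market share is the hypothetical stated there, not an estimate of PharmChem's actual share.

```python
# Recomputing the PharmChem coverage estimates cited above.
samples_1982 = 1385              # cocaine samples in PharmChem's busiest year
users_1983 = 12_000_000          # lower-bound estimate of U.S. cocaine users

# Most optimistic case: every sample came from a distinct user.
print(f"{samples_1982 / users_1983:.4%}")   # ~0.0115%, the ~0.012% in the text

# Hypothetical 1% market share implies ~138,500 samples nationwide,
# still only about 1% of all users.
print(f"{(samples_1982 / 0.01) / users_1983:.2%}")  # ~1.15%
```

The same arithmetic applies to the DanceSafe figures discussed next.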

The picture is similar for the late 1990s and early 2000s. DanceSafe is the major source of samples for the EcstasyData.org testing operation. EcstasyData.org tested only 1,521 samples alleged to be MDMA between 1996 and 2006. To put this in perspective, in 2001 an estimated 3.2 million Americans used MDMA at least once—1.7 million of them for the first time. In that year, DanceSafe tested only 332 samples, which accounted for at most 0.01% of users, and this time the operation nearly cornered the market. At present, use testing is far more common than safety testing. While safety testing may have an important impact on the lives of those who submit samples, they account for only a negligible fraction of users. Thus, any aggregate impact of safety testing must be due to the diffusion of this information and its use by rave organizers and harm-reduction activists. The low prevalence of safety testing is not difficult to explain. The legal risk to participants is the most obvious factor, but there are others. Volunteering a useable sample means giving away a valuable commodity. And the test results, once publicized, are a public good, and hence subject to free riding by nonparticipants. Another consideration is the high cost. Most of the 1970s testing programs appear to have collapsed due to loss of funding rather than legal intervention, and few users can or will pay the high cost of screening. At an aggregate level, test results would seem to be less accurate for safety testing than for use testing, at least for random use testing. Because safety testing is voluntary, there is no coercion or threat to civil liberty, but the samples are also unlikely to be statistically representative. The direction of any selection bias is hard to identify. Safety testing volunteers may be more cautious, wealthier, or better educated, but their samples may disproportionately represent the results of suspicious transactions and dealers. Still, tests of drugs seized by law enforcement agents often show high levels of impurity, despite a very different set of sampling biases. The remarkably low and variable purity rates in the safety testing data have implications for the interpretation of use testing data, as well as other sources of drug indicator data. Typically, use testing targets a specific set of illicit drugs and does not attempt to detect or identify adulterants. Because the samples are not volunteered as drugs or labeled with street names, use testing samples may test negative even when the source was using street drugs. As noted above, such false negatives will occur due to the presence of nontested street drugs, or because the critical sampling periods of the target drugs have passed. But the safety testing data suggest that false negatives will also occur because tested individuals who were trying to use a NIDA-5 drug unwittingly used something else. On the other hand, the Drug Abuse Warning Network annual series, which records emergency room “drug mentions,” may overstate the link between the mentioned drugs and acute health crises, because someone who mentions a drug may have actually consumed something entirely different. To date, I have not been able to locate any empirical study of the effects of safety testing on levels of drug use.
This is hardly surprising; safety testing has always been rare, and research on safety testing is even rarer. Moreover, safety testing is not intended to influence the prevalence of drug use per se; it is intended to prevent harmful consequences and make users more cautious about their behavior. Still, there are good reasons to consider the question. From a hawkish perspective, one may reasonably ask whether safety testing encourages drug use, either wittingly or unwittingly. But it is possible that consumers infer tacit messages from DanceSafe and related organizations. Psycholinguistic theory and research suggest that people readily draw additional inferences that are pragmatically implied by an actor’s conduct, regardless of whether those inferences were endorsed, or even intended, by the actor. The very way that test results are framed implies that safety testing treats drug use in a less stigmatizing way than use testing does. In safety testing, a positive result denotes purity, while a negative result denotes failure and contamination. In use testing, it is the positive test that connotes failure; the user is the contaminant. Second, safety testing may encourage use by changing perceptions of risk. At the margin, a harm reduction mechanism might change a person’s assessment of the expected value of taking drugs. If an intervention reduces harm, then at the margin it should increase the attractiveness of the activity for most people. In my earlier treatment of this topic, I reviewed evidence of this mechanism, much of it appearing under the labels “compensatory behavior,” “offsetting behavior,” or “risk homeostasis.” For example, there is strong evidence that people drive faster when they have seat belts and airbags, both in econometric analyses and in controlled experiments on driving test tracks.
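The risk-compensation logic can be stated as a toy expected-value calculation. The sketch below is my own simplification for exposition, with invented numbers, not a model drawn from the literature cited above.

```python
# Toy illustration of risk compensation: lowering expected harm raises the
# net value of the activity, so marginal consumers become more willing.
def net_value(benefit, p_harm, harm):
    return benefit - p_harm * harm

print(net_value(10.0, 0.05, 100.0))  # 5.0 before the intervention
print(net_value(10.0, 0.05, 40.0))   # 8.0 after harm is reduced
```

The same arithmetic underlies the seat-belt findings: lowering the cost of a crash makes faster driving cheaper at the margin.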

Fewer than 13% of those with substance use need received substance use treatment

We defined having a need for mental health treatment by having a positive screen for depressive symptoms or post-traumatic stress disorder symptoms, or by reporting symptoms of other mental health problems, including anxiety, hallucinations, thoughts of suicide, or attempted suicide in the past 6 months. To assess current depressive symptoms, we used the Center for Epidemiologic Studies Depression Scale, considering a score of ≥22 to be evidence of depressive symptoms. We evaluated current PTSD symptoms using the Primary Care PTSD Screen, which asks participants to report whether they experienced any of four symptoms in the previous month due to a past experience: nightmares, avoidance of situations that reminded them of it, hypervigilance, or emotional numbing to their surroundings. We considered a score of four to be consistent with PTSD symptoms. To assess additional mental health problems, we used questions from the National Survey of Homeless Assistance Providers and Clients, as adapted from the Addiction Severity Index, and considered a report of any of those symptoms to be evidence of other mental health problems. We considered anyone who met criteria for depressive symptoms, PTSD symptoms, or other mental health problems to have a mental health need. Drawing on Gelberg and Anderson’s model, we examined factors associated with not having received mental health treatment among those with a mental health need. We included the factors listed above, which we identified a priori. In the model with unmet need for mental health services, we examined whether having an alcohol or drug use problem was associated with unmet need, considering them to be need factors.
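Read operationally, the case definition above is a simple disjunction over three screens. The sketch below encodes it; the thresholds come from the text, while the function and parameter names are my own illustration.

```python
# Mental health need per the study's definition: CES-D >= 22, OR all four
# PC-PTSD items endorsed, OR any other reported mental health symptom.
def has_mental_health_need(cesd_score, pc_ptsd_items, other_symptoms):
    depressive = cesd_score >= 22      # CES-D threshold used in the study
    ptsd = pc_ptsd_items == 4          # all four PC-PTSD symptoms endorsed
    return depressive or ptsd or other_symptoms

print(has_mental_health_need(25, 2, False))  # True: the depressive screen alone suffices
```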

We conducted a separate analysis to examine factors associated with not having received substance use treatment amongst those with an identified need; we again used the Gelberg and Anderson model and the factors listed above, which we identified a priori. In the substance use model, we tested whether having depressive symptoms, PTSD symptoms, or additional mental health problems, conceptualized as need factors, was associated with unmet need. We used logistic regression in these analyses. To construct our models, we included only hypothesized variables with a bivariate p value of <0.20 in the full multivariate model. To define our reduced model, we conducted backward elimination, retaining independent variables with p ≤ 0.05. Due to a skip pattern error, we incorrectly assessed 33 individuals using the AUDIT. To correct for this, we used multiple imputation to estimate the relationship between the treatment variables and the total AUDIT scores. We conducted the multiple imputation analysis in Stata 14.2. We used SAS 9.4 to conduct our descriptive and logistic regression analyses. In a population‐based sample of older adults experiencing homelessness, we found a high prevalence of unmet need for mental health and substance use treatment. While the majority of participants had mental health and substance use problems, few received treatment. One‐third of those with mental health need received mental health care. We identified predisposing and enabling factors associated with unmet treatment need. Adults aged 65 and over had higher odds of unmet need for mental health treatment. Older adults are more likely to have competing demands, including higher physical health needs, which can interfere with receiving behavioral healthcare. Due to a shortage of geriatric psychiatrists and geriatric mental health care services, older adults may not have access to treatment when they seek care.
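For readers who want the model-building procedure above in concrete form, here is a hedged sketch of the bivariate screen plus backward elimination. The published analyses were run in SAS and Stata, so this Python/statsmodels version is only an illustration, with placeholder variable names.

```python
import pandas as pd
import statsmodels.api as sm

def reduced_model(df: pd.DataFrame, outcome: str, screened_vars: list,
                  keep_p: float = 0.05):
    """Backward elimination: starting from variables that passed the
    bivariate screen (p < 0.20), repeatedly drop the weakest predictor
    until every remaining variable has p <= keep_p."""
    vars_ = list(screened_vars)
    while vars_:
        X = sm.add_constant(df[vars_])
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= keep_p:
            return fit                  # reduced model reached
        vars_.remove(worst)             # eliminate the weakest and refit
    return None
```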

The homeless population age 65 and older is expected to triple by the year 2030. Thus, there is a need to design care that meets the needs of this growing, but underserved, population. We found that having a regular healthcare provider was associated with less unmet need. Having a regular provider can increase engagement because primary care providers may help identify needs and refer to care. In safety‐net systems, such as the ones in which our participants receive care, primary care providers may be the primary source of mental health treatment, by prescribing psychotropic medication. Primary care providers are responsible for an increasing proportion of prescriptions for psychotropic medication. In addition to prescribing medication for mental health conditions, primary care providers can refer patients to outpatient mental health counseling and treatment with specialist staff or providers. In some safety‐net settings, mental health services may be colocated with physical health services via collaborative care models (CCMs). CCMs can enhance information sharing and treatment plan collaboration and reduce barriers to care. CCMs are effective at reducing depressive symptoms and suicidal ideation among older adults. CCMs are cost‐efficient and can increase the capacity of resource‐constrained settings to provide care for patients with complex needs. Federally Qualified Health Centers can bill for both a medical and mental health visit on the same day, and recent changes to FQHC payment codes allow billing for behavioral health care management services in addition to the FQHC billable visit. Pay‐for‐performance programs link public hospitals’ payments to care coordination and mental health treatment metrics. It is possible that participants in our study were obtaining care in safety‐net primary care settings with CCMs.

Alternatively, the reduced odds of unmet need amongst those who had regular care providers could reflect other factors that we did not measure. For example, having a regular care provider may be a marker for increased system engagement and reduced barriers to any type of care. Those who seek primary care may be more organized, knowledgeable about safety‐net service availability, and have more access to transportation and other enabling resources. Having a case manager was associated with less unmet mental health and substance use treatment need. In the case management brokerage model, case managers help people navigate care systems and provide a linkage to services. In the clinical case management model, case managers serve as care providers and may provide both mental health and substance use services directly. In some models, such as intensive case management, case managers provide both brokerage and direct services. It is possible that the association between having a case manager and decreased odds of unmet need for both mental health and substance use services is a result of reverse causality; treatment programs may assign a case manager. We found that participants who first became homeless at age 50 or older had higher odds of unmet substance use treatment need. Those with late-onset homelessness had led more “typical” lives, with a higher likelihood of having been continuously employed and having been married or partnered. They were less likely to have had early onset of substance use problems; thus, they may have developed substance use problems more recently. These individuals may have been less aware of safety‐net resources in general or resources for substance use treatment in particular. Spending time in jail/prison in the past 6 months was associated with reduced unmet substance use treatment need. It is possible that participants initiated substance use treatment while incarcerated. However, most incarceration settings do not provide adequate treatment services. Alternatively, as a condition of release, participants may have been required to engage in substance use treatment. Our findings indicate there is a lack of community‐based pathways into substance use care. By offering medication‐assisted treatments, such as buprenorphine for opioid use disorder and naltrexone for alcohol use disorder, in primary care settings, primary care providers can begin to address this unmet need. However, there is a need for greatly expanded substance use services. Our study has several limitations. We did not use a full psychiatric diagnostic interview. However, screening measures are important empirical tools for the referral of individuals to mental health treatment, especially when integrated care is available. We did not ask participants where they received mental health services; thus, we cannot determine whether they received care colocated with primary care or treatment in mental health-specific settings. Due to the success of antiretroviral therapy and an increase in the incidence of HIV infection among older adults, the proportion of older persons living with HIV (PLWH) in the United States is rapidly growing. Therefore, it is important to evaluate physical and emotional health among the changing demographics of PLWH. One of the most prevalent psychiatric conditions among PLWH is major depressive disorder (MDD), with PLWH at a two- to seven-fold greater risk for depressive disorders compared to the general population.
PLWH have a higher prevalence of both MDD and subsyndromal depressive symptomatology than HIV− individuals of the same age or the general population. A multi-site cohort study of over 1,500 PLWH found a lifetime depressive symptom rate of 63%, and across multiple studies the diagnosis of lifetime MDD ranges from 22–54% in PLWH, compared to 4.9–17.1% in the general U.S. population.

These high rates of depression among PLWH represent a major public health concern, as depression has been linked to worse psychological and medical outcomes in PLWH, including lower reported quality of life, increased viral load, and a higher likelihood of mortality. Untreated depression in PLWH has also been related to increased cognitive complaints and worse reported daily functioning compared to PLWH without depression. These medical and psychological factors may be exacerbated in older PLWH, who are often burdened to a higher degree with HIV-related medical and psychological factors in conjunction with aging-related problems. Despite the high prevalence rates of depressive disorders among PLWH, depression is often underdiagnosed and inadequately treated within this population. Given the prevalence of depression among PLWH, it is vital to evaluate other co-occurring factors that may be associated with elevated depressive symptoms. Multiple studies have found an association between higher depressive symptoms and worse quality of life, even after controlling for demographic factors. PLWH with elevated depressive symptoms report lower mental and physical health-related quality of life (HRQoL), supporting the idea that depression affects multiple aspects of quality of life. However, there is a dearth of research regarding the association between depression and positive psychological factors, e.g., resilience, grit, and self-rated successful aging, among PLWH. Two studies have found an association between higher resilience and lower depressive symptoms among PLWH. Similarly, in PLWH, greater grit has been negatively associated with major depression. In older adults without HIV, lower levels of depressive symptoms have been associated with increased self-rated successful aging; however, few studies have been conducted to evaluate positive psychological factors and quality of life in relation to depressive symptomatology in PLWH compared to control participants. Given the increase in the population of older PLWH and the high comorbidity of depression among PLWH, assessing the relationship between depressive symptoms and other psychological factors across different age decades may provide insights for clinical interventions. Therefore, we hypothesized that: 1) PLWH aged 56–65 would have the highest proportion of elevated depressive symptoms compared to HIV− participants; and 2) elevated depressive symptoms would be associated with lower ratings of HRQoL and positive psychological factors across groups, with the strongest associations in the oldest PLWH. One hundred twenty-two PLWH and 94 HIV− individuals from the Multi-Dimensional Successful Aging Among HIV-Infected Adults study, conducted at the University of California, San Diego HIV Neurobehavioral Research Program and the UCSD Stein Institute for Research on Aging, participated in this study. The study was approved by the UCSD Institutional Review Board, and all participants provided written informed consent after the study was explained to them by a trained staff member. In order to enroll a representative cohort of participants, minimal exclusion criteria were applied; these included: 1) a neurologic condition other than HIV known to impact cognitive functioning, 2) psychotic disorders, and 3) positive urine toxicology on the day of testing for illicit substances other than cannabis.
Inclusion criteria were: 1) aged 36–65 years, 2) fluent in English, and 3) ability to provide informed consent. The present study provides unique findings on the interplay of depression, HRQoL, and positive psychological factors among middle-aged and older PLWH and HIV− individuals in a multi-cohort design. In our sample, PLWH were significantly more likely to report elevated depressive scores compared to HIV− individuals. This finding supports prior studies that have found PLWH endorse more depressive symptoms than HIV− individuals. Contrary to our hypothesis, the youngest cohort seemed to drive this finding, with a significantly larger proportion of PLWH reporting elevated depressive symptoms compared to HIV− individuals within this age group. That is, the proportion of elevated depressive symptoms did not differ by HIV status among the middle-aged and older age cohorts.
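The within-cohort comparison described here amounts to a test of proportions by serostatus inside each age band. The sketch below uses invented counts purely to show the shape of the analysis; it is not the study's data or necessarily its exact test.

```python
# Illustrative within-cohort comparison (counts are invented).
from scipy.stats import chi2_contingency

# Rows: PLWH, HIV-; columns: elevated depressive symptoms yes / no.
youngest_cohort = [[18, 22],
                   [6, 30]]
chi2, p, dof, expected = chi2_contingency(youngest_cohort)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # a small p -> proportions differ by HIV status
```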