Elevated levels of BI may contribute to risk for both anxiety disorders and substance use disorders

These high levels of sensitivity to uncertain threat in individuals with alcohol use disorder are also positively associated with self-reported coping motives for use. To further explore BI’s potential risk for, or protection against, substance use, studies have examined the effects of behavioral inhibition system (BIS) and behavioral activation system (BAS) levels on substance use outcomes. These studies have focused on undergraduate populations and yielded mixed results, with some studies showing no association between BIS levels and substance use, others showing a positive association between BIS levels and substance use problems, and still others showing a positive association between BIS levels and substance use but only at high BAS levels. Given these conflicting results, Morris et al. used a cross-sectional design to examine whether BIS and BAS were indirectly associated with alcohol problems through coping and conformity motives among undergraduate students. Results indicated that those high in BIS levels were more likely to experience alcohol problems due to greater coping and conformity motives for use. Importantly, this finding was independent of levels of BAS, and high BAS levels only further strengthened these relationships. Taken together, these results highlight BI’s nuanced pathways toward high or low risk for substance use and demonstrate the need to investigate additional factors potentially contributing to the relationships between BI and substance use. Ethnicity may be one such important moderator of the relationships between BI, anxiety, and substance use. Hispanic/Latinx (H/L) youth have consistently displayed increased rates of anxiety symptoms, anxiety disorders, and initial rates of substance use when compared to their non-H/L peers. The greater frequency and intensity with which H/L youth experience threats, including increased exposure to crime, community violence, chronic stress, and racial discrimination, may heighten levels of BI in H/L youth. In fact, H/L adults have displayed increased attentional biases to threat as compared to non-H/L adults. Cultural values may also further impact BI’s association with anxiety in H/L youth. Schneider and Gudiño showed a positive relationship between BI and anxiety symptoms in H/L adolescents and that this relationship was strongest for those H/L adolescents reporting high levels of Latino cultural values.

More specifically, H/L youth may also experience increased anxiety due to heightened social stigma around mental illness in the H/L community and factors related to collectivist cultural values, immigration, and acculturation that, especially in combination, put H/L youth at increased risk when compared to other racial/ethnic groups. It is possible that the combination of increased exposure to stressors and traumatic experiences, as well as the context of heightened social stigma and collectivist cultural values, may dissuade H/L youth from utilizing social support as a form of coping with their anxiety. Since high levels of BI may lead to alcohol problems through coping and conformity motives, and H/L youth may experience greater exposure to substance use given their high initial rates of substance use, H/L youth high in BI may be uniquely at risk for substance use. Therefore, H/L ethnicity may moderate the relationship between BI and substance use. However, it is presently unclear whether the strengths of the relationships between BI, anxiety, and substance use differ based on H/L ethnicity. Therefore, the present study will prospectively investigate the relationships between BIS scale scores, anxiety, and substance use, and whether H/L ethnicity moderates these relationships, in youth from the Adolescent Brain Cognitive Development (ABCD) Study at baseline, 1-year follow-up, and 2-year follow-up. Logistic regressions were conducted using the “glm” function in R to evaluate the impact of baseline BIS scores, and the interaction between baseline BIS scores and ethnicity, on past-year substance use at 1-year follow-up and past-year substance use at 2-year follow-up. Linear regressions were conducted using the “lm” function in R to evaluate the impact of baseline BIS scores, and the interaction between baseline BIS scores and ethnicity, on 1-year follow-up CBCL DSM-5 anxiety problems T-scores and 2-year follow-up CBCL DSM-5 anxiety problems T-scores. All analyses were conducted in version 4.2.3 of R. While the majority of youth did not report substance use at baseline, 0.50% of H/L youth and 0.42% of non-H/L youth did endorse use at baseline. At 1-year follow-up, 0.22% of the sample endorsed any substance use, and 0.74% endorsed any substance use at 2-year follow-up.
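In R, the regression models described above could be specified roughly as follows. This is a minimal sketch assuming a hypothetical data frame `dat` and hypothetical column names (bis_baseline, hispanic, su_1y, anx_t_2y), not the actual ABCD variable names.

```r
# Minimal sketch of the models described above, assuming a data frame `dat`
# with hypothetical column names (not the actual ABCD variable names).

# Logistic regression: past-year substance use at 1-year follow-up (0/1),
# predicted by baseline BIS, ethnicity, and their interaction.
m_su_1y <- glm(su_1y ~ bis_baseline * hispanic,
               data = dat, family = binomial)

# Linear regression: CBCL DSM-5 anxiety problems T-score at 2-year follow-up.
m_anx_2y <- lm(anx_t_2y ~ bis_baseline * hispanic, data = dat)

summary(m_su_1y)
summary(m_anx_2y)
```

In R, the `bis_baseline * hispanic` term expands to both main effects plus their interaction, matching the models described in the text.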

Baseline past-year use days was dichotomized into no past-year substance use versus any past-year substance use and included as a covariate. Past-year substance use at baseline was included in both models in which past-year substance use at follow-up was an outcome. Past-year substance use at 1-year follow-up was also included as a dichotomous covariate in the model predicting past-year substance use at 2-year follow-up. To control for their effects on anxiety and substance use, all models included the following covariates: mean baseline BAS scores, race, sex, age, highest parental income, and highest parental education. BI has been shown to concurrently and prospectively predict anxiety, while the association between BI and substance use has been mixed. It is possible that the relationship between BI and substance use varies by social and contextual factors. H/L youth in particular may show stronger relationships between BI, anxiety, and substance use. The present study evaluated the prospective relationships between BIS scores, anxiety, and substance use in youth across the 1- and 2-year follow-ups of the ABCD Study and whether these relationships differed by H/L ethnicity. Results indicated that baseline BIS scores prospectively and positively predicted anxiety symptoms at both 1- and 2-year follow-ups. The relationship between baseline levels of BIS and follow-up levels of anxiety did not differ by ethnicity. Baseline levels of BIS also prospectively predicted increased likelihood of substance use at 2-year follow-up, but only for H/L youth and not for non-H/L youth. No main effects of, or interactions between, ethnicity and BIS scores were found on substance use at the 1-year follow-up. The results showing that baseline BIS scores prospectively and positively predicted anxiety symptoms across the follow-ups are consistent with prior literature on the relationship between BI and anxiety. While prior studies have shown that H/L youth report higher levels of anxiety than non-H/L youth, the present study did not find any ethnic differences in the strength of the relationship between BI and anxiety. It is possible that ethnic differences in anxiety depend on the measure of anxiety. H/L youth are more likely to experience and report physiological symptoms of anxiety. The CBCL DSM-5 anxiety problems scale may therefore not best represent H/L youth’s experience of anxiety.
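The dichotomization and covariate adjustment described above might be implemented along the following lines; this is a sketch under the same assumption of hypothetical column names, not the study’s actual code.

```r
# Baseline past-year use days dichotomized into any use (1) vs. no use (0).
dat$su_baseline <- as.integer(dat$use_days_baseline > 0)

# Covariate-adjusted logistic model for substance use at 2-year follow-up.
m_su_2y <- glm(su_2y ~ bis_baseline * hispanic +
                 su_baseline + su_1y +            # prior-use covariates
                 bas_baseline + race + sex + age +
                 parent_income + parent_education,
               data = dat, family = binomial)
summary(m_su_2y)
```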

Additionally, other risk factors for anxiety may play a more important role in H/L youth’s experience of anxiety and better explain the ethnic differences in anxiety in youth. For example, individual differences in sensitivity to uncertain threat may be a stronger predictor of anxiety, particularly for H/L youth. Results related to the relationship between BIS scores and substance use varied across the follow-up years. The lack of association between BIS scores and likelihood of substance use at 1-year follow-up may be due to the fact that substance use at 1-year follow-up was infrequent and did not greatly increase from baseline. While overall substance use increased at 2-year follow-up in the sample, BIS scores predicted increased likelihood of substance use only in H/L youth. Similar to the results of Morris et al., these results were independent of levels of BAS scores. This finding is also consistent with results from Chen and Jacobson showing that H/L youth have the highest initial rates of substance use. H/L youth’s increased exposure to crime, community violence, chronic stress, and racial discrimination may also increase coping and conformity motives, which in turn may increase the likelihood of substance use. It is possible that high BI, in addition to, or in conjunction with, additional risk factors such as increased access to substances, reduced parental monitoring, and association with deviant peers, may uniquely contribute to risk for early use of substances in H/L youth. Further research is needed to understand whether and how such risk may change as rates of substance use change across development. The present study had several limitations and future directions. While the longitudinal nature of the ABCD Study allowed for the investigation of prospective and not just concurrent relationships between BIS scores, anxiety, and substance use, it is possible that the age of the sample at baseline and through the follow-ups is still too early to best capture these relationships. As BI is often first assessed in infancy or early childhood, the strength of the relationships between BI, substance use, and anxiety may vary across development and the lifespan. Relatedly, assessing BI via behavioral observation in infancy or early childhood may yield different results than the self-reported BIS scale scores utilized in the present investigation. Additionally, as rates of substance use increase across adolescence and early adulthood and use trajectories vary between ethnicities, the relationships between BIS scores, ethnicity, and substance use may vary based on the time point at which substance use is measured. These relationships may also vary across H/L youth and could differ based on factors such as time living in the US, social stigma, acculturation, language, nativity, and socioeconomic status. Lastly, the ABCD Study sample is not a clinical or treatment-seeking sample, and utilizing clinical samples may impact the strength of the relationships explored in the present study. Additional prospective studies are needed to understand how BIS scores and ethnicity relate to substance use as use increases in future follow-up years of the ABCD Study.

Additional research is also needed to understand how factors such as trauma exposure, stress, cultural values, discrimination, coping motives, and conformity motives may mediate the relationship between BI and substance use in H/L youth. In conclusion, high levels of BIS prospectively predict increased rates of anxiety symptoms in both H/L and non-H/L youth. However, BIS scores uniquely predict increased likelihood of substance use for H/L youth. Future studies are needed to further understand the mechanisms underlying the relationship between BI and substance use in H/L youth, which will provide a scientific basis to better inform prevention and intervention programs for the H/L community. Alcohol consumption accounts for 5.9% of deaths globally, or roughly 3.3 million deaths, each year. Although alcohol use alone represents a serious public health concern, high comorbidity rates have been observed at an epidemiological level between alcohol and nicotine use, such that 6.2 million adults in the United States endorsed both an alcohol use disorder and dependence on nicotine. Moreover, an individual is three times more likely to be a smoker if he or she is dependent on alcohol, and those who are dependent on nicotine are four times more likely to be dependent on alcohol. Given these statistics, it is evident that heavy-drinking smokers comprise a distinct sub-population of substance users that warrants unique investigation. Magnetic resonance imaging studies that have focused specifically on the effects that alcohol use may have on brain morphometry have investigated the relationship between drinking variables, such as lifetime duration of alcohol use or lifetime alcohol intake, and brain structure in current alcohol users. For example, Fein et al. found lifetime duration of alcohol use was negatively associated with total cortical gray matter volume in alcohol-dependent males, but not in light drinkers. Moreover, findings from Taki et al. suggest a significant negative association between lifetime alcohol intake and gray matter volume in the bilateral middle frontal gyri among non-alcohol-dependent Japanese men. A recent study, however, found no significant relationship between lifetime alcohol consumption and gray matter volumes in a sample of 367 non-alcohol-dependent individuals. Given these contrasting findings, it is uncertain whether quantity variables, such as lifetime alcohol intake or duration of alcohol use, account for many of the gray matter volume reductions observed with continued alcohol use. Various studies have implicated several different regions of gray matter atrophy in alcohol-dependent individuals, such as the thalamus, middle frontal gyrus, insula, cerebellum, anterior cingulate cortex (ACC), and several prefrontal cortical areas. Due to these heterogeneous results, a meta-analysis was conducted, which concluded that there were significant gray matter decreases in the ACC, left dorsal striatum/insula, right dorsal striatum/insula, and posterior cingulate cortex in alcohol-dependent users relative to healthy controls.

Rates of new infections due to sexual transmission among non-injection drug users are increasing

The results show that the majority of patients with opioid use disorder developed this disorder following the presence of chronic pain. A plausible explanation for some of these cases, although not directly demonstrated by the data collected, is iatrogenic causation via opioid medication prescriptions for pain. As hypothesized, this group, as well as the OUD First group and Same Time group, had greater rates of co-occurring psychiatric and medical conditions compared to the No Pain group. Patients with mental health and multiple pain problems often present with more physical and psychological distress, resulting in greater frequency of opioid prescribing in primary care practices. Some unexpected, though not totally surprising, differences emerged between the OUD First and Pain First groups. The OUD First group generally had higher rates of other substance use disorders, commensurate with rates in the No Pain group. This was not unanticipated, as both groups were early-identified addiction patients and may have more genetic and environmental predisposition to developing substance use disorders than did the Pain First group. The Pain First group had generally higher rates of co-occurring medical problems than did the OUD First group. Part of the explanation for this phenomenon may be related to the age of the Pain First group; that group was older and thus more prone to medical illness. It may also be possible that the Pain First group had a longer duration of pain, which contributed to declining health status. Several limitations should be considered when interpreting the results of this study. This was a study using medical record data. As in any research that uses data from medical records, variation in physician documentation and health insurance requirements may introduce bias in the data that are captured. The clinical data were initially recorded for clinical reasons and not specifically for research purposes, so the accuracy of the data may be less than that of data collected for research purposes. Further, as in other records-based research, we do not have information about patient diagnoses outside the system in our study and therefore are unable to ascertain whether an OUD diagnosis was truly new, only that it was the first OUD diagnosis in the healthcare system under study.

Participants were predominantly white patients living in the Los Angeles area of the United States, potentially limiting generalizability to patients in other regions. Our findings are dependent on the extent, accuracy, and validity of the data available in the EHR dataset. For example, because OUD diagnosis information was obtained from the EHR, we were not able to distinguish whether prescription or nonprescription opioids were used, or the route of administration. Both mislabeling of people who do not actually have OUD and under-recognition of true OUD diagnoses could affect the estimated prevalence of OUD in the sample. Since addiction can be under-recognized in the EHR, it is possible that a subset of patients may not have been identified as having an OUD; thus, some patients in the Pain First group may actually belong in the OUD First group. Despite these limitations, the study revealed some important findings. As would be expected, the majority of patients in this general healthcare or medical setting were white and had private insurance or the resources to pay for their healthcare, in contrast to black patients and members of Hispanic ethnic minorities without health insurance, who are more often treated in the public treatment system in Los Angeles. Nevertheless, comorbidities are common among patients in both settings. Somewhat surprising is that the rates of co-occurring chronic pain conditions and mental disorders appear even higher than most rates reported in the literature in connection with OUD, often heroin use disorder, treated in public settings. However, medical conditions among OUD patients treated in publicly funded programs are mostly based on self-report, whereas the present study allowed the delineation of specific rates of several major comorbid physical health and other disease diagnoses among OUD patients in a general medical setting. This study demonstrated that, regardless of demographic differences, OUD is similarly associated with high morbidity among patients in the private sector as in the public sector, putting them at high risk for mortality. The Pain First group demonstrated the highest rates of physical and mental health problems.

As discussed earlier, opioid prescriptions for pain in some of these individuals could have increased the risk for OUD and related problems. On the other hand, because screening for drug use is not mandated in primary care and some other medical settings, OUD may not be recognized and treated until very late in the addiction course, exacerbating the negative consequences of the disorder. Regardless of the potential causes, expanding training for medical professionals to improve screening, early intervention, support, and monitoring could prevent some of the excess morbidity associated with OUD. Furthermore, implementation of recent CDC guidelines addressing opioid prescribing for chronic non-cancer pain may provide additional risk mitigation in patients with chronic pain prior to their development of OUD. Comorbid OUD and chronic pain complicates treatment decision-making, predicts poor outcomes, and increases healthcare costs. Similarly, studies of healthcare claims data reveal that the most challenging and costliest OUD patients had high rates of preexisting and concurrent medical comorbidities and mental health disorders. The present study reveals the type and extent of comorbidities among OUD patients, results that support improving clinical practice by addressing the complex treatment needs of this population. Finally, studies utilizing the EHR data of patient populations with substance use disorders are important in identifying the scope of the problem and the extent of medical, mental health, and substance use comorbidities that necessitate better models of assessment and coordinated care plans. The human immunodeficiency virus (HIV) epidemic is shifting away from people who inject drugs (PWID), as most new cases of HIV in the U.S. are attributed to unsafe sexual practices. In 2014, sexual contact accounted for 94% of new HIV infections in the U.S. Among PWID, sexual risk behaviors are independently associated with HIV transmission and may be a larger factor in HIV transmission than injection behavior. Sexual risk behaviors that lead to the transmission of HIV and substance use are intertwined behaviors.

Stimulant use, in particular, is associated with greater sex risk behaviors, including having unprotected sex. Prescription medications, including sedatives and painkillers, are also associated with sexual risk behaviors. Moreover, moderate drinking and having an alcohol dependence diagnosis have been associated with an increased likelihood of having multiple sex partners. Having sex under the influence of drugs and/or alcohol enhances sexual risk behaviors and is more strongly associated with new HIV infections than is unprotected receptive anal intercourse with a partner of unknown HIV status. Substance use can negatively impact judgment and decision-making, leading to sexual risk behaviors such as trading sex for drugs or money, unprotected sexual intercourse, and unprotected sex with multiple partners. Alcohol users are likely to seek immediate rewards without considering the long-term consequences while under the influence. It is important to consider the trajectories of substance use and sexual risk behaviors concurrently in order to decrease the transmission of HIV. Substance use disorder (SUD) treatment, including methadone maintenance programs and outpatient drug-free settings, may be an important venue for prevention of sexual transmission. While enrollment in drug treatment reduces drug-related HIV risk behaviors, such as injection drug use, many substance users in treatment continue to engage in sex risk behaviors. As substance use is linked to sexual risk behaviors that can transmit HIV, it is possible that decreases in substance use may coincide with decreases in risk behaviors. Little is known about the temporal relationship between drug and alcohol use severity and high-risk sexual behaviors among individuals in substance use treatment. The current study extends past research by examining whether reductions in alcohol and drug use severity predicted reductions in sexual risk behaviors among men in SUD treatment who were followed for a six-month period. We hypothesized that decreases in drug and alcohol use at follow-up would coincide with decreases in sex risk behaviors. Participants were enrolled in a multi-site clinical trial of the National Institute on Drug Abuse Clinical Trials Network (CTN) designed to test an experimental risk-reduction intervention, Real Men Are Safe, a five-session intervention that included motivation enhancement exercises and skills training, against a standard one-session HIV education intervention that taught HIV prevention skills. The intervention was delivered by counselors in SUD treatment programs and approved by the local Institutional Review Boards. Details about this study have been published elsewhere. In the parent study, participation was restricted to men in SUD treatment who were at least 18 years of age, reported engaging in unprotected vaginal or anal intercourse during the prior six months, were willing to be randomly assigned to one of two interventions and complete study assessments, and were able to speak and understand English. HIV status was not assessed as part of this study. Exclusion criteria included gross mental status impairment, defined as severe distractibility, incoherence, or retardation as measured by the Mini Mental Status Exam or clinician assessment, or having a primary sexual partner who was intending to become pregnant over the course of the trial.

All participants enrolled from methadone maintenance needed to be stabilized in treatment for at least 30 days to ensure the greatest likelihood that they had achieved a stable dose of methadone before starting the intervention groups. Participants were examined prior to receiving the clinical intervention and six months following the intervention. All participants provided informed consent prior to participating. Participants were recruited from seven methadone maintenance and seven outpatient drug-free treatment programs in the U.S. that are affiliated with the CTN to participate in a research study on HIV risk-reduction interventions. These modalities were chosen because the programs’ counselors were trained to deliver the intervention. The treatment programs represented different geographic regions, population densities, and HIV prevalence rates. Programs were located in U.S. states that included California, Connecticut, Kentucky, New Mexico, New York, North Carolina, Ohio, Pennsylvania, South Carolina, Washington, and West Virginia; they treated patients in urban, suburban, and rural areas. Recruitment was accomplished through posters and fliers posted in clinic waiting rooms, announcements about the study to clinic patients at group therapy meetings, directly through a participant’s individual counselor, and at clinic “open houses” designed to introduce the study to clinic patients. Most participants from the drug-free outpatient clinics were recruited close to treatment entry to reduce the possibility of early dropout. Assessments were conducted at baseline, prior to randomization, and six months later. Alcohol and drug use severity were assessed with the Addiction Severity Index-Lite (ASI-Lite), a standardized clinical interview that provides problem severity profiles in seven domains of functioning by providing an overview of problems related to substance use, in addition to days of use. This instrument has been used in many studies of drug- and alcohol-abusing populations, and its reliability and validity are well established. Composite scores for each problem domain range from zero to one, with higher scores representing greater need for treatment. For the purposes of this study, only the composite scores for the alcohol and drug domains were analyzed. These composite scores are calculated based on the number of days of recent drug and alcohol use, problems arising from this use, and the desire for seeking treatment. We also report days of recent use of alcohol to intoxication, cannabis, heroin, cocaine, sedatives/hypnotics/tranquilizers, and other opiates. In bivariate analysis, we compared sex risk behaviors, recent substance use, and ASI drug and alcohol composite scores at baseline and follow-up to monitor changes over time. As the ASI drug and alcohol composite scores did not meet the conditions of normality, we used Mann-Whitney U tests and Spearman correlations. Next, we compared sex risk behaviors and ASI composite scores at baseline and at six-month follow-up. Wilcoxon signed-rank tests were used for continuous data and categorical variables with more than two levels, and McNemar’s tests were used for dichotomous categorical data. Multinomial multivariable logistic regression analysis was used to test the hypothesis that reductions in ASI alcohol and drug use severity composite scores would predict reductions in sexual risk behaviors.
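As a rough illustration of this analytic plan, the paired baseline-versus-follow-up comparisons could be run in R as sketched below; the data frame `d` and its column names are hypothetical stand-ins for the study’s variables.

```r
# Wilcoxon signed-rank tests for paired, non-normal ASI composite scores.
wilcox.test(d$asi_drug_base, d$asi_drug_6m, paired = TRUE)
wilcox.test(d$asi_alc_base, d$asi_alc_6m, paired = TRUE)

# McNemar's test for a dichotomous sex risk behavior measured at both waves.
mcnemar.test(table(d$unprotected_base, d$unprotected_6m))

# Spearman correlation between change in drug severity and change in risk.
cor.test(d$asi_drug_6m - d$asi_drug_base,
         d$sex_risk_6m - d$sex_risk_base,
         method = "spearman")
```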

A short description of the activity in question was included to help faculty decide on the point values assigned

During the first year, the requirements included only conference and module participation. The residency assessment requirement was subsequently enacted in the following year. Table 1 lists the final baseline education expectations required of faculty members. Before employing these education requirements, all faculty members were notified of the consequences of not fulfilling expectations, which included ineligibility for any academic incentive and an inability to participate in the voluntary academic relative value unit (ARVU) system. In May 2018, stage two began, which involved the creation of an ARVU system to encompass all other academic activities. It was decided that the ARVU system would be voluntary, but to participate, the baseline education expectations outlined in stage one had to be fulfilled. For the first step of this stage, the vice chair for education created a list of preliminary activities to be included in the ARVU system, such as teaching, lecturing, publications, grants, committee memberships, and leadership positions. These additional activities were ones in which faculty were already participating that aligned with the academic mission of the department, but had not been captured within the baseline education expectations, did not earn a clinical hours reduction from the department or institution, or were not an implicit part of a faculty member’s role based on his or her leadership position. The thought was that activities that earned a clinical reduction in hours were already being financially rewarded, and this system was designed to recognize activities not yet distinguished. An example includes fellowship activities, which were not included because fellowship directors have a reduction in clinical hours to support their leadership role. After the initial list was assembled, it was shared with a select group of 11 leaders within the department, including residency leadership, undergraduate medical education leadership, fellowship directors, the research division, and the pediatric emergency medicine division. The participants were selected due to their various leadership roles in the department, their dedication to scholarly achievement in their own careers, and the high priority they placed on these activities within their respective divisions.

These qualifications placed these faculty members in a prime position to help generate a comprehensive list of activities relevant to each division. After multiple discussions and written communications using a modified Delphi method, the group reached consensus on the activities to be included. The unique part of this project was the third step, which included a survey that was created and analyzed using Qualtrics software and distributed to a group of 60 faculty members across the department. These faculty members were chosen out of a total of 123 because they were identified as department members who regularly participated in the activities on the list created by the leadership group. Because these faculty members were the most active in these activities, they were in the best position to review the list and evaluate each activity fully. Furthermore, because it was decided that the ARVU system would be voluntary, they were deemed the faculty most likely to be invested in and use this new system. Finally, one of the goals of this mission was to get buy-in from faculty, as they were the most important stakeholders in this endeavor; this was achieved by giving them a voice and empowering them in the final steps of this project. The survey included all agreed-upon activities and asked faculty to rate each on a scale from one to four. The 11 faculty members who contributed to the final list of activities created these descriptions. Effort was defined by the time needed to commit to or prepare for a particular activity, the ongoing effort needed to sustain the activity if it involved a longer commitment than just one session, and whether the activity required a passive presence or more active participation. For example, activities that required a sustained effort included grant involvement, committee membership, and leadership positions. As expected, some subjectivity was involved in the voting for various reasons, such as the activity being one in which the responding faculty member personally participated, or differing opinions regarding how much preparation time might be needed for such things as a lecture.

To help reduce this bias, the survey was sent to many faculty members with different roles and responsibilities to obtain a consensus and to dilute idiosyncratic points of view. Furthermore, the chosen faculty members’ knowledge of and dedication to each activity, together with the descriptions provided, helped to further reduce bias in the points system. The survey also included free-text fields where faculty could input additional activities that they felt should be added to the list. Of the 60 faculty members surveyed, 49 responded and completed the survey in its entirety. The activities, ranked from highest to lowest based on mean score, with standard deviations, are presented in Table 2. The standard deviation was less than one for all activities included in the survey. The mean of each activity was translated into final points to be awarded in the ARVU system. Activities with higher means earned more points. Any activities that were similar in description and mean score were assigned the same number of final points. We introduced the final list and point system at a faculty meeting prior to implementation, and after this final feedback round, we launched the system in December 2018. The free-text responses were also reviewed, and these activities were added to the list and voted on by the faculty group to create the final list with points. The next steps for the project included creating a database where faculty could log their completed activities. We created a Google form listing all activities in the ARVU system, where faculty members could select the activity in which they participated. Each activity had an associated drop-down menu that asked for additional information, such as title, date, location, description, and proof of activity, with the ability to upload documents. We then created a dashboard in the analytics platform Tableau containing all activities. Statistics for the baseline educational expectations automatically loaded into the dashboard and could not be edited by faculty members. The ARVU activities logged into the Google form also fed directly into the dashboard for display. The full dashboard displayed each faculty member’s baseline education expectations, whether they had met requirements, the activities they had entered into the ARVU point system, and total points earned to date. Final points were earned after academic leadership reviewed, approved, and signed off on each submitted activity.
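The text does not specify the exact mean-to-points rule beyond higher means earning more points and activities with similar means sharing points; one hypothetical way such a translation could be implemented in R is sketched below, purely for illustration.

```r
# Hypothetical translation of mean effort ratings (1-4 survey scale) into
# ARVU points: round each mean to the nearest half point and scale by 10,
# so activities with close means receive the same final points.
survey <- data.frame(
  activity    = c("Lecture", "Committee membership", "Mentoring"),
  mean_rating = c(3.6, 2.4, 2.5)   # illustrative means, not real data
)
survey$points <- round(survey$mean_rating * 2) / 2 * 10
survey[order(-survey$points), ]    # e.g., 35, 25, 25: the last two tie
```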

Each month, the system automatically e-mailed a link to each individual’s dashboard, notifying faculty how many points they had earned to date and of any participation deficiencies. The medical school requires a teaching portfolio for faculty seeking promotion on the scholar track. This portfolio requires faculty to document their achievements in the following categories: teaching effort, mentoring and advising, administration and leadership, committees, and teaching awards. All ARVU activities were reviewed and categorized based on the elements of the teaching portfolio. These activities not only show up as itemized entries with points, but they are also grouped into the appropriate portfolio category and displayed on each individual faculty member’s dashboard. This allowed each faculty member to see how much scholarship they had completed within each of the teaching portfolio categories and which areas were lacking and deserved more attention. This provided faculty with a readily accessible repository of activities that could be transferred directly into the correct category of their teaching portfolio, facilitating tracking of the activities on which one needed to focus for promotion. A total of 123 faculty members were expected to participate in the baseline education expectations. At the end of the academic year in June 2018, 107 faculty had met requirements. Failure was defined as not attending the required number of conferences per year or not participating in the module system. Of the 16 who did not meet expectations, 94% signed up for conference modules to participate in specific activities, but none of them met the overall required conference attendance. Of the deficient faculty, five worked full time at 28 or fewer hours, 10 worked full time at more than 28 hours, and one was part time. Those who did not meet education expectations were notified and had their year-end AY 2017-18 financial incentive reduced to reflect this deficiency. We compared individual faculty members’ conference attendance in AY 2016-17 and AY 2017-18 to determine any changes after implementing the new expectations. Overall, faculty attended 21% more conference days after expectations were implemented compared to the prior year. Preliminary data for the following AY 2018-19 reveal that conference attendance increased by 15%. The number of resident assessments completed in AY 2017-18 among all faculty was 2837, compared to preliminary AY 2018-19 assessments of 4049, a 30% increase since expectations went into effect. To date, faculty across the department have logged a total of 1240 academic activities in the database. The distribution of points across categories is highlighted in Table 3, with most points earned through teaching activities at the medical school or through other scholarly work that does not necessarily fit into the other categories of the teaching portfolio. Leadership will review each faculty member’s individual records to determine whether they have met baseline education expectations.

The faculty who meet expectations will receive the set baseline incentive and have the potential to earn more financial incentive based on the number of points they have earned in the ARVU system. Once all the data are analyzed, the points will be converted into financial bonus amounts based on the number of faculty who are eligible and the amount of funds available. This project has resulted in preliminary positive effects on both education and documentation of scholarly work within our department. The first stage resulted in an overall increase in conference attendance and participation even prior to implementing the ARVU system. It is possible that these positive findings were a result of the academic incentive being dependent on meeting education expectations. However, in offline discussions with multiple faculty members, it appears that a shame factor also contributed to improved attendance. Multiple faculty expressed relief that many were being called out on their low attendance and participation and that faculty who had historically carried much of the teaching responsibility were now being recognized. In the same vein, resident assessments increased considerably in the second year, without any other changes being made to the system, and this increase was therefore likely a result of the new expectations. The increase in assessments does not necessarily mean better quality, and this will need to be evaluated going forward to determine the full impact. The improved participation in educational activities as a result of financial incentives or other measures is consistent with reports from other institutions and existing literature. There is a clear connection between faculty documentation of scholarly output and the ARVU system, as no system that allowed tracking of activities was in place previously. The increase in activities and documentation will need to be followed from year to year to draw conclusions about overall scholarly activity among individual faculty members and throughout the department. Unlike previous literature describing ARVU systems, our project has emphasized the ability to house activities in one place from which they can be transferred into a faculty member’s teaching portfolio, thereby further incentivizing the use of this system beyond financial rewards. We will continue to track baseline education expectations and the ARVU system across the department, as well as continuously seek feedback from faculty and make changes as needed. This process will continue to be refined over time based on faculty feedback and departmental and institutional priorities. The majority of faculty who did not qualify for the academic bonus last year worked more than 28 clinical hours per week, and thus time constraints may have affected compliance. To further probe this finding and facilitate educational commitments, we will solicit additional feedback from this group of faculty members to explore participation barriers that may be addressed in the future. We hope to follow the scholarly output of the department over time using the ARVU system as an estimate of faculty productivity.

Extension of critical signal functions for time-dependent conditions should be considered in selected basic-level facilities

Two hospitals could insert central venous catheters and gain intraosseous access, which is important in shock management. In terms of resources, only two of the four had a separate triage area for emergency patients. All four hospitals had an isolation room, an obstetric/gynecologic area, and a decontamination room. We surveyed hospitals on their reasons for non-compliance with signal functions, asking them to choose from among five possible causal factors. The first was training issues, taking the form of a lack of education. The second factor was related to the lack of availability of appropriate supplies, equipment, and/or drugs. The third pertained to management issues, such as the staff being unfamiliar with the functions, and cases where other equivalent procedures could have handled the conditions. The fourth factor was policy issues, referring to cases where the government or the facility itself does not allow for compliance with the signal functions. The fifth factor was designated as “no indication,” meaning that there was no patient group who needed the function. Supplemental Table 4 describes the reasons respondents provided on the survey for each unavailable signal function. Unavailability of appropriate supplies/equipment/drugs was the most common reason, as might be expected, and shortage of human resources was another causal factor. One intermediate hospital did not agree with the use of emergency signal functions for sentinel conditions and answered “no indication” as its reason for non-compliance. It is widely recognized that there is a huge burden caused by trauma and non-communicable diseases in low- and middle-income countries (LMICs), where capability for emergency care is believed to be suboptimal. Many studies have tried to assess the state of emergency care in the health facilities of LMICs. Due to accessibility issues, most studies examined teaching hospitals located in urban areas. Assessment tools were not standardized and were usually developed by the researchers themselves.

Domains for assessment were usually related to the availability of resources, and functional aspects were surveyed with qualitative measures, if at all. To our knowledge, this study is the first to survey urban and rural Myanmar hospitals using the Emergency Care Assessment Tool (ECAT), a newly developed objective tool for assessing emergency care in health facilities. Our study demonstrated that the performance of emergency signal functions in Myanmar hospitals is inadequate, especially in trauma care. Trauma care in LMICs has been regarded as a role for large hospitals, and direct referral to upper-level facilities is a common practice. Burke et al. found that lack of readily accessible equipment for trauma care and shortage of skilled staff were the main reasons for poor-quality trauma care in lower-level health facilities in LMICs. Another study pointed out the limited training opportunities for trauma management in LMICs. We found similar obstacles to trauma care in Myanmar hospitals, including the unavailability of items necessary for signal functions. Unlike other LMICs, Myanmar faces a singular geographic and demographic situation. Road conditions are poor. Almost 20 million people live in areas not connected by basic roads. The roads that do exist are unpaved and narrow, contributing to the overall lack of accessibility. The cause of this problem might be found in continuous armed conflicts. Since the independence of Myanmar in 1948, a continuing civil war has devastated the population and infrastructure of the rural areas, which has led to the deterioration of the health status of the country. In areas dominated by violence, residential zones are located away from road access, and the level of medical care is behind the times. Financial support is also lacking. For example, a referral and transport from Matupi Hospital to an adjacent upper-level facility takes as long as 16 hours during rainy seasons due to road damage. In this situation, timely management of patients in critical condition is virtually impossible, and demands for higher levels of emergency care in basic-level facilities can be raised. Moreover, the results of our study show that some intermediate-level hospitals could not provide resuscitation for critical patients due to the lack of advanced airway management, mechanical ventilators, and defibrillation. Imbalances in the quality of emergency care in both basic- and intermediate-level facilities should be addressed carefully.

However, in Myanmar’s special situation, where highway infrastructure is lacking and long transport times are a problem, the capacity to administer emergency medical care at large hospitals should be established based on skilled labor and resources. Ouma et al. emphasized that all countries should reach the international benchmark of more than 80% of their populations living within a two-hour travel time to the nearest hospital. Although this cannot be realized in the near future, measures to alleviate accessibility problems can be applied. Thorough gap analyses to address existing challenges in remote regions will be helpful for planning. In this regard, ECAT should be validated to include a time factor, such as the referral time to the nearest upper-level facility. We identified the following urgent issues in need of remediation: 1) improvement of trauma-related signal functions in basic-level facilities; 2) improvement of trauma and critical care-related signal functions in intermediate-level facilities; and 3) implementation of a comprehensive nationwide survey to uncover emergency care deficiencies in rural areas, with emphasis on the time required for referral to higher-level facilities. Our suggestions to address the issues identified in our study can be summarized as relating to the reinforcement of infrastructure and human resources within each level of facility. In addition, prehospital care and care during inter-facility transportation should receive special attention considering the unique context of Myanmar, with its dispersed residences and extremely long transport times. There has been an effort to establish formal emergency medicine (EM) in Myanmar. In 2014, the Emergency Medicine Postgraduate Diploma course provided by Australia graduated Myanmar medical officers. These emergency providers will be an imperative asset in setting up a modern emergency medical care delivery system in Myanmar, although most of them will practice in advanced-level facilities. Measures to build the capacity to respond to medical emergencies in rural areas should be pursued in Myanmar. There have already been efforts to improve first-aid skills among local healthcare workers who have a high degree of understanding of the local context, and to employ them as community emergency responders.

These local healthcare workers are well informed about the population, hygiene, disease distribution, and the geographical and cultural characteristics of the area; thus, they are able to provide essential first aid and find appropriate health facilities for referrals. This practice has been expanded into the concept of out-of-hospital emergency care (OHEC), which refers to a wide range of emergency treatments, from the process of recognizing an emergent care situation, to initial emergency treatment outside the hospital, to transport to the hospital. The establishment of OHEC has played a role particularly in LMICs by reducing mortality rates by 80%, especially in trauma cases. Since 2000, several organizations have implemented the trauma training course (TTC) program with non-physician clinicians in Eastern Myanmar. The program comprises various skills for carrying out the initial treatment of trauma, taught through simple simulations and feedback. The findings indicated that survival rates improved significantly among major trauma patients following the implementation of this program. We recognize that some skills covered in the TTC, such as surgical airway management, would be relatively dangerous for health workers to perform in the field, and believe that development and implementation of a training program focused on the operation of emergency signal functions would be more practical for the rural context. Those who are trained in this program could act as prehospital emergency care providers and also aid basic-level facilities in filling the functional gaps identified in this study. In addition to the above suggestions, a national or provincial strategic plan for reinforcing emergency care in rural areas of Myanmar should be established and implemented. Following a thorough investigational survey, essential resources for each level of health facility should be supplemented. Public education to recognize emergency conditions is another area to be strengthened. In many LMICs, including Myanmar, folk remedies are still commonly attempted before people seek medical attention, especially in the field of obstetrics and gynecology. Recognizing the need for emergency care is crucial because it is the first step leading the patient to the emergency medical care system. Community education should play an important role in preventing delays in the detection of emergency situations. Traditional medicine providers have been the first to participate in this training thus far, and it has been reported to be effective. Point-of-care ultrasound (POCUS) has emerged as an essential diagnostic tool in emergency medicine. Several studies have demonstrated that a structured curriculum is both feasible and effective in training emergency physicians (EPs) to obtain and accurately interpret images, with test characteristics approaching or even exceeding those of dedicated radiology-performed scans. However, less is known about the penetrance of POCUS into daily EP practice.

The emergency department poses unique challenges to the implementation of diagnostic POCUS not present in other specialties with broad adoption of POCUS, such as cardiology, critical care, and obstetrics: 1) the time spent with an individual patient is limited compared to other specialties; 2) ED settings vary dramatically between academic, community, rural, and urban practices, and each environment has its own unique challenges with respect to availability of POCUS and training of clinicians in ultrasound; and 3) the breadth of POCUS applications in the ED is considerably greater than in other specialties. Guidelines from the American College of Emergency Physicians endorse 12 core applications. The degree of experience necessary to obtain competency in image acquisition and interpretation, while not clear, appears to be highly variable between these applications. As a result, few EPs maintain competency in all 12 applications without further postgraduate fellowship training. This leads to a general reluctance to perform and rely on some POCUS exams, as EPs question the need to maintain competency in certain applications. Indeed, a survey of EPs in California found that most EPs do not use POCUS, and that EPs in academic environments use POCUS more regularly than their community counterparts. The challenges posed above apply both to established EPs and to residents in training who are establishing practice patterns. Despite near-universal incorporation of ultrasound into resident training, a survey of recent residency graduates found limited use in daily clinical practice. This suggests that dedicated ultrasound training in most EM residency programs in North America progresses residents to the intermediate level, where they are able to effectively acquire and interpret images, but not to the level of the expert who is able to seamlessly incorporate the procedural skill into practice. We hypothesized that a number of perceived barriers may be leading to a gap in deliberate, on-shift practice, which is preventing trainees from advancing to expert levels. The goal of this study was to assess and address relevant barriers to POCUS performed on shift by residents at a single, three-year EM residency program. As such, the study had two phases. We first performed a voluntary residency-wide survey to assess perceived attitudes and barriers to on-shift use of POCUS. Next, we performed an intervention to address the primary barrier, namely the perceived lack of a proper charting and reporting policy. We conducted the study at an ED with an annual volume of 65,000 patients, which hosts a three-year EM residency program. The residency trains a total of 36 residents, with 12 residents per year. The study site uses the HealthLink/EPIC electronic medical record, and all point-of-care ultrasounds are wirelessly uploaded to a middleware product. Quality assurance of all scans submitted for review is performed by ultrasound fellowship-trained EPs who rotate on a weekly basis. At the time of the study, ultrasound training consisted of a four-hour introductory ultrasound course at the start of residency training, a four-week mandatory ultrasound rotation during the first year, and quarterly didactics with simulation and hands-on training during regularly scheduled mandatory conference. In addition, ultrasound fellowship-trained faculty offered biweekly three-hour sessions consisting of didactics, image review, and bedside scanning.
These sessions were mandatory for the first-year resident on the dedicated POCUS rotation, as well as for the two second- and third-year residents on a dedicated month of community ED practice. The study was performed as part of an ongoing quality improvement program, not requiring institutional review board review at the study institution. At the beginning of the study, a departmental best-practice, systematic ultrasound documentation workflow was disseminated to faculty attending physicians. This workflow included saving ultrasound examinations performed or supervised by a faculty member credentialed in the relevant application.

Acute heart failure is a gradual or rapid decompensation in heart failure requiring urgent management

For each threshold number of visits in the preceding six months, the unadjusted risk of a return visit was more than double among frequent visitors as compared to non-frequent visitors. The remainder of the analysis uses two or more previous visits as the threshold defining a frequent visitor, unless otherwise specified. This retrospective analysis of almost seven million patient visits found that the number of recent previous ED visits was the strongest predictor of an ED return visit. This finding held true across multiple cutoffs defining frequent use, and also under both univariate analysis and a multivariate model including patient, visit, hospital, and county characteristics. Along with recent frequent use, public insurance and three diagnoses were associated with an increased risk of a return visit. This suggests that our understanding of short-term revisits could be informed by considering frequency of ED use. A parallel thread in the literature has investigated frequent users and interventions designed to decrease ED use. Previous studies have evaluated predictors of ED revisit using patient-level data such as age, sex, race, insurance status, and diagnosis at initial ED visit, as well as hospital-level data. Surprisingly, the relationship between frequent ED use and risk of revisit after discharge is poorly characterized. Further, there is no consensus on what defines “frequent,” with definitions ranging from 2–12 visits per year. We had the striking finding that even one previous visit increased the risk of return by a clinically significant margin. This finding held true even when accounting for patient, visit, hospital, and community characteristics. Our definition focused on visits within the previous six months because other work has shown that episodes of frequent ED use are usually self-limited, which suggests that the recent past is more relevant to current health and risk of a short-term return visit.
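A minimal sketch of this threshold analysis in R, assuming a hypothetical data frame `visits`; the covariates shown stand in for the patient, visit, hospital, and county characteristics described in the text.

```r
# Frequent visitor defined as >= 2 ED visits in the preceding six months.
visits$frequent <- as.integer(visits$prev_visits_6m >= 2)

# Multivariate logistic model for a short-term (e.g., 3-day) return visit.
m <- glm(return_3d ~ frequent + age + sex + insurance + diagnosis +
           hospital_size + county_income,
         data = visits, family = binomial)
summary(m)

# Sensitivity analysis: repeat across thresholds defining "frequent."
for (k in 1:6) {
  visits$freq_k <- as.integer(visits$prev_visits_6m >= k)
  fit <- glm(return_3d ~ freq_k, data = visits, family = binomial)
  cat("threshold", k, ": unadjusted OR =",
      round(exp(coef(fit)["freq_k"]), 2), "\n")
}
```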

A second, related finding is that the threshold used to define frequent visitors is arbitrary with respect to risk of a return visit. In the hope of informing the wide range in the literature on the number of visits or length of time used to define frequent users, we considered our definition of frequent user in relation to risk of a return visit. We had the surprising finding that any number of previous visits used to define frequent versus non-frequent ED users predicted an increased risk of revisit. Given that the reason to label certain patients as frequent visitors is often to identify them for interventions, future work may consider an outcome-based definition of frequent users and define the term “frequent” with a qualifier, e.g., with respect to propensity to revisit after a visit, risk of becoming a persistent frequent user, or risk of death. As with existing literature, we transformed the number of previous visits from a continuous variable to a binary one. This has the disadvantage of losing some information, but is standard in the literature regarding frequent ED use and can easily be applied in the midst of clinical practice. Our sensitivity analysis demonstrated that any threshold was significantly associated with return visits, suggesting that knowing whether a patient had four versus three previous visits would provide marginally more information than simply knowing the patient had at least two previous ED visits. As with the definition of frequent user, the time window used to define a return visit is somewhat arbitrary. While the risk of a return visit is highest on the first day following the ED visit, the risk gradually decreases and, as found previously by Rising et al., there is no clear timeline that defines a return visit. This finding may suggest that something other than inadequate care at the index visit is the driving factor for most short-term revisits, and that both frequent use and revisits may simply be proxies for certain patients with increased healthcare-seeking behavior. Further complicating this issue is that patients may be instructed to return to the ED for a re-evaluation.

Thus, an ED in a setting with limited outpatient resources might appear to give poor care as measured by revisits when in fact it serves to provide follow-up care that patients otherwise would not obtain. Despite the variation in the literature and thus our broad range of models, we consistently found that the strongest predictor of a revisit is a high number of previous visits. This finding held true in our sensitivity analysis using different thresholds for the number of previous visits and also different numbers of days after the index visit. The observation that previous visits predict future visits may seem obvious or mechanical, but it does not necessarily follow that a patient with one or two visits in the prior six months would be at double the risk of a revisit within three days. Further, that this relationship was stronger than any other patient, hospital, or community characteristic is an important finding that has been overlooked in the literature regarding revisits. In fact, it appears that the literature on frequent visitors and the literature regarding revisits have to this point largely functioned in parallel and have not yet begun to inform each other. Whether frequent users are merely frequently ill people, and whether sicker patients are at increased risk of short-term revisits, are questions that deserve future research. Likewise, future work should investigate the extent to which patients are frequent users because they received poor care or face limitations in their ability to obtain outpatient resources, the extent to which revisits are avoidable, and the degree to which frequent use persists over time. Understanding the extent to which patients can obtain follow-up with primary care, referrals to specialists, and further evaluation such as advanced imaging, a cardiac stress test, or even a wound check is essential to understanding why patients return to the ED. The data for this study were obtained from a single multi-state physician partnership and do not necessarily generalize to other providers or provider groups, or to other populations. However, the sample was large and spans many cities and rural areas across several states, and it includes a broad set of hospital ownership types, a wide range of hospital sizes, and both teaching and non-teaching hospitals. This source of data may lead to a biased sample with respect to patient population, hospital characteristics, and provider characteristics. In particular, the income distribution is narrower than the distribution for the entire U.S., so the patient population could have a lower proportion of low- and high-income patients than is typical for the U.S.

We addressed these potential sources of bias by controlling for patient demographics, patient insurance, and local income; hospital characteristics including volume and a performance metric; and clinician degree. Second, because not all hospitals within a region were observed, measures of frequent visitors and repeat visits may underestimate the actual numbers of frequent visitors and repeat visits, as patients may have gone to another ED either prior to or after the observed index visit. This limitation is typical of this research, and in this dataset patients were linked across hospitals, although this linkage was limited to the hospitals served by this company. Thus, it is unknown whether patients had an unobserved revisit at another ED, or whether what was considered an index visit actually represented a revisit after an initial visit at another ED. Next, we were unable to distinguish between planned and unplanned return visits. Thus, a patient who is instructed to return for a check over the weekend to ensure their illness is improving, for example, would appear to be a revisit, but this should not imply that their initial treatment was inadequate or inappropriate in any way. Research using administrative datasets, such as HCUP, likewise suffers from this limitation. Finally, as with related research, this study does not identify the extent to which high rates of frequent visits and revisits are driven by patient factors, ED care, or non-ED healthcare resources. This analysis was limited in its ability to examine patient psychosocial attributes or local resources, which are likely to contribute to ED visits and revisits, although we did consider proxies for access to care: patient insurance and community-level factors such as income and the number of hospitals in the county. Heart failure (HF) covers a large spectrum of disease, ranging from mild exacerbations with gradual increases in edema to cardiogenic shock. HF affects close to six million people in the United States and increases in prevalence with age.6-11 Currently, the emergency department initiates the evaluation and treatment of over 80% of patients with acute heart failure (AHF) in the U.S. As the population ages, increasing numbers of patients with HF will present to the ED for evaluation and management. However, making the correct diagnosis can be challenging due to the broad differential diagnosis associated with presenting symptoms and variations in patient presentations. Over one million patients are admitted for HF in the U.S. and Europe annually. In the U.S. population, people have a 20% risk of developing HF by 40 years of age. HF is more common in males until the age of 65, at which time males and females are equally affected. Patients with HF average at least two hospital admissions per year.

Among patients who are admitted with AHF, over 80% have a prior history of HF, referred to as decompensated heart failure. De novo HF is marked by no previous history of HF combined with symptom appearance after an acute event. Mortality in patients with HF can be severe, with up to half of all patients dying within five years of diagnosis. Other studies have found that post-hospitalization mortality rates at 30 days, one year, and five years are 10.4%, 22%, and 42.3%, respectively. AHF expenditures approach $39 billion per year, a figure expected to almost double by 2030. Normal cardiac physiology is dependent on appropriately functioning ventricular contraction, ventricular wall structural integrity, and valvular competence. At normal functional status, a person's stroke volume (SV) is approximately one milliliter per kilogram for every heartbeat. SV is dependent upon preload, afterload, and contractility. In patients with HF, left ventricular dysfunction can be due to impaired LV contraction and ejection, impaired relaxation and filling, or a combination of both. An alternate way of defining this is by the effect on ejection fraction (EF). HF with preserved EF refers to patients with an EF > 50%, while HF with reduced EF refers to patients with an EF < 40%. Borderline preserved EF is defined by HF with an EF of 41-50%. The most common form is HF with reduced EF, which is primarily related to a decrease in the functional myocardium. Additional causes include excessive pressure overload from hypertension, valvular incompetence, and cardiotoxic medications. HF with preserved EF occurs due to impaired ventricular relaxation and filling and accounts for 30-45% of all HF cases. This form of HF results in increased end-systolic and diastolic volumes and pressures and is most commonly associated with chronic hypertension, coronary artery disease, diabetes mellitus, cardiomyopathy, and valvular disease. Both systolic and diastolic HF can present with similar symptoms due to elevated left-sided intracardiac pressures and pulmonary congestion. Right ventricular failure most commonly results from LV failure. As the right side of the heart fails, increased pressure in the vena caval system elevates pressure in the venous system of the gastrointestinal tract, liver, and extremities, resulting in edema, jugular venous distension, hepatomegaly, bloating, abdominal pain, and nausea. High-output HF is associated with normal or greater-than-normal cardiac output and decreased systemic vascular resistance. The associated decrease in afterload reduces arterial blood pressure and also activates neurohormones, which increase salt and water retention. Diseases that may result in high-output HF include anemia, large arteriovenous fistulas or multiple small fistulas, severe hepatic or renal disease, hyperthyroidism, beriberi disease, and septic shock. In AHF, peripheral vascular flow and end-organ perfusion decrease, causing the body to compensate by neurohormonal activation, ventricular remodeling, and release of natriuretic peptides. These mechanisms are chronically activated in HF, but worsen during acute exacerbations, resulting in hemodynamic abnormalities that lead to further deterioration.
Continued progression can result in a critical reduction in end-organ blood flow, leading to severe morbidity and mortality. Patients with HF are classified into one of four classes, primarily determined by daily function, using the New York Heart Association, American College of Cardiology/American Heart Association, or European Society of Cardiology guidelines.
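As a simple illustration of the EF cutoffs described above, the sketch below encodes the classification in code. The function name is our own, and the handling of an EF of exactly 40% (grouped with reduced EF here) is an assumption, since the text does not specify that boundary.

classify_hf_by_ef <- function(ef_percent) {
  if (ef_percent > 50) {
    "HF with preserved EF"               # EF > 50%
  } else if (ef_percent >= 41) {
    "HF with borderline preserved EF"    # EF 41-50%
  } else {
    "HF with reduced EF"                 # EF < 40%; exactly 40 grouped here by assumption
  }
}
classify_hf_by_ef(35)   # returns "HF with reduced EF"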

We conducted the study at a refugee clinic and at resettlement and post-resettlement agencies

Past 30-day HSD use at follow-up was significantly lower for intervention patients. While the control group reported no change in HSD use over time, the intervention group reported a significant unadjusted mean reduction of 4.4 days from baseline to follow-up. Among the 47 participants who provided urine samples, those in the intervention group were less likely than controls to test positive for their HSD. A logistic regression analysis for testing HSD positive that controlled for self-reported baseline HSD use confirmed that intervention group participants were less likely than those in the control group to test HSD positive at follow-up. In the intent-to-treat linear regression model with multiple imputation of missing values, intervention patients reduced their HSD use an average of 4.5 more days in the past month than did controls, controlling for baseline HSD use, high school graduation, number of children under 18 living with them, and having been sexually assaulted before they were 18 years old. The complete sample regression with the same covariates for the 51 patients with follow-up data produced similar results, with intervention patients reducing their HSD use an average of 5.2 more days than controls. Finally, among the 32 patients in the complete sample who reduced their HSD use by a day or more, the 28 patients who reported risky alcohol use reduced that use by an average of 0.3 days, and the 17 patients who disclosed smoking reduced their tobacco use by an average of 2.5 days; neither change was significant. In this study of mostly Latino primary care patients of an FQHC, the QUIT brief intervention group reported a 40% decline in mean HSD use, corresponding to an adjusted 4.5-day reduction in reported past-month HSD use by 3-month follow-up compared to controls; there was no compensatory increase in use of alcohol or tobacco. This degree of drug use reduction is clinically meaningful according to norms for reductions in marijuana use in clinical trials. The trial has clinical significance in that its findings could apply to the 12% of our study clinic patients who screen positive for risky drug use, and it represents significant potential public health impact for the 20 million risky drug users in the US if replicated in other clinic populations (2012; U.S. Department of Health and Human Services Office of the Surgeon General, 2016).
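For readers interested in what an intent-to-treat model with multiple imputation can look like in practice, a minimal sketch follows. It assumes the mice package and uses illustrative variable and data frame names (quit_df, hsd_days_followup, group, and so on); none of these are the study's actual code or field names.

library(mice)

imp  <- mice(quit_df, m = 20, seed = 1)   # multiply impute missing follow-up values
fits <- with(imp, lm(hsd_days_followup ~ group + hsd_days_baseline +
                       hs_graduate + n_children_under18 + assaulted_before_18))
summary(pool(fits))   # pooled estimates; the 'group' coefficient is the intervention effect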

The findings are important given the limited number of randomized trials of screening and brief intervention for risky drug use in primary care, and notable in that they affirm the positive findings of the original QUIT trial. Some distinctive characteristics of the QUIT intervention that may contribute to its greater success than other brief intervention protocols designed to address risky drug use in primary care include: use of primary care clinicians to deliver brief advice messages about drug use; regular weekly "learning community" meetings among health coaches and the study team; incorporation of quality-of-life issues patients spontaneously raised as barriers to drug use reduction into telephone coaching sessions; embedding of drug use consent and patient assessment questions within a larger behavioral health paradigm to conceal the study's drug focus and minimize potential contamination of the control group; and patient self-administered assessment of drug use on tablet computers. The original QUIT study showed a significant reduction in 30-day risky drug (HSD) use, with a 3.5-day reduction in the completer analysis for intervention compared to control patients. The positive outcomes in all of these different clinics, bolstered by the positive outcomes from this pilot replication, suggest that QUIT may prove effective and implementable in a variety of settings and across a variety of patient demographics. Limitations of the study include: generalizability of the sample to other Latino populations; potential for social desirability bias to influence the primary outcome of self-reported drug use reduction, which we tried to minimize through patients' self-administration of survey items on a tablet computer; loss to follow-up; and a small sample size that limits subgroup analysis. Over three million refugees have been resettled in the United States since Congress passed the Refugee Act of 1980.1 In 2015, there were nearly 70,000 new refugee arrivals, representing 69 different countries.1 Refugees undergo predeparture health screening prior to arrival in the U.S., and are typically seen by a physician for an evaluation shortly after arrival.

Refugees are resettled in areas with designated resettlement agencies that assist them with time-limited cash assistance, enrollment in temporary health coverage, and employment options. Refugees are initially granted six to eight months of dedicated Refugee Medical Assistance, which is roughly equivalent to services provided by a state's Medicaid program. Following this period, refugees are subject to the standard eligibility requirements of Medicaid.3 It is important to highlight the differences between a refugee, an asylum seeker, and a migrant, as this study focuses specifically on refugees. A refugee is an individual who has been forced to leave his or her home country due to fear of persecution based on race, religion, nationality, membership in a social group, or political opinion. Refugees undergo robust background checks and screening prior to receiving designated refugee status. They are relocated only after undergoing this screening process, and they have legal protection under the Refugee Act of 1980 given their status as refugees. An asylum seeker, on the other hand, is an individual who has fled his or her home country for similar reasons but has not received legal recognition prior to arrival in the U.S. and may only be granted legal recognition if the asylum claim is reviewed and granted. As a result, asylum seekers do not have access to services such as Refugee Medical Assistance, time-limited cash assistance, or similar employment opportunities. Migrant is a general term and refers to an individual who has left his or her home country for a variety of reasons. Prior studies have shown differences in utilization of the emergency department by refugees in comparison to native-born individuals. In Australia, refugees from non-English-speaking countries are more likely to use ambulance services, have longer lengths of stay in the ED, and are less likely to be admitted to the hospital. A study conducted in the U.S. evaluated refugees one year post-resettlement and demonstrated that language, communication, and acculturation barriers continue to negatively affect their ability to obtain care. These data suggest that there may be unidentified opportunities for improving the acute care process for refugee populations; however, little is known about how refugees interface with acute care facilities. Therefore, the goal of this study was to use in-depth qualitative interviews to understand barriers to access of acute care by newly arrived refugees and to identify potential improvements from refugees and community resettlement agencies. The refugee clinic was located at a tertiary care hospital in a city in the Northeast U.S. The clinic has been in operation for approximately five years and has cared for approximately 200 refugee patients yearly. At the time of the study, the clinic received referrals from one of the three resettlement agencies in the city. Refugee patients were seen within 30 days of arrival. Most refugees were seen for screening evaluations and transitioned to clinics near their homes after two to three clinic visits. Refugee patients were eligible for this study if they were over 18 years of age, had capacity to consent, and had no hearing difficulties. We excluded refugees if they were deaf, unable to answer questions from an interpreter, or had acute medical or psychiatric illnesses. In the city in which the study was performed, there are three main resettlement agencies and approximately three well-known post-resettlement agencies.
Resettlement agencies are responsible for receiving new refugee arrivals and assisting individuals with support for three to six months after arrival.

Resettlement employees assist refugees with establishing housing, employment, transportation, primary care, and language services. After three to six months, refugees are able to seek additional assistance at post-resettlement agencies. Post-resettlement agencies provide additional support in the form of support groups, language services, cultural activities, and case management. Employees were eligible for this study if they worked at a resettlement or post-resettlement agency, were over 18 years of age, and had no hearing difficulties. This was an in-depth interview study using semi-structured, open-ended interviews. Separate interview guides for refugees and resettlement agency employees were developed by all members of the study team. Study team members included the following: an emergency physician and investigator with expertise in qualitative methodology; an internal medicine physician with many years of experience working at the refugee clinic; a third-year emergency medicine resident with three years of experience working bimonthly at the refugee clinic; a second-year EM resident with no experience at the refugee clinic; an MD/PhD student with three years of experience working at the refugee clinic and content expertise in refugee studies; and an undergraduate student with two years of experience working at the refugee clinic. The study team composition allowed for a range of expertise, with individuals who had experience working with refugees and those who did not. Questions were vetted among all members of the study team and revised to ensure that content reflected the goals of the study. Prior to interviewing resettlement and post-resettlement employees, a resettlement/post-resettlement employee interview guide was developed using the same process. Refugee interviews were conducted in person at a refugee clinic, and refugees were recruited during the study period when an interviewer was present during clinic hours. Refugees were asked to participate if a room and an interpreter were available. If the aforementioned conditions were met, all refugees awaiting clinic appointments or available after their appointment were asked to participate. All of the refugees who were asked agreed to consent and participated. Interviews with refugees were conducted by two members of the study team using the Refugee Interview Guide and lasted approximately 30 minutes. A phone interpreter was used for verbal consent prior to participation and for the interview. Demographic information was collected about each participant. After interviews were completed for refugee patients, a second phase of semi-structured, open-ended interviews was conducted in person at local resettlement and post-resettlement agencies in the region. We obtained a list of employees involved in case management, health coordination, and program development for refugees/immigrants from resettlement healthcare teams. These employees were contacted via email with information regarding the study and a consent form. Of 13 employees contacted, 12 participated. Employee interviews were conducted at their respective agencies, and verbal consent was obtained prior to participation. Interviews with resettlement employees were conducted by two members of the study team using the Resettlement/Post-resettlement Employee Interview Guide and lasted approximately 20 minutes. This study was approved by the institutional review board at the University of Pennsylvania. A total of 16 interviews were completed with refugees.
Participants had a mean age of 34, and 50% had completed high school. Countries of origin were Syria, Bhutan, the Democratic Republic of the Congo, Burma, Sudan, Iraq, Iran, and the Central African Republic. Most refugees seen at this refugee clinic undergo medical screening within one to two months of arrival. A few of the patients remained at the clinic for long-term follow-up. All refugees required an interpreter, and all interpretation was done with phone interpreters. A total of 12 interviews were completed for resettlement and post-resettlement agencies. The employees interviewed represented two resettlement agencies and two post-resettlement agencies. We identified several barriers to access of acute care facilities by newly arrived refugees. The process by which refugees seek care, and the barriers at each step, can be visualized in Figure 1. Our principal findings identify barriers throughout the process of accessing acute care for newly arrived refugees. Overall, refugees face uncertainty when accessing acute care services because of prior experiences in their home countries and limited understanding of the complex U.S. healthcare system. This unfamiliarity with the U.S. healthcare system drives refugees to rely heavily on resettlement employees as an initial point of triage or, if they are very sick, to call 911. At the resettlement agency, employees express concern about identifying the appropriate level of care to which to send a refugee client.

The main outcome variable for these analyses was COVID-19 testing and was assessed via self-report

The COVID-19 pandemic, caused by the SARS-CoV-2 virus, took the world by surprise in early 2020 and resulted in unprecedented disruptions to normal life as measures were put in place to control the spread of the deadly virus. COVID-19 swept across the United States and Canada, overwhelming health services and health infrastructure as cases exploded, hospitalizations exceeded capacity, and businesses and public programs like schools were forced to shut their doors, go online, or go on hiatus. The physical and social impact was enormous: death rates grew exponentially, and the healthcare system was pushed beyond capacity in the face of enormous caseloads and a virus that spread rapidly. As schools, clinics, social venues, and otherwise non-essential businesses shut their doors, the most vulnerable in our society, including those marginally housed, those experiencing substance use, and/or those with mental health issues, were even further marginalized as a result of lost services and support. Early in the pandemic, signs of increases in substance use raised concerns that substance use would skyrocket. Overdoses, and particularly overdose deaths, hit unprecedented levels, partially because of the reduced availability of emergency medical services. People living with HIV (PLWH), and particularly those who are not virally suppressed, were considered to be at heightened risk for serious COVID-19 consequences because of being immunocompromised and experiencing a high prevalence of comorbidities. Among such individuals are people who use drugs (PWUD) and those with mental health problems. Therefore, understanding patterns of who did not obtain COVID-19 testing among PWUD and PLWH provides insight into how those with intersectional challenges may have experienced systematic exclusion from public health initiatives during the COVID-19 pandemic. This may shed light on strategies to enhance access to testing among marginalized populations who experience health inequities in a future public health crisis.

To assess the impact of the COVID-19 pandemic among those confronting multiple challenges such as substance use and HIV, a consortium of NIDA-funded cohorts entitled the Collaborating Consortium of Cohorts Producing NIDA Opportunities (C3PNO) launched a specially designed survey administered three times during the pandemic. The C3PNO COVID-19 survey module contained specific measures for PWUD and PLWH. These data provide insight into the compelling question of changes in levels of substance use among those enrolled in the cohorts, many of whom have used substances long term, have been in substance use treatment, and report heavy use. Moreover, the results may demonstrate the extent to which critical COVID-19 public health interventions, such as testing for the virus, reached PWUD and PLWH. The C3PNO consortium was uniquely positioned to identify impacts of the COVID-19 pandemic on PWUD and PLWH, as its cohorts follow large numbers of such individuals across North America. The analyses described herein focus on the prevalence of, and factors associated with, COVID-19 testing among PWUD and PLWH who participated in the first two rounds of the C3PNO COVID-19 module. C3PNO was established in 2017 by the National Institute on Drug Abuse to enhance data-sharing opportunities and mechanisms to facilitate collaborative research efforts among NIDA-supported cohorts that examine HIV/AIDS in the context of substance misuse. Details of the participating cohorts and other methodology have been described previously, but in sum, the C3PNO consortium is comprised of nine NIDA cohorts located in major cities throughout North America with a combined sample size of up to 12,000 active participants. Some cohorts had initial enrollment criteria requiring that participants be people who inject drugs, while others enrolled young men who have sex with men. The consortium links a wide range of behavioral, clinical, and biological data from diverse individuals at high risk for HIV or living with HIV participating in the cohorts. Starting in May 2020, the consortium launched a survey to examine patterns of substance use, substance use disorder treatment, and utilization of HIV prevention and care services in the midst of the COVID-19 pandemic.

Specific domains collected as part of the survey included the overall impact of the COVID-19 pandemic and related governmental/societal restrictions on day-to-day life, adoption of COVID-19 prevention practices, COVID-19 testing and symptomatology, and changes in substance use behaviors, as well as reports of pandemic impact on access, quality, and pricing of illicit substances. The survey also included various measures of mental health, including anxiety, as well as access to medical care and substance use treatment. At the time of this study, COVID-19 testing was available and recommended mostly for those with symptoms defined by the CDC at the time as the most predictive of COVID-19 infection, including: fever, feeling feverish, chills, repeated shaking with chills, muscle aches or pain, runny nose, sore throat, new or worsening cough, shortness of breath, nausea or vomiting, headache, abdominal pain, diarrhea, and sudden loss of taste or smell. In the survey module, current symptom reports were collected. Eight of the nine C3PNO cohorts participated in both of the first two rounds of data collection, but one was unable to share its data; all nine cohorts joined for later rounds. Each participating cohort contacted a minimum of 200 of its members to participate in the survey; members were eligible if they were previously enrolled in one of the eight participating C3PNO cohorts, had participated in a recent study visit, were English- and/or Spanish-speaking, and were willing and able to complete the survey remotely. Cohort investigators were encouraged to enroll participants who had a recent history of substance use as determined by self-report at their most recent visit. The survey was either self-administered through a web-based survey for participants who had computer and internet access or interviewer-administered by telephone for those participants without online access. The survey took approximately 20 minutes to complete, and participants were remunerated for their time. The study was approved by the institutional review boards of the consortium cohorts, and each participant provided informed consent for study participation. There were 4035 responses to the survey across all eight cohorts that participated in both of the first two rounds and collected fully analyzable data; 3762 were available for this analysis because the Canadian cohorts confronted restrictions on sending data and could not be included.

The analyzed data for this manuscript include data from 2331 individuals who completed one or both of the first two rounds of the C3PNO COVID-19 module. Participants were offered participation in each round of the survey regardless of participation in the first round. This resulted in 1431 individuals responding to both rounds. The first round was conducted from May to November 2020 and the second round from October 2020 through April 2021. The median time between surveys for participants who completed both rounds was 4.1 months. The time to implement the survey was a window period starting from when the programmed survey was made available for each round. Intervals overlap because some cohorts had not finished their first round when the first cohorts to implement started their second round. The survey was implemented in a very challenging time for research administration, with entire components of universities shut down for months, delaying aspects of survey conduct such as research reviews and procedures for compensation. Therefore, the cohort research teams did the best they could to administer the survey when available and to reach the requested minimum number of participants, and there was a range in how long it took them to collect the data. Moreover, the implementation of the survey resulted in different time frames required by cohort research teams to complete data collection. Those that sent links to web-based questionnaires and had participants who were responsive to these completed the rounds relatively quickly. Other cohorts had many older participants who had to be interviewed by telephone; these teams required much more time to reach participants and conduct the interviews. We implemented and conducted this research in a unique and challenging time in history that required some flexibility and innovation in data collection. This means that, because of the geographic range captured in these surveys, participants in different cities responded during different phases of the pandemic. Finally, given the burdens on cohort staff of implementing this study in addition to their other work, systematic data on refusal rates could not be collected. Specifically, participants were asked if they were tested for COVID-19 and, if yes, whether they had ever tested positive. Participants were also asked if they had symptoms of COVID-19. Participants were considered to have recent substance use if they reported using any of the following substances in the past month: methamphetamine, cocaine, heroin, fentanyl, or misused prescription opioids.

Alcohol, tobacco, and cannabis use were also assessed but are not the focus of these analyses. Univariate analyses provided descriptive statistics for the sample overall and by COVID-19 testing status. Comparisons of demographics, substance use and frequency of use, as well as HIV status, by COVID-19 testing status were based on t-tests, chi-square methods, and other non-parametric tests as appropriate, while adjusting for the effect of the subject. Factors associated with the outcome of interest were assessed using regression analysis with generalized estimating equations (GEE) in order to account for within-subject correlations. This large survey of COVID-19 testing experience among cohorts that follow people living with HIV and people who use drugs across North America provides a snapshot of how the COVID-19 pandemic, in its first year, may have impacted those who live on the margins of society. This sample included some of the most socially vulnerable in North America: over half were unemployed before the pandemic, about one third were food insecure, many were people of color, and almost half were living with HIV. Many of these individuals are not in the formal economy, which may partially explain why only half of them were tested for COVID-19, the entry point into COVID-19 prevention. It is also of concern that, across surveys, those reporting COVID-19 symptoms did not have higher testing rates than those who did not report symptoms, even though early in the pandemic, when testing was limited by the supply of tests, the recommendation and priority was to test those who were symptomatic. Testing continues to be a pillar of COVID-19 control, and it was especially so before vaccine availability, when these surveys were implemented. Our findings show that lack of COVID-19 testing was associated with markers of social marginalization such as unemployment. Because many workplaces began offering testing to their employees, this may explain why the unemployed had less opportunity for testing. Fewer Black participants reported testing, and this parallels what has been seen in studies of the more general population. This may be related to historical mistrust of the healthcare system and negative experiences of Black individuals with public health interventions that have previously exploited or misled them. Another key finding was that fewer PLWH reported COVID-19 testing than HIV-negative people in these cohorts. That may be because our PLWH were older, more were Black, and more reported frequent substance use, representing intersectional marginalization that may have kept them from accessing a COVID-19 test. The finding that fewer PLWH accessed COVID-19 testing suggests that COVID-19 services may have been less available in the places where they mostly access care, such as their HIV treatment clinics, because early in the pandemic there was less in-person HIV care. Moreover, it is possible that because PLWH have weakened immune systems, they may have been aware of their heightened vulnerability and so vigilantly practiced masking and social distancing. While the substance use reported in the month before the survey does not seem high for cohorts of people who use drugs, it must be clarified that our study defined substance use by use of highly addictive, i.e., "hard" or street, drugs such as methamphetamine, heroin, cocaine, fentanyl, and prescription opioids.
Use of only alcohol, tobacco, or cannabis was not included in this analysis, as the focus was on how the pandemic affected those who use highly addictive illicit substances, use that often becomes a dominant part of their lives.
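As an illustration of the GEE approach described above, a minimal sketch follows using the geepack package. The model formula, covariates, and variable names (c3pno_df, tested_covid, participant_id, and so on) are hypothetical placeholders rather than the study's actual specification.

library(geepack)

c3pno_df <- c3pno_df[order(c3pno_df$participant_id), ]  # geeglm expects rows grouped by id
gee_fit <- geeglm(tested_covid ~ hiv_status + recent_substance_use +
                    employed + race + age,
                  id = participant_id, data = c3pno_df,
                  family = binomial, corstr = "exchangeable")  # within-subject correlation
summary(gee_fit)   # coefficients with robust (sandwich) standard errors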

Studies of typically developing adolescents show increases in FA and decreases in MD

The simplest hypothesis is that B cell activation is associated with a down-regulation of the surface CB2 receptor. Alternatively, it has been reported that CB2 can form heterodimers with the CXCR4 chemokine receptor and has chemotactic properties that result in the selective homing of CB2+ and CB2− B cells to different regions of lymphoid follicles [Basu 2013, Coke 2016]. We addressed the potential linkage between B cell activation and CB2 expression using two different approaches. CB2 is known to be expressed by B cell lymphomas and has been described as an oncogene [Jorda 2003, Perez-Gomez 2015]. We therefore examined a human B cell lymphoma cell line, SUDHL-4, that had been described to express an activated B cell phenotype. Consistent with a linkage between activation state and CB2 expression pattern, this cell line and two other lymphoma lines that exhibited an "activated" phenotype were found to exhibit high intracellular CB2 but no surface staining. To more directly test the linkage between B cell activation and CB2 expression pattern, we employed an in vitro model in which naïve mature human B cells obtained from umbilical cord blood were activated with a combination of receptor signaling and supporting cytokines [Ettinger 2005]. After 5 days in culture, the initial homogeneous population of naïve B cells had evolved into two obvious subsets: one that retained the naïve B cell phenotype and another that exhibited an activated B cell phenotype. When examined for the expression of CB2, there was a clear distinction between these two subsets, with a loss of extracellular CB2 only on the activated subset. Collectively, the evidence presented in this report points to a clear linkage between the acquisition of an "activated" B cell phenotype and specific regulation of CB2 protein expression.

With limited information regarding the nature of intracellular CB2, we employed a combination of confocal microscopy and marker co-localization studies to evaluate the distribution and location of intracellular CB2. It exhibited a diffuse but punctate pattern within the cytoplasm. This appearance was the same regardless of the type of cells studied: primary peripheral blood B cells, the SUDHL-4 cell line, or the 293T/CB2-GFP line that we had previously described [Castaneda 2013]. Using the 293T/CB2-GFP line, we compared the distribution of CB2 staining to the staining of mitochondrial and lysosomal markers. The sparse and well-defined features of lysosomal staining did not match and were not pursued further. On the other hand, the punctate but diffuse pattern of mitochondrial staining shared some similarities with the pattern observed with CB2. This was an interesting observation given our prior findings that THC can disrupt cell energetics and mitochondrial transmembrane potential in airway epithelial cells in a CB2-dependent manner [Sarafian 2008]. Along the same lines, Bernard and associates identified a similar effect in neuronal cells but ascribed this effect to intracellular CB1, which localized to mitochondria in their studies. However, there was no obvious co-localization between the CB2 receptor and mitochondrial markers when directly examined by dual staining and confocal microscopy. In summary, we can conclude that the expression of CB2 in human leukocytes appears to be specifically regulated with respect to cellular location, the cell lineage being studied, and the state of B cell activation and differentiation. The presence of an activated phenotype on B cells is specifically associated with down-regulation of the surface CB2 receptor, a feature identified in B cells recovered from human tonsils and also observed in vitro when naïve B cells were stimulated to acquire an activated phenotype. Given the capacity for cell surface CB2 to form heterodimers with chemokine receptors and promote migration and homing, and given the location of CB2+ and CB2− B cells in different compartments within lymphoid follicles [Basu 2013, Coke 2016], it is possible that modulating surface CB2 during B cell activation plays an important role in trafficking.

The capacity for T cells, dendritic cells, and malignant B cells to respond to cannabinoids in a CB2-dependent manner has been well characterized [McKallip 2002, Roth 2015, Yuan 2002], yet these cells do not express CB2 on the cell surface. The logical conclusion is that intracellular CB2 must also be capable of mediating ligand-induced signaling and biological consequences. With the recent report by Brailoiu et al., there is now direct evidence for this. Given the high membrane solubility of cannabinoids, we hypothesize that the presence of CB2 at different locations within a cell provides a mechanism for cells to link receptor activation to different signaling and biologic consequences, resulting in an expanded functional heterogeneity of cannabinoids. The intracellular locations of CB2 and the specific roles of different receptors in biologic function remain to be determined but will likely be very informative in understanding cannabinoid biology. Adolescence is a time of subtle yet dynamic brain changes that occur in the context of major physiological, psychological, and social transitions. This juncture marks a gradual shift from guided to independent functioning that is mirrored in the protracted development of brain structure. Growth of the prefrontal cortex, limbic system structures, and white matter association fibers during this period is linked with more sophisticated cognitive functions and emotional processing, useful for navigating an increasingly complex psychosocial environment. Despite these developmental advances, increased tendencies toward risk-taking and heightened vulnerability to psychopathology are well known within the adolescent milieu. Owing in large part to progress and innovation in neuroimaging techniques, substantial new information on adolescent neurodevelopment is breaking ground. The potential of these methods to identify biomarkers for substance problems and targets for addiction treatment in youth is of significant value when considering the rise in adolescent alcohol and drug use and the decline in perceived risk of substance exposure. What are the unique characteristics of the adolescent brain?

What neural and behavioral profiles render youth at heightened risk for substance use problems, and are neurocognitive consequences of early substance use observable? Recent efforts have explored these questions and brought us to a fuller understanding of adolescent health and interventional needs. This paper will review neurodevelopmental processes during adolescence, discuss the influence of substance use on neuromaturation as well as probable mechanisms by which these substances influence neural development, and briefly summarize factors that may enhance risk-taking tendencies. Finally, we will conclude with suggestions for future research directions. The developmental trajectory of grey matter follows an inverted parabolic curve, with cortical volume peaking, on average, around ages 12–14, followed by a decline in volume and thickness over adolescence. Widespread supratentorial diminutions are evident but show temporal variance across regions. Declines begin in the striatum and sensorimotor cortices, progress rostrally to the frontal poles, and end with the dorsolateral prefrontal cortex, which is also late to myelinate. Longitudinal charting of brain volumetry from 13–22 years of age reveals specific declines in the medial parietal cortex, posterior temporal and middle frontal gyri, and the cerebellum of the right hemisphere, coinciding with previous studies showing these regions to develop late into adolescence. Examination of developmental changes in cortical thickness from 8–30 years of age indicates a similar pattern of nonlinear declines, with marked thinning during adolescence. Attenuations are most notable in the parietal lobe, followed in effect size by medial and superior frontal regions, the cingulum, and the occipital lobe. The mechanisms underlying cortical volume and thickness decline are suggested to involve selective synaptic pruning of superfluous neuronal connections, reduction in glial cells, decrease in neuropil, and intra-cortical myelination. Regional variations in grey matter maturation may coincide with different patterns of cortical development, with allocortex, including the piriform area, showing primarily linear growth patterns, compared to transition cortex demonstrating a combination of linear and quadratic trajectories, and isocortex demonstrating cubic growth curves. Though the functional implications of these developmental trajectories are unclear, isocortical regions undergo more protracted development and support complex behavioral functions. Their growth curves may reflect critical periods for the development of cognitive skills as well as windows of vulnerability for neurotoxic exposure or other developmental perturbations. In contrast to grey matter reductions, white matter across the adolescent years shows growth and enhancement of pathways. This is reflected in white matter volume increase, particularly in fronto-parietal regions.

Diffusion tensor imaging (DTI), a neuroimaging technique that has gained widespread use over the past decade, relies on the intrinsic diffusion properties of water molecules and has afforded a view into the more subtle micro-structural changes that occur in white matter architecture. Two common scalar variables derived from DTI are fractional anisotropy (FA), which describes the directional variance of diffusional motion, and mean diffusivity (MD), an indicator of the overall magnitude of diffusional motion. These measures index relationships between signal intensity changes and underlying tissue structure, and provide descriptions of white matter quality and architecture. High FA reflects greater fiber organization and coherence, myelination, and/or other structural components of the axon, and low MD values suggest greater white matter density. These trends continue through early adulthood in a nearly linear manner, though recent data suggest an exponential pattern of anisotropic increase that may plateau during the late teens to early twenties. Areas with the most prominent FA change during adolescence are the superior longitudinal fasciculus, superior corona radiata, thalamic radiations, and posterior limb of the internal capsule. Other projection and association pathways, including the corticospinal tract, arcuate fasciculus, cingulum, corpus callosum, superior and mid-temporal white matter, and inferior parietal white matter, show anisotropic increases as well. Changes in sub-cortical and deep grey matter fibers are more pronounced, with less change in compact white matter tracts comprising highly parallel fibers, such as the internal capsule and corpus callosum. Fiber tracts constituting the fronto-temporal pathways appear to mature relatively later, though comparison of growth rates among tracts comes largely from cross-sectional data that present developmental trends. The neurobiological mechanisms contributing to FA increases and MD decreases during adolescence are not entirely understood, but examination of the underlying diffusion dynamics points to some probable processes. For example, decreases in radial diffusivity (RD), diffusion that occurs perpendicular to white matter pathways, suggest increased myelination, axonal density, and fiber compactness, but have not been uniformly observed to occur during adolescence. Similarly, changes in axial diffusivity (AD), diffusion parallel to the fibers' principal axis, show discrepant trends, with some studies documenting decreases, and others increases, in this index. Decreases in AD may be attributable to developing axon collaterals, whereas increases may reflect growth in axon diameter, processes which are both likely to occur during adolescence. Technical and demographic differences, such as imaging parameters, inter-scan intervals, age range, and gender ratios, may account for divergent findings. Both grey matter volume decreases and FA increases in frontoparietal regions occur well into adolescence, suggesting a close spatiotemporal relationship. Changes in tissue morphometry are attributable to synaptic proliferation and pruning as well as myelination.
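For readers unfamiliar with how these scalars relate to the diffusion tensor, the short sketch below computes MD, FA, AD, and RD from the tensor's three eigenvalues using their standard definitions; the eigenvalues shown are made-up illustrative values, not data from any study discussed here.

ev <- c(1.6e-3, 0.4e-3, 0.3e-3)   # example eigenvalues lambda1 >= lambda2 >= lambda3, in mm^2/s
md <- mean(ev)                                         # mean diffusivity (MD)
fa <- sqrt(3/2) * sqrt(sum((ev - md)^2) / sum(ev^2))   # fractional anisotropy (FA), 0 to 1
ad <- ev[1]                                            # axial diffusivity (AD), parallel to the fiber
rd <- mean(ev[2:3])                                    # radial diffusivity (RD), perpendicular
c(MD = md, FA = fa, AD = ad, RD = rd)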
Diminutions in gray matter density and concomitant brain growth in dorsal parietal and frontal regions suggest an interplay between regressive and progressive changes, and the coupling of these neurobiological processes is associated with increasingly economical neural activity. The increasing divergences in male and female physiology during adolescence are observed in sex-based differentiation of brain structure. Male children and adolescents show larger overall brain volumes, and proportionally larger amygdala and globus pallidus sizes, while females demonstrate larger caudate nuclei and cingulate gyrus volumes. Although cortical and sub-cortical grey matter volumes typically peak 1–2 years earlier in females than males, male children and adolescents show more prominent grey matter reductions and white matter volume increases with age than do females. The marked increase in white matter that occurs during adolescence is most prominent in the frontal lobe for both genders, though male children and adolescents have significantly larger volumes of white matter surrounding the lateral ventricles and caudate nuclei than females. Adolescent males also demonstrate a significantly higher rate of change in white matter volume, particularly in the occipital lobe. Despite steeper white matter volume changes in males, maturation of white matter micro-structure may occur earlier in female than male adolescents.

The CB2 receptor is mostly found in peripheral tissue and mediates the immune regulating components of cannabinoids

Marijuana is a term that describes the dry leaves, stems, flowers, and seeds of the Cannabis sativa plant. It has been prohibited in the United States by federal law since the 1937 Marijuana Tax Act, and the US Drug Enforcement Administration has classified it as an illegal Schedule I drug. Schedule I drugs are considered the most dangerous class of drugs, with potential for severe psychological and physiological dependence. Other Schedule I drugs include heroin, ecstasy, and lysergic acid diethylamide. Marijuana can be smoked in hand-rolled cigarettes, in cigars that have been emptied of their contents and refilled with marijuana, or through vaporizers to avoid inhaling smoke; it can also be consumed in edibles or brewed as tea. While the use of marijuana for medicinal, religious, and recreational purposes dates back 5000 years, the discovery of cannabinoid molecules and our understanding of how they interact with our endogenous human cannabinoid signaling system represent a relatively recent area of investigation [Aizpurua-Olaizola 2016, Herring 1998, Pertwee 2006]. Marijuana is composed of over 400 different compounds, including more than 100 different cannabinoids [Aizpurua-Olaizola 2016, Greydanus 2013]. Cannabinoids are the primary bioactive constituents of marijuana, and the main psychoactive cannabinoid is delta-9-tetrahydrocannabinol (THC). The THC content of the average illicit marijuana cigarette is reported by the Potency Monitoring Project to comprise approximately 12% by weight [ElSohly 2014]. Other cannabinoids also found in the Cannabis sativa plant include cannabidiol (CBD), cannabigerol, and cannabinol, but these are not considered to play a role in the psychoactive effects associated with marijuana consumption. Upon combustion and smoking, marijuana also liberates an array of polycyclic aromatic hydrocarbons, including the known carcinogens benzopyrene and benzanthracene, which are components of the particulate phase of smoke [Roth 2001]. Toxic substances such as carbon monoxide, hydrogen cyanide, and nitrosamines are also released as part of the gas phase of marijuana smoke.

While all of these released constituents may have biologic and/or toxic consequences, the focus of this thesis work is on the cannabinoid constituents and specifically on the biology of the human type 2 cannabinoid receptor (CB2). The development of synthetic cannabinoids eventually led to the discovery of a human endogenous cannabinoid system that is comprised of at least two arachidonic acid-derived endocannabinoids, 2-arachidonoylglycerol and anandamide, their biosynthetic and degradative enzymes, and two cannabinoid receptors, CB1 and CB2 [Bisogno 2005, Cabral 2015]. The endocannabinoid system has been found to play a role in immunomodulation, metabolic regulation, bone growth, pain, cancer, and psychiatric disorders [Aizpurua-Olaizola 2016, Kleyer 2012]. Endocannabinoids are thought to be enzymatically produced and released "on demand" [Cabral 2015]. They bind and activate the seven-transmembrane G protein-coupled receptors type I and type II and are linked to intracellular signaling cascades including adenylyl cyclase, cAMP, mitogen-activated protein kinase, and intracellular calcium [Howlett 2002, Maccarrone 2015]. The cannabinoid receptors CB1 and CB2 share 44% amino acid homology and bind THC with relatively equal affinity [Cabral 2015, Shire 1996]. They are expressed in most organ systems, and their activation by marijuana smoke can have wide-ranging health effects [Grotenhermen 2003, Volkow 2014, Turcotte 2016]. The CB1 receptor is mostly found in the central nervous system and mediates the psychoactive effects associated with cannabinoids. CB1 has been described to play a role in memory, pain regulation, stress response, and the regulation of metabolism [Busquets Garcia 2016, Cabral 2015]. The CB2 receptor, in contrast, is found predominantly in the periphery: the highest concentration of CB2 is found in immune cells, with lower concentrations found in bone cells, keratinocytes, adipocytes, and renal tissue [Basu 2011, Mackie 2006]. The CB2 receptor is suggested to play a role in immunomodulatory mechanisms that regulate inflammation and in host defense [Basu 2011, Herring 1998, Turcotte 2016]. Despite the widespread use of marijuana and its increasing legalization across multiple states in the U.S., relatively little is known about the effects of cannabinoids on human immunity.

Cannabinoids have been described to have anti-inflammatory effects on leukocytes [Cabral 2015, Roth 2015, Volkow 2014]. In mouse studies, the CB2 receptor has been found to play a role in the responsiveness to infectious pathogens and in immune homeostasis [Newton 1994, Newton 2009]. In human studies, alveolar macrophages from the lungs of marijuana smokers have been found to be deficient in the production of cytokines and nitric oxide and in the mediation of bacterial killing [Baldwin 1997, Roth 2002, Shay 2003]. Human T cells activated in the presence of THC have also been found to exhibit a T helper type 2 (Th2)-skewed pattern of cytokine production with limited proliferation [Yuan 2002]. With the highest levels of expression on immune cells, the CB2 receptor is suggested to mediate the immune-regulating effects of cannabinoids [Cabral 2015, Roth 2015, Volkow 2014, Turcotte 2016]. In support of this statement, there are several studies done with animal models, including CB2 knock-out mice [Liu 2009, Turcotte 2016, Ziring 2006]. Although murine CB2 and human CB2 share 82% amino acid homology in their coding regions, there are significant differences in the non-coding regions of their respective genes, suggesting that some inter-species differences likely exist with respect to regulation and expression [Liu 2009]. This potential difference argues that a combination of animal models and human studies is required to understand the regulation and function of the CB2 receptor with respect to the immune system. Nonetheless, CB2 knock-out mice have been reported to exhibit higher levels of leukocyte recruitment and an over-production of pro-inflammatory cytokines [Buckley 2008]. While these mice do not exhibit obvious morphological differences, they have also been noted to have abnormalities in the formation of several T cell and B cell subsets within lymphoid organs, indicating that the CB2 receptor is vital for the formation of the T cell and B cell subsets involved in immune homeostasis [Turcotte 2016, Ziring 2006]. An increase in IgE production and allergic diseases would be expected in a model that is driven towards Th2 skewing [Agudelo 2008].

Surprisingly, THC-treated CB2 knockout mice showed increased levels of serum IgE production, suggesting a role for the CB2 receptor in the regulation of IgE [Newton 2012]. Immune suppression was also observed when THC was administered to tumor-bearing mice, which promoted tumor growth in a CB2-dependent manner [Zhu 2000]. Translating in vivo and in vitro experiments performed in animal or cell line models into an understanding of the biology in humans is also challenging because the route of consumption, amount of exposure, and pattern of use in marijuana users are entirely different, and because humans are often concurrently exposed to tobacco, alcohol, and other substances that might affect the immune system in additional or different ways. The previously described work suggests that cannabinoid receptors may be centrally involved in immune function, and therefore the CB2 pathway may represent an attractive target for cannabinoid-based drugs. Cannabinoids have been promoted as a new class of drugs with the potential for beneficial anti-inflammatory, immunoregulatory, and anti-fibrotic effects [Atwood 2012, Pacher 2011, Turcotte 2016]. CB2 agonists have already been shown to reduce inflammation through the p38-MK2 pathway [Turcotte 2016]. There are currently multiple FDA-approved cannabinoid-based medications. Marinol and Cesamet have been prescribed for the treatment of chemotherapy-induced nausea and vomiting. Marinol has also been prescribed as an appetite stimulant and as a treatment for glaucoma, as it lowers intraocular pressure. In July of 2016, Syndros™, an orally administered liquid formulation of dronabinol, also received FDA approval. It has been prescribed to treat anorexia-associated weight loss in AIDS patients and chemotherapy-induced nausea and vomiting. Also, Sativex, a sublingual spray that is composed of equal concentrations of THC and CBD, has received FDA approval to proceed with phase III clinical trials for the treatment of pain in patients with advanced cancer. It is also prescribed for the treatment of spasticity due to multiple sclerosis. CBD is of great therapeutic interest since it has been shown to have anti-emetic, anti-inflammatory, and anti-psychotic effects [Bergamaschi 2011, Cabral 2015, Turcotte 2016]. No effects of CBD have been observed on blood pressure, pulse, body temperature, or gastrointestinal and psychological function [Bergamaschi 2011]. Another cannabinoid formulation that contains only CBD, Epidiolex, is also undergoing phase III testing for the treatment of a rare genetic seizure disorder. Despite the Schedule I DEA classification assigned to marijuana, there is clear evidence that strategies focused on regulating CB2 signaling might represent promising treatments for autoimmune or chronic inflammatory diseases. Understanding the expression and function of the human CB2 receptor may provide an important key to unlocking further cannabinoid-based drug development. The CB2 receptor has traditionally been described as a cell surface GPCR. GPCRs respond to a wide variety of stimuli and play crucial roles in neurotransmission, cellular metabolism, secretion, differentiation, growth, inflammation, and immune responses. GPCR activation is initiated by ligand binding, an event that usually occurs at the cell surface.
Ligand binding induces a conformational change that activates heterotrimeric G-protein signaling and a subsequent cascade of events leading to internalization of the receptor and linkage with other signaling pathways [Jean-Alphonse 2011, Syrovatkina 2016]. The CB2 receptor has been reported to exhibit a complex pharmacology, signaling, and trafficking pattern [Aizpurua-Olaizola 2016, Basu 2011, Howlett 2005]. The characterization of THC has led to the synthesis of cannabinoid analogs, classified as synthetic cannabinoids, which are used to study structure-activity relationships, characterize cannabinoid-mediated bioactivity, and contribute to the understanding of the mechanisms of action by which endocannabinoids and phytocannabinoids exert their effects on the immune system [Cabral 2015].

The development of new ligands that can mimic the protective effects of cannabinoids has proven particularly difficult due to the continuing discovery of multiple endogenous ligands, targets, and sites of interaction. Further research is needed to understand the mechanisms of action of cannabinoids, since the pattern of activation and induction of intracellular signaling differs with each compound. As demonstrated in CB2-transfected CHO cells, human HL-60 cells, human bronchial epithelial cells, murine microglial cells, and a murine macrophage cell line, CB2 signaling is initiated through its interaction with heterotrimeric Gi proteins and the inhibition of adenylyl cyclase [Turcotte 2016]. CB2 signaling has been linked to phosphorylation of MAP kinase, phosphorylation of AKT, modulation of intracellular calcium, and generation of intracellular ceramide [Basu 2011, Brown 2012, Chen 2012, Cudaback 2010, Howlett 2005, Turcotte 2016]. The mechanisms responsible for this signaling diversity have not been adequately explained. In studies with other GPCRs, it is often the process of receptor internalization that allows the receptor to become associated with an array of adaptor and signaling molecules [Calebiro 2010, Jean-Alphonse 2011]. The finding that the CB1 receptor is expressed at intracellular sites and can mediate signaling from them adds further support for the possibility that CB2 also mediates intracellular signaling [Rozenfeld 2011]. Rab proteins direct receptor trafficking to specific intracellular organelles, and CB2 receptors have been suggested to internalize via Rab-mediated endocytosis and initiate downstream intracellular signaling [Calebiro 2010, Grimsey 2011]. In artificial cell constructs, CB2 has been observed to undergo both constitutive and ligand-induced internalization and to traffic through endosomal and lysosomal compartments [Atwood 2010, Grimsey 2011, Kleyer 2012]. Blocking internalization or shifting the use of adaptor proteins has been observed to shift the intracellular versus extracellular distribution of GPCRs [Grimsey 2011]. The dynamic balance between CB2 receptors at the cell surface and at possible intracellular sites might therefore be vital to understanding cannabinoid receptor biology. The availability of cell-surface receptors for ligand interaction can determine the responsiveness of a cell and the further induction of intracellular signaling. Receptor availability for ligand binding is thus an essential consideration for understanding drug action and how the CB2 receptor can be exploited for therapeutic purposes. There is great diversity in the trafficking of GPCRs, and it is vital to understand the specific pathways involved with CB2. Localization of receptors at the cell membrane has been described to determine signaling via G-protein pathways, and Kleyer and associates report that the amount of cannabinoid receptor on the cell surface can directly determine receptor function. Interestingly, they also report that cannabinoid receptors in primary human cells do not internalize only upon agonist interaction; the receptors also move between the cytoplasm and the cell membrane through ligand-independent trafficking mechanisms, such as triggering by hydrogen peroxide, which is present during inflammation, or by nonspecific protein tyrosine phosphatase inhibitors [Kleyer 2012].

Studies have found evidence of a protective effect of social network ties for adolescent substance use

Whereas our initial models tested the relationships between interdependent substance use behaviors, they assumed that these effects are symmetric: that is, use of one substance equally increases or decreases use of another substance. In our next set of models, we relax this assumption and test whether use of one substance increases the likelihood of initiating another substance, decreases the likelihood of ceasing its use, or both. These models were estimated separately, as the combined model exhibited extreme collinearity. As shown in Table 3, there is a significantly positive creation function from marijuana use to drinking in both samples, implying that respondents' marijuana use increased their odds of drinking initiation. Thus, one unit higher marijuana use made a non-drinker 62% and 60% more likely to start drinking, rather than stay a non-drinker, at the next time point in Sunshine High and Jefferson High, respectively. On the other hand, the endowment function from marijuana use to drinking is not statistically significant at either school, implying that marijuana use does not affect the likelihood of stopping drinking. The impact of marijuana use on smoking behavior differs across the two schools. We detect a statistically significant creation function in Sunshine High: a one unit increase in marijuana use increases by 62% the odds that an adolescent non-smoker will initiate smoking rather than stay a non-smoker. There was no evidence of a statistically significant endowment function in Sunshine High. The pattern is reversed in Jefferson High, with a statistically significant endowment function but a statistically insignificant creation function: although marijuana use does not affect a respondent's likelihood of smoking initiation in Jefferson High, one unit higher marijuana use made smokers 27% more likely to stay smokers, rather than quit smoking, at the next time point. To understand the magnitude of these effects, we conducted a small simulation study in which we omitted some of the effects from the SAB model shown in Table 2 and assessed the consequences for the level of substance use behavior in the schools. That is, we changed a particular parameter value from the one estimated in the model to zero, and then simulated the networks and behaviors forward 1000 times.
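
The percentages above can be read directly off the model coefficients. The sketch below assumes, as is typical for creation and endowment functions in stochastic actor-based models, that the reported percentages are exponentiated log-odds coefficients; the coefficient value is back-calculated from the reported 62% for illustration and is not taken from Table 3:

```python
import math

# Hypothetical creation-function coefficient, back-calculated from the
# reported "62% more likely" figure; exp(beta) is the odds ratio for
# initiating drinking per one-unit increase in marijuana use.
beta_creation = math.log(1.62)
odds_ratio = math.exp(beta_creation)
print(f"beta = {beta_creation:.3f} -> odds of initiation "
      f"multiplied by {odds_ratio:.2f} (+{(odds_ratio - 1):.0%})")
```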

We then assessed the average levels of smoking, drinking, and marijuana use in the network at the end of the simulation runs. To save space, we present only the results for Sunshine High; see S2 File for the Jefferson High results, which were similar. The highest level of smoking is observed when we set to zero the influence effect of friends on smoking behavior: the percentage of non-smokers drops from 72% in the original model to 63%, and the percentage of heavy smokers increases from 11% to 18%. The pattern was similar in Jefferson High, with analogous values of 48% to 42%, and 31% to 35%. This corroborates the findings of previous simulation research that peer influence has a protective effect on smoking and drinking adoption. The lowest level of smoking is observed in the hypothetical scenario in which marijuana use has no effect on one's own smoking behavior: the percentage of non-smokers rises from 72% to 81%, and the percentage of heavy smokers decreases from 11% to 5%. The analogous values in Jefferson High were 48% to 54%, and 31% to 25%. Regarding drinking behavior, the effect of one's own marijuana use is particularly important, as setting this effect to zero results in a decrease in drinking behavior. In the scenario of no effect of marijuana use on drinking, the percentage of non-drinkers rises from 50% to 59% and the percentage of heavy drinkers falls from 13% to 7%. The analogous values in Jefferson High were 35% to 42% and 16% to 10%. It is also notable that setting the influence effect of friends' drinking on one's own drinking behavior to zero increases drinking somewhat; in Jefferson High, the number of heavy drinkers rises from 16% to 20%. For marijuana use, particularly strong effects are observed for friends' influence: setting this influence effect to zero results in a sharp decrease in non-users, from 62% to 47%, and a parallel large increase in heavy users, from 19% to 32%. In Jefferson High, the analogous values were 61% to 43% and 18% to 33%. In sum, when the effect from marijuana use to cigarette use is turned off, more non-smokers and fewer heavy smokers are expected in both schools; when the peer-influence effect for each substance is turned off, fewer non-users and more heavy users of each substance are expected in both schools.
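
The counterfactual exercise itself is straightforward to sketch. The toy model below is an illustration only: a fixed random friendship network, made-up parameter values, and a simple logistic update rule stand in for the fitted SAB model, and zeroing `beta_mj` mimics the "no marijuana effect on smoking" scenario:

```python
import math
import random
from statistics import mean

# Toy stand-in for the counterfactual simulations described above. None of
# the parameters or update rules correspond to the fitted SAB model; the
# point is the procedure: zero one effect, re-simulate, compare end states.

def run_once(beta_peer, beta_mj, n=200, steps=4000, seed=0):
    rng = random.Random(seed)
    friends = [rng.sample(range(n), 5) for _ in range(n)]    # fixed network
    smoke = [rng.choice([0, 0, 0, 1, 2]) for _ in range(n)]  # 0=none, 2=heavy
    mj = [rng.choice([0, 0, 0, 1, 2]) for _ in range(n)]     # marijuana use
    for _ in range(steps):
        i = rng.randrange(n)
        # pull toward friends' average plus a push from own marijuana use
        peer_pull = beta_peer * (mean(smoke[j] for j in friends[i]) - smoke[i])
        drift = peer_pull + beta_mj * mj[i]
        if rng.random() < 1 / (1 + math.exp(-drift)):
            smoke[i] = min(2, smoke[i] + 1)
        else:
            smoke[i] = max(0, smoke[i] - 1)
    return (sum(s == 0 for s in smoke) / n, sum(s == 2 for s in smoke) / n)

# Full (hypothetical) model vs. the beta_mj = 0 counterfactual; the study
# described above averages such runs over 1000 forward simulations.
for label, b_mj in [("full model", 0.4), ("beta_mj = 0", 0.0)]:
    nonsmokers, heavy = run_once(beta_peer=0.6, beta_mj=b_mj)
    print(f"{label}: non-smokers {nonsmokers:.0%}, heavy smokers {heavy:.0%}")
```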

In the scenarios in which we set other parameters to zero, the simulation results indicated that the substance use distribution was not altered in either school. Overall, our findings indicate some evidence of sequential substance use, as adolescent marijuana use increased subsequent smoking and drinking behavior in our two school samples. Whereas some existing research has found evidence that marijuana use leads to use of these substances, an important contribution of our study was simultaneously taking into account the substance use behavior of adolescents' peer networks and other social processes occurring in networks. We found that marijuana use resulted in more smoking and drinking in both samples. Our findings are partially consistent with Pearson et al., who found that marijuana users smoked cigarettes more over time, and they suggest that marijuana use increases both alcohol and cigarette use. In addition, we distinguished whether interdependent substance use going from marijuana to cigarettes and alcohol results in initiation, cessation, or both. We found that marijuana use resulted in drinking initiation in both samples, and smoking initiation in Sunshine High; in contrast, marijuana use decreased the likelihood of smoking cessation in Jefferson High. Previous literature suggests that alcohol use is not a prerequisite for the initiation of marijuana use, and that since 1965 the effect of alcohol use on the onset of marijuana use has declined while that of marijuana use on the onset of alcohol use has increased; our findings are consistent with this prior literature. Moreover, we tested cross-substance influence effects, which assessed whether friends' use of a particular substance affected an individual's own use of the other two substances. We found no evidence that such effects exist in our samples. We did, however, find peer influence effects for each specific substance, which is consistent with multiple past studies. Note, however, that whereas one implication is that having more friends who use marijuana, for example, results in greater marijuana use on the part of the individual, another implication is that having more friends who do not use marijuana results in less marijuana use. This symmetry of influence effects is sometimes overlooked when interpreting influence results, and our simulation results confirmed that this influence effect is in fact more likely to have a negative (protective) effect on substance use behavior.
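
To make the symmetry point concrete, consider an "average similarity"-style influence term of the kind used in stochastic actor-based models. The function and values below are hypothetical illustrations, not the estimated effect: the same positive coefficient pulls an ego toward the mean of its friends' behavior whether that mean is low or high:

```python
# Hypothetical average-similarity influence utility: higher when the ego's
# behavior level is closer to each friend's level (levels 0..2).
def influence_utility(own, friends, beta=1.0, max_level=2):
    sims = [1 - abs(own - f) / max_level for f in friends]
    return beta * sum(sims) / len(sims)

non_using_friends = [0, 0, 0, 1]
heavy_using_friends = [2, 2, 2, 1]
for own in (0, 1, 2):
    print(f"own level {own}: "
          f"with non-users {influence_utility(own, non_using_friends):.2f}, "
          f"with heavy users {influence_utility(own, heavy_using_friends):.2f}")
# The utility peaks at own=0 with non-using friends and at own=2 with
# heavy-using friends: the same influence effect pushes both ways.
```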

These results are similar to an earlier simulation study that found that increasing the amount of peer influence in two high schools diminished school-level smoking and drinking behavior. They are also consistent with theoretical insights from Dynamic Social Impact Theory, which would predict that youth in friendship networks adopt the same substance use behaviors through peer influence pathways, likely through social proximity and the consolidation of youths' attitudes and behaviors in adolescent networks. This highlights that the presumption that influence effects will always increase behavior is not necessarily accurate. In fact, we might expect the dominant norms in a context to drive the direction of influence effects: in a school with little substance use, the greater number of non-users will push adolescents towards non-use, whereas in a school with high levels of substance use, adolescents are more likely to be pushed towards greater use. Given the complexity of our agent-based network models, we demonstrated the relative magnitude of the effects by combining a small-scale simulation with a strategy in which we constructed hypothetical models that set certain key effects to zero and simulated the networks and behaviors forward. A key finding was that in a simulated world in which one's own marijuana use did not affect smoking or drinking behavior, there would be a notable decrease in overall levels of smoking and alcohol use in these schools, even controlling for the complexity of these models. We also saw that marijuana use operates as a mechanism between friends' marijuana use and one's own smoking and drinking behavior: adolescents' marijuana use is affected by their friends' marijuana use, and this in turn affects the adolescents' levels of cigarette and alcohol use. Furthermore, one of the strongest effects detected was the influence effect of friends' marijuana use, which has a particularly strong relationship to adolescents' own marijuana use. Our findings highlight the importance of understanding interdependence in the use of multiple substances in adolescence, particularly where it operates through peer influence effects within friendship networks. Another notable finding was that depressive symptoms increased smoking behavior in Jefferson High. This high school has a relatively high average level of substance use compared to Sunshine High. Perhaps in a social milieu with a high average level of drug use, adolescents reporting higher levels of depressive symptoms may be more likely to display higher levels of cigarette smoking than those reporting lower levels, given that past studies link depression and adolescent smoking. There are some limitations to note in this study. First, the time lags between the two sets of waves are not equal. Although it is preferable to have equal time periods, we performed a post hoc time heterogeneity test to ensure that the co-evolution of substance use behaviors and friendship networks was not significantly different across the three waves, or two time periods. Second, our SAB model specification is data intensive and could only be estimated for the two large schools among the 16 saturated schools in Add Health that are feasible for this type of analysis.

This limits generalizability and does not allow us to assess why the interdependent effect from marijuana use to smoking differs across the two schools. Third, we had only indirect information about marijuana use at time one for a large percentage of the sample. Using this indirect information allowed us to avoid discarding a large amount of information at t1, at the cost of a relatively small number of potentially misclassified cases. Fourth, while the data are relatively old, we are aware of no evidence that the mechanisms of in-person friendship formation, as captured in these Add Health network data, have changed significantly since the mid-nineties. In the current study, friendship networks were constructed through name-generator items rather than real-time communication technology such as cell phone use. While future studies are needed to leverage existing technology such as cell phone use for collecting adolescent social network data, these in-person network data are likely still meaningful. Moreover, research suggests that cell phones help reinforce and reproduce existing social roles and structures rather than alter them. That said, future studies are needed to collect nationally representative contemporary data from US adolescents and to investigate how the findings herein would differ if such technology were considered. Our findings have important implications for future studies. First, they suggest both feasibility and merit in exploring concurrent or sequential substance use behaviors across multiple time periods. Interdependence in substance use should be studied within a single model framework with multiple simultaneous ongoing processes, to reduce the risk of over-estimating each process due to the autocorrelation among them. Second, further explication of the interdependent effects from marijuana use to smoking and drinking is a useful direction for future research. Third, given that smoking rates among adolescents have decreased significantly since the mid-1990s, more recent data are required to test whether our findings from these two large Add Health schools can be replicated. Our findings also have practical implications for health behavior change interventions targeting adolescent substance use. Other research indicates that social networks can be leveraged for health behavior change interventions and may even be superior to non-network-based interventions. Peer network-based interventions targeting adolescent substance use might address the possibility that marijuana use increases alcohol and cigarette use.