
Another explanation is that the SLOE format itself may be driving the lack of a difference

Additionally, while there are several studies analyzing traditional letters of recommendation for language variation between genders, there is a gap in the current literature in analyzing standardized letters of recommendation. Previously, our research team published a study in Academic Emergency Medicine Education and Training that showed minimal differences in language use between genders in evaluating 237 SLOEs from applicants invited to interview at a single academic EM residency for the 2015-2016 application cycle. The small dataset and the potential for a homogeneous sample prompted the current investigation, with the goal of confirming or refuting the original results in a larger dataset. The choice to include all applicants was made with the goal of potentially increasing the variability in the language used within the SLOE. The aim of this study was to compare differences in language within specific word categories used to describe men and women applicants in the SLOE narrative for all applicants to a single academic EM residency program for the 2016-2017 application cycle. We secondarily sought to determine whether there was an association between differences in word categories and invitation to interview, regardless of gender, in order to better contextualize the possible importance of wording differences.

We used descriptive statistics to report the applicants’ characteristics and assessed for differences in baseline characteristics by gender using t-tests and chi-squared tests, as appropriate. Median word counts for the 16 identified categories of interest were reported. For the primary outcome of interest, we assessed differences by gender in word counts after adjusting for letter length using Mann-Whitney U tests. In a secondary analysis, the analyses were repeated for differences in word categories by invitation to interview.
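
To make the length-adjusted comparison concrete, here is a minimal sketch in Python; the study itself used LIWC output analyzed in Stata 13.1, so the file name, column names, and per-100-words adjustment shown here are illustrative assumptions rather than the study's actual pipeline.

```python
# Minimal illustrative sketch (not the study's Stata 13.1 code): compare a
# length-adjusted word-category count between genders with a Mann-Whitney U test.
# The file and column names ("gender", "total_words", "ability_words") are hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu

sloes = pd.read_csv("sloe_word_counts.csv")  # one row per SLOE, with LIWC-style counts

# Adjust the raw category count for letter length (words per 100 narrative words).
sloes["ability_rate"] = sloes["ability_words"] / sloes["total_words"] * 100

women = sloes.loc[sloes["gender"] == "woman", "ability_rate"]
men = sloes.loc[sloes["gender"] == "man", "ability_rate"]

u_stat, p_value = mannwhitneyu(women, men, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
```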

We used multi-variable logistic regression to identify word categories associated with receiving an invitation to interview. Covariates in this model were selected via a predetermined inclusion threshold of α = 0.10. We performed all analyses using Stata 13.1. Additionally, for any of the seven user-defined word categories in which a difference was noted, a further analysis was conducted evaluating the use of each individual word in the dictionary to assess whether the difference for the category was driven by the use of a single word or by the use of multiple descriptors within the category. For this analysis, the proportion of SLOEs containing each word was compared by gender using Fisher’s exact test. This analysis was not conducted for any differences in the LIWC-defined categories due to the size of those word dictionaries.

This analysis found small but quantifiable differences in word frequency between genders in the language used in the SLOE. In this study, differences between genders were present in two categories, social words and ability words, with women having higher word frequency in both. Our prior investigation found differences of similar magnitude in affiliation words and ability words, with letters for women applicants again having higher word frequency in both. In both studies the differences in word frequency were statistically significant, but it is difficult to draw conclusions about the significance of these small wording differences for application or educational outcomes. What is perhaps more notable than the presence of differences in two categories is the lack of difference in the remaining 14 categories. Looking specifically at the categories that had gender differences, our finding that ability words were used to describe women applicants more frequently than men applicants contrasts with previous studies, while our other finding, that women are more frequently described with social words than men, aligns with previous studies. In the medical literature, letters of recommendation for men applying for faculty positions contain more ability attributes, such as standout adjectives and research descriptors, than letters for women, and letters for women in medical school applying for residency positions more frequently use non-ability attributes such as being caring, compassionate, empathetic, bright, and organized. Looking specifically at ability words, this category had statistically significant differences in both this investigation and our prior study, with ability words occurring more frequently for women than men.
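
As a rough sketch of the per-word follow-up comparison described above (the proportion of SLOEs containing each word, compared by gender with Fisher's exact test), the Python below uses a handful of the ability descriptors named in the text; the data layout is a hypothetical stand-in for the study's actual files.

```python
# Illustrative sketch only: for each word in a user-defined category, compare the
# proportion of SLOEs containing that word between genders with Fisher's exact test.
# The file layout is hypothetical; the word list echoes ability descriptors named in the text.
import pandas as pd
from scipy.stats import fisher_exact

ability_words = ["talented", "skilled", "brilliant", "proficient", "adept"]
sloes = pd.read_csv("sloe_narratives.csv")  # hypothetical columns: "gender", "text"

for word in ability_words:
    contains = sloes["text"].str.contains(rf"\b{word}\b", case=False, regex=True)
    # Build a 2 x 2 table: gender by whether the SLOE contains the word.
    table = pd.crosstab(sloes["gender"], contains).reindex(columns=[False, True], fill_value=0)
    odds_ratio, p_value = fisher_exact(table.values)
    print(f"{word:>10}: OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```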

Ability words include descriptors such as talented, skilled, brilliant, proficient, adept, intelligent, and competent. This consistency of findings between the two samples suggests that letter writers employ multiple descriptors within the ability category to convey the proficiency of women applicants. However, the reason for this difference is unclear. Notably, the word “bright” is one of the ability words for which no gender difference was found, counter to findings from prior research wherein women applicants were more often described as bright. While the descriptor “bright” is often considered a compliment, it has also been suggested that its use “subtly undermines the recipient of the praise in ways that pertain to youth and, often, gender,” stemming from its association with the phrase “bright young thing.” The finding that women were more frequently described with social words aligns with previous studies of letters of recommendation. Studies of letters of recommendation for psychology and chemistry faculty positions have shown that women are often described as communal, while men are described as agentic and have more standout adjectives. Other studies have found women to be described as more communicative.

We employed a secondary analysis with respect to the invitation to interview to determine whether small differences in word categories were associated with invitation to interview. The adjusted analysis showed an association between more standout words and invitation to interview; however, this analysis did not account for other factors that may influence invitations to interview. Although these findings represent an association and not causation, they help to contextualize the potential importance of small differences in word use, although this is not conclusive. Notably, neither social words nor ability words influenced the choice to interview, and there was an equitable frequency of standout words between genders. Despite the small word differences in the categories of social and ability words, we did not find a difference in the 14 other word categories queried.

There are several possible explanations for this lack of a finding. It is possible that the sample was underpowered to detect small wording differences in the 14 word categories. The short word format of the SLOE and the specific, detailed instructions noted above may reduce bias. Other explanations include the increasing use of group authorship, which may introduce less bias than individual authorship. In 2012, a sampling of three EM residencies calculated that 34.9% of SLORs were created by groups. In 2014, 60% of EM program directors participated in group SLORs, 85.3% of departments provided a group SLOR, and 84.7% of PDs preferred a group SLOR. Although the sample size and lack of a standard comparator limit our ability to determine why we did not find a difference for the majority of word categories, we hypothesize that it is related to the format and hope to further support that hypothesis through future work examining paired SLOE and full-length letters for candidates.

A recently published study by Friedman and colleagues in the otolaryngology literature is, to our knowledge, the only study in addition to our own that evaluates a standardized letter for gender bias. In this 2017 study, the SLOR and the more traditional NLOR in otolaryngology residency applications were compared by gender, concluding that the SLOR format reduced bias compared to the traditional NLOR format. Although some differences persisted in both letter formats, the SLOR format resulted in less frequent mention of women’s appearance and more frequent descriptions of women as “bright.” Although their analysis strategy differed from the one we used in this study, their findings parallel ours in that there are minimal differences by gender in a restricted letter format, and they highlight the need for further study of how the question stem and word limitations may be intentionally built to minimize bias.

Lastly, of note, our study focused specifically on differences in language use in the SLOE. This study does not evaluate the presence or absence of gender bias in the quantitative aspects of the SLOE, nor does our multi-variable model include other factors that would influence the invitation to interview, such as rotation grades, test scores, school rank, or AOA status. Such analyses were beyond the scope of our study, which was focused on the SLOE narrative itself. Other studies have evaluated these factors but have not evaluated the narrative portion of the SLOE. Additionally, there remain many other forms of evaluation in medical training, numerical and narrative, beyond the SLOE that have been analyzed for gender bias. Recent studies have suggested that bias persists in other forms of evaluation. Specifically, Dayal and colleagues’ recent publication notes lower scores for women residents on EM Milestones ratings compared with their male peers as they progress through residency. Evaluations of narrative comments from shift evaluations are another area of interest, and we are aware of two current investigations underway in EM programs. Additionally, a study of evaluations of medical faculty by physician trainees by Heath and colleagues also showed gender disparities. As this body of literature continues to grow and interventions are developed to minimize bias in all narrative performance evaluations, we believe it will be important to think carefully about the question stems and response length allowed. Unfortunately, limiting space may also limit the room for positive evaluation and strings of praising adjectives.

However, while implicit bias exists, employing limits in response format may rein in the manifestation of implicit bias by focusing the writer.

This was a single-center study; only SLOE narratives from applicants who applied to interview at a single academic EM residency program were included in the analysis, and applicants from non-LCME schools were excluded, limiting generalizability. The man to woman applicant ratio in this study reflects the national trend for the 2017 match, which may contribute to generalizability. ERAS does not allow an individual program to access SLOEs for applicants who have not selected that program; therefore, a full national sample of all applicants in a single year to ERAS was not feasible. Our analysis used the LIWC linguistic software and focused on individual words. Other approaches, such as qualitative content analysis, focusing on phrases, or searching for specific words as was done by Friedman and colleagues in the study discussed above, may have yielded different findings. Additionally, the LIWC contains pre-established word lists. While these lists have been used in the medical literature, it is possible that there is a set of words more applicable to EM. Our analysis used word frequency as a measurement of biased language and did not evaluate the context of words in the letters, limiting the study. Words in different contexts can have different meanings. For instance, the word “aggressive” can have either a positive or a negative connotation based on context when describing an applicant as “aggressive in picking up patients” vs “aggressive with consultants.” A qualitative analysis of the SLOEs would better delineate the context of word phrases and provide a more in-depth analysis. Although it is a limitation that we did not evaluate word context, word frequency software applied to a large sample provides generalizability that a small qualitative analysis may not be able to achieve. In the rare instances of context misinterpretation for the positive and negative emotion categories, this may be of little overall consequence, as there is such a large margin between median positive and negative words within these word categories. Additionally, subtle differences between word phrases such as “we strongly recommend this student” vs “we will be recruiting this student” would not be picked up by the software. This was an exploratory study and as such was not powered to a specific outcome; however, we estimated that with our sample size of 822 we would have 80% power to detect a difference of 0.2 mean words within a single word category with a 5% type I error. Additionally, it is possible that the sample was underpowered to detect small wording differences among the 16 word categories, which could represent a type II error. The analysis for differences in 16 categories raises the question of the multiple comparisons problem.
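
One conventional way to handle the multiple comparisons issue raised above is to adjust the 16 category-level p-values, for example with a Holm or Bonferroni correction. The sketch below (Python, statsmodels) uses placeholder p-values purely for illustration, not the study's results.

```python
# Illustrative sketch: Holm correction across category-level p-values to address
# the multiple comparisons problem. The p-values below are placeholders, not study results.
from statsmodels.stats.multitest import multipletests

category_p_values = {
    "social": 0.010, "ability": 0.020, "standout": 0.200, "affiliation": 0.350,
    # ...the remaining word categories would be listed here...
}

reject, p_adjusted, _, _ = multipletests(
    list(category_p_values.values()), alpha=0.05, method="holm"
)
for name, p_adj, significant in zip(category_p_values, p_adjusted, reject):
    print(f"{name:>12}: adjusted p = {p_adj:.3f}, reject H0 = {significant}")
```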

Numerous risk factors have been identified in the literature

While clinicians feel safe discharging a patient with negative test results, believing that testing did not reveal any cause for emergent treatment or admission, this news may produce the opposite effect in patients due to diagnostic uncertainty and fear of the unknown cause of their complaints. This lack of diagnostic certainty may lead patients to return to the ED in the hope of finding an answer or out of fear if the symptoms return. The psychological component experienced by patients during their ED encounters is often overlooked and is a potential area of focus for study and improvement. All types of pain appear to increase the odds of ED returns in older adults. Furthermore, pain complaints may be predictive of frequent returns, particularly in those discharged from the ED with a prescription opioid. Patients discharged with prescription opioids who are properly educated on these medications may be less likely to experience opioid-related adverse events, potentially minimizing ED recidivism. The presence of certain comorbid conditions such as depression, heart disease, diabetes, stroke, and cancer also increases ED recidivism in older adults. Poor mental health, depression, and diabetes were predictive not only of 30-day returns but of frequent returns. A history of psychiatric disorders is a common risk factor identified in several studies, with one reporting it as predictive of frequent ED visits. In a study of low-income, homebound older adults with depression, a positive association was found between Hamilton Rating Scale for Depression scores and the frequency of ED visits. Non-cardiac, non-traumatic body pain was the most common reason for recidivism in this older adult population suffering from depression, highlighting the well-established link between depression and pain. While the literature suggests that specific comorbid conditions are associated with increased recidivism, overall comorbidity burden, as measured by the Charlson Comorbidity Index, is not.

Although intuitively it would seem that patients with a high comorbidity burden would be more likely to return to the ED, La Mantia et al. found no association between Charlson comorbidity scores and ED recidivism. The presence of chronic illness in older adults returning, often frequently, to the ED suggests that at baseline these high-risk patients are sicker, with a high burden of comorbidities requiring treatment with multiple medications. This likely explains the reporting of polypharmacy as an independent predictor of 30-day ED returns in older adults. Additionally, recent hospitalization, an indicator of clinical illness severity, was also found to be an independent predictive factor for repeat and frequent ED visits in older adults. Reasons for returning to the ED in this older adult population suffering from chronic illness may stem from the following: seeking reassurance regarding their condition; noncompliance with treatment plans leading to complications; compliance with treatment plans but still developing complications from their condition; not understanding the course of their disease; or inadequate education regarding their discharge plan. Several psychosocial factors are associated with return visits in older adults. These include lack of social support, marital status, and anxiety. Divorced, separated, or widowed patients have more than double the odds of early returns within 30 days; conversely, patients who never married were significantly less likely to return. An explanation proposed by McCusker et al. for this finding is that patients who never married are more self-sufficient and independent than those who are currently or have previously been married. A perceived lack of social support reported by the patient was predictive of both 30-day and frequent returns. Patients who are divorced, separated, or widowed may feel they have less social support than their married counterparts to assist in their healthcare needs. Other psychosocial factors reported in the literature include anxiety and substance abuse, such as daily alcohol use. Naughton et al. found a 13% increase in the risk of revisits per one-unit increase in anxiety scores on the Hospital Anxiety and Depression Scale. The association between anxiety and ED recidivism supported by the literature is not surprising, particularly when a patient may not receive a definitive cause for their symptoms.

Patients may experience fear and uncertainty regarding their health, leading to anxiety. This, coupled with a perceived poor social support system, may lead these patients to return to the ED when challenged with new healthcare issues or a perceived failure of current issues to resolve in a timely manner. Daily alcohol use is associated with a decreased risk of 30-day returns. However, two large retrospective cohort studies of older adults reported that a general history of substance abuse was an independent predictor of frequent ED use. Unfortunately, individual analyses for each of the substances of abuse included in these latter studies were not reported, making comparison of these disparate study conclusions difficult. Thus, it is unknown whether daily alcohol use might confer a different risk compared with other substances of abuse.

The Institute of Medicine defines health literacy as “the degree to which individuals can obtain, process, and understand basic health information and services they need to make appropriate health decisions.” In older adults, low health literacy has been linked to decreased use of preventative services, higher utilization of acute care settings and resources, and poorer health outcomes. Over 70% of elderly patients are not questioned on their ability to care for themselves prior to discharge, and 20% disclose that they do not understand their discharge instructions. This subset of the older adult population may have difficulty comprehending and following their discharge instructions. This may lead some patients to return when their initial complaints do not improve, due to uncertainty and lack of comprehension regarding their discharge diagnoses, treatment, and follow-up plans. Several studies indicate that poor cognitive health is also an important driver of ED returns. Older adult patients with cognitive and memory impairment were at an increased risk for 30-day returns, and several studies demonstrated it to be an independent predictor of these returns. However, Ostir et al. found that poor cognitive health and odds of 30-day revisits did not have a significant association, although they did find that higher cognitive health scores were linked to lower risk for unplanned ED revisits at 60 and 90 days post-index visit.

The authors found that every one-point increase in cognitive score was associated with 24% and 21% decreased odds of 60-day and 90-day revisits to the ED, respectively. The lack of a significant association between poor cognitive health and increased 30-day returns in the Ostir et al. study may be explained by several differences in the study population, which was mostly female, African American, and cognitively impaired. The average cognitive score of these patients was 4.5 points below standardized norms for persons 65 years and older, whereas 76.8% of the study population in the McCusker et al. study had no impairment or only mild cognitive impairment. Only 18.7% of patients in the de Gelder et al. study were found to have cognitive impairment. Since nearly all patients in the Ostir et al. study had cognitive impairment, their findings may be due to the lack of an adequate comparison group. There are several possible explanations why patients with poor cognitive health may be at increased risk for recidivism, including suffering from more complex comorbidities necessitating more frequent healthcare, decreased comprehension of ED discharge diagnoses and instructions, and decreased accuracy in reporting the presenting illness. Patients with delirium superimposed on dementia were found to have lower concordance with their surrogates regarding the reason for ED presentation reported to ED staff. This discordance between presenting complaints may lead to insufficient evaluation, missed diagnosis, and/or inappropriate discharge, particularly when the surrogate is not available during the ED evaluation. In addition to cognitive health, poor physical function and poor general health also increase the odds of returning within 30 days and may be independent predictors of ED recidivism. As physical functioning is a well-established predictor of outcomes among elderly patients, these findings likely reflect the characteristics of a sicker aging population.

Several studies have shown that patients, despite access to care, prefer to seek care in the ED compared with the outpatient setting. Reasons include the following: accessibility/convenience; perceived urgency of complaints; inability to wait for scheduled primary care follow-up due to worsening or persistence of symptoms; expedited diagnostic testing; perceived availability of specialists; lack of transportation to the primary care office; and wanting a second opinion, among other reasons. In a study of the general ED population, uninsured patients were not found to use the ED more than insured patients, but they use other types of care less. Interestingly, both the insured and uninsured visit the ED at similarly high rates for non-emergent complaints or complaints that can be treated in non-ED settings. As discussed previously, patient fear or uncertainty likely plays an important role in understanding why patients come to the ED. This sense of uncertainty regarding the cause of their symptoms is best illustrated by Castillo et al.’s findings of a rather high rate of older adults returning to the ED for the same primary diagnosis, with many seeking care at a different facility, perhaps in hopes of finding a different conclusion from their index ED visit. In a qualitative study of 40 adult patients with chronic cardiovascular disease or diabetes, patient-reported driving factors for ED returns included feeling a sense of fear or uncertainty with negative test results and expecting a diagnosis for their symptoms.

Many patients who did not receive a clear diagnosis for their symptoms reported needing to return until a diagnosis was found. In two studies of older adults, patients were less likely to consider that their complaint had been completely resolved and believed they would be less independent after discharge from the ED. A survey of 15 older adults also linked patient perception of ED care with ED recidivism, including believing that the ED was their “only option” and that their symptoms required specialized care only provided in the ED. Several patients also reported that they believed their primary care physician would have advised them to seek care in the ED for their symptoms. Others reported receiving ineffective treatments or instructions at the time of ED discharge. In some cases, this perception may stem from inadequate patient counseling regarding expectations and the reasonable goals of care that can be achieved during the ED visit.

The older adult population is a key and significant contributor to ED recidivism and is responsible for a disproportionate amount of healthcare costs. For this reason, older adults have received much attention and study to create interventions aimed at reducing ED recidivism. The unique characteristics of this patient group should be considered when developing strategies to minimize ED returns. The generation of a profile for elderly patients at increased risk for ED returns could identify potential targets for individualized education, counseling, and other interventions to reduce ED over-utilization. Many of the studies discussed in this review were performed outside the U.S. and thus may not be fully generalizable to older adults residing in the U.S. due to different social and cultural influences and healthcare systems. However, when data were available for comparison, studies performed in the U.S. identified many of the same risk factors for return visits in older adults as the non-U.S. studies. These similarities suggest that the underlying reasons for ED utilization by older adults may be influenced more by themes related to aging than by the cultures or healthcare models of individual countries. However, it is important to note that these studies were all performed in highly developed countries with stable economies and well-established healthcare systems. Therefore, whether the identified risk factors would hold in developing countries with fewer healthcare resources is unknown and deserves further study. Further study is also needed to understand how each of these areas influences return visits, how they influence each other, and to resolve discrepancies in previously reported findings.

Academic medicine faces a challenge in balancing the objectives of revenue production with compensation for scholarly achievement. Historically, “relative value units” have been used to incentivize physicians to improve clinical productivity, but these systems have neglected to recognize non-clinical achievements, such as those related to teaching, academic leadership roles, or other scholarly activity. Many non-clinical activities do not earn a reduction in clinical hours or a financial incentive, which may result in decreased motivation to contribute academically as well as frustration and burnout.

The group-based treatment model is similar to outpatient treatment programs nationwide

Their finding may help explain the relatively higher demand for alcohol than for cigarettes among treatment-seeking smokers with AUD in the present study: because our participants were more dependent on nicotine than the non-treatment-seeking heavy-drinking smokers in their study, the relatively higher level of smoking in our sample may have resulted in greater alcohol demand in an asymmetric fashion. An important study factor that should be taken into account is the differential alcohol and smoking satiation statuses among the participants. Although our participants were instructed to complete the hypothetical purchase tasks in a general context, we cannot rule out the possibility that the reported demand patterns were influenced by their alcohol and smoking statuses. Previously, we speculated that these special characteristics may have caused the null correlations between alcohol demand and alcohol-related measures. Unlike other alcohol-related measures, alcohol withdrawal scores were correlated with alcohol demand metrics, which supports the possibility that alcohol deprivation status may indeed have increased the reported demand for alcohol among participants who experienced more alcohol withdrawal, consistent with a previous study showing increased cigarette demand among nicotine-deprived smokers. In the current study, we also found that cigarette demand metrics were positively correlated with smoking withdrawal, which suggests an increased demand for cigarettes due to smoking deprivation. However, the exact effects of alcohol deprivation on alcohol demand remain speculative with the current study design; they can be examined in future studies that contrast alcohol demand metrics between deprived and satiated patients with AUD.

The study has the following limitations. First, the APT and CPT were administered separately, with neither assuming the allocation of limited resources to the other. Although our findings suggested that alcohol had higher demand than cigarettes in the single-commodity tasks, we do not have direct evidence that alcohol is preferred when both drugs are considered in the same context. Such relative preference between two co-used drugs is best captured by a cross-commodity task wherein the consumption patterns for both drugs are examined simultaneously. Using the cross-commodity paradigm, researchers have found a complex interplay between cannabis and alcohol use, with nontrivial proportions of the study sample showing patterns of complementarity, substitution, and independence. However, in a different cross-commodity study involving marijuana and tobacco cigarettes, researchers found an independent demand pattern between the two drugs. These studies suggest the robustness of the cross-commodity paradigm in substance use research for simultaneously studying the co-use of drugs. More importantly, this paradigm provides better ecological validity by placing participants in a more realistic context in which they have access to both drugs while sharing limited resources. Future studies should consider using this cross-commodity paradigm to better capture the demand for alcohol and cigarettes among smokers with AUD, which may shed light on developing personalized treatments based on relative demand patterns between alcohol and cigarettes. Second, to give participants similar contexts for the APT and CPT, the APT’s instructions used the same contextual description as the CPT’s, and differences between the current APT’s instructions and those of previous studies may have affected participants’ ability to report their alcohol demand with ecological validity. Previous studies have generally assessed alcohol demand under contexts in which alcohol is likely to be consumed.

Similarly, time parameters such as duration of access and weekend vs. weekday have been shown to impact alcohol demand. Third, per protocol requirements, participants were abstinent from alcohol so that they had proper cognitive functioning to complete the visits, but they could smoke ad libitum. Thus, differences in alcohol deprivation and smoking satiation may have affected the demand for alcohol and cigarettes. Alcohol appeared to have higher relative reinforcing efficacy than cigarettes among adult smokers with alcohol use disorder, as evidenced by their greater demand for alcohol than for cigarettes, although it is possible that acute substance status may play a role in modulating the demand for alcohol and cigarettes. A two-factor structure was identified for both alcohol and cigarette demand curves, and the differential loadings of demand indices in the current population of heavy-drinking smokers compared with the less dependent, younger samples assessed previously suggest a distinct demand pattern for smokers with AUD. As an important future direction of the present study, hierarchical multiple regression analyses of multiple purchase tasks should be conducted to provide a deeper understanding of cross-substance demand for alcohol and cigarettes among treatment-seeking smokers with AUD.

Health care reform in the United States has had major implications for people with substance use disorders, including greater opportunities to enroll in private insurance coverage, increased access to services, and changes in health care costs. The Affordable Care Act (ACA) established state insurance exchanges to promote and offer health coverage, and mandated SUD and psychiatric disorder treatment as essential benefits. Practitioners expected these ACA mandates, implemented in 2014, to increase access to care. Following ACA implementation in 2014, the overall number of individuals living without insurance dropped. Evidence suggests a positive impact of the ACA on both SUD and psychiatry coverage, including an increase in insurance choices.

The number of individuals with identified SUDs enrolled in health plans increased. But access to services remains a major concern, and much is still unknown regarding how ACA-associated enrollment through insurance exchanges and cost-sharing structures are associated with access to and use of SUD treatment and other health services in this complex patient population. SUD treatment initiation and retention are key clinical goals for SUD patients. Specific characteristics of the ACA, such as enrollment via new state insurance exchanges and increased patient cost sharing via higher deductibles, may influence treatment differentially for people with SUDs who may be new enrollees. Patient cost sharing may adversely impact both initiation and retention. If SUD treatment and psychiatry services are viewed as discretionary and less essential than primary care, they may be especially vulnerable to cost-sharing mechanisms. A previous evaluation of SUD patients enrolled in the same California healthcare system found that, compared to a pre-ACA enrollment cohort with SUDs, post-ACA SUD patients had more psychiatric and medical conditions and greater enrollment in high-deductible plans. Although this prior work did not examine patterns of health service utilization, the findings suggest that newly enrolled patients post-ACA may have greater clinical needs as well as increased financial obstacles to accessing services. It is important not only to evaluate SUD treatment initiation and retention over time following implementation of the ACA, but also to evaluate how factors related to the ACA may influence utilization of other health services. The current study aimed to extend what is currently known about the consequences of healthcare reform by examining the potential relationship of ACA exchange enrollment and high-deductible health plans to trends in health service utilization in a cohort of individuals who were newly enrolled in a healthcare system and had a documented SUD. We examined factors associated with utilization as conceptualized by the Andersen model of healthcare utilization, which proposes that utilization is determined by predisposing, need, and enabling factors. We hypothesized that psychiatric comorbidity would be associated with greater use of health services, and that members with higher deductibles would be less likely to initiate SUD and psychiatry treatment but would have higher emergency department and inpatient utilization than those without deductibles. As with earlier studies, which indicate that an SUD diagnosis is often precipitated by a critical event such as an ED visit, we expected that post-diagnosis utilization would be highest in the period immediately following diagnosis but would likely decrease over time, although trajectories would vary by type of utilization. Knowing how these factors are associated with use of healthcare can be highly informative to future healthcare reform and behavioral health services research.

Kaiser Permanente Northern California (KPNC) is an integrated healthcare system serving approximately 4 million members. The membership is racially and socioeconomically diverse and representative of the demographics of the geographic area. SUD treatment is provided in specialty clinics within KPNC, which patients can access directly without a referral. Treatment sessions take place daily or four times a week, depending on severity, for nine weeks.
Treatment in psychiatry includes assessment, individual and group psychotherapy, and medication management. KPNC is not contracted to provide SUD care or intensive psychiatry treatment for Medicaid patients, and those patients are referred to county providers.

The University of California, San Francisco and Kaiser Permanente Northern California Institutional Review Boards approved the study and approved a waiver of informed consent. We identified common chronic medical conditions, many of which are known to be associated with SUDs, using ICD-9/10 codes recorded within the first year after initial enrollment. Conditions included asthma, atherosclerosis, atrial fibrillation, chronic kidney disease, chronic liver disease, chronic obstructive pulmonary disease, coronary disease, diabetes mellitus, dementia, epilepsy, gastroesophageal reflux, heart failure, hyperlipidemia, hypertension, migraine, osteoarthritis, osteoporosis and osteopenia, Parkinson’s disease or syndrome, peptic ulcer, and rheumatoid arthritis. Patients with chronic medical conditions utilize more health services than patients without such conditions, which may influence their decision to choose a plan with a lower deductible if given an option, so we included this covariate to control for confounding. Deductibles are features across different benefit plans, including commercial plans, but are more common in ACA benefit plans. The individual deductible limit is the amount the individual must pay out-of-pocket for health expenses before becoming eligible for health plan benefits. At KPNC, there are many types of benefit plans that include deductibles. Patients with deductible plans that do not include SUD as a covered benefit are responsible for bearing the cost of those services until their deductible is reached, and/or the accumulating cost of copays for multiple visits as part of the SUD care model. We did not include type of insurance as a covariate due to its collinearity with deductible limits and enrollment via the ACA exchange. We categorized deductible limits into three levels, as in prior research and based on the definition of high deductibles and the benefit plans available at KPNC during this period. Since deductible limits may change over time, we used the minimum level over each 6-month time window during follow-up. We imputed missing deductible levels during a given 6-month window with the last known value during the follow-up period, and we dropped patients with no known deductible limit during the entire 36 months of follow-up from the analysis. Coverage mechanism included enrollment via the California Exchange vs. other mechanisms. We summarized utilization data into 6-month intervals, and we examined trends in health service utilization over the 36 months after patients received an SUD diagnosis with Chi-squared tests using 6-month intervals. Using multi-variable logistic regression, we examined associations between deductible limits, enrollment via the California ACA Exchange, membership duration, and psychiatric comorbidity and the likelihood of utilizing health services in the 36-month follow-up period, controlling for patient demographic characteristics and chronic medical comorbidity. We also evaluated whether enrollment via the California ACA exchange moderated the associations between deductible limits and the likelihood of utilization by adding interaction terms to the multi-variable models. We estimated the associations with deductible limits for each enrollment mechanism by constructing hypothesis tests and confidence intervals on linear combinations of the regression coefficients from these models. To account for correlation between repeated measures, we used the generalized estimating equations methodology.
We censored patients at a given 6-month interval if they were not a member of KPNC during that time. We conducted a sensitivity analysis to determine whether high utilizers leaving the health system influenced the observed pattern of decreased utilization from the 0–6 month to the 6–12 month follow-up periods. Using Chi-squared tests, we compared utilization during the 0–6 month period between patients who remained in the cohort and patients who disenrolled from KPNC at 6–12 months. We hypothesized that if the censored group had greater utilization than the non-censored group, then there would be evidence of high utilizers leaving the health system. We also conducted Chi-squared tests to determine whether censorship was associated with deductible limits and enrollment mechanisms.
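
A generic sketch of the kind of GEE logistic model described above, with a deductible-by-exchange interaction, might look like the following in Python (statsmodels). This is not the study's actual code, and every column name here is a hypothetical placeholder.

```python
# Generic sketch (not the study's actual code): a GEE logistic model for repeated
# 6-month utilization indicators with a deductible-by-exchange interaction.
# All column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

panel = pd.read_csv("utilization_panel.csv")  # one row per patient per 6-month interval

model = smf.gee(
    "ed_visit ~ C(deductible_level) * exchange_enrollee + psych_comorbidity"
    " + membership_duration + age + C(sex) + chronic_conditions + C(interval)",
    groups="patient_id",
    data=panel,
    family=sm.families.Binomial(),
    # An exchangeable working correlation keeps this sketch simple; other
    # structures could be substituted to handle the repeated measures differently.
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```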

The risk pathway from anhedonia to marijuana use may be incremental to risk of other drug use

We officially informed the local police about the study implementation and received approval from both national and local authorities. While this measure does not prevent participants from being arrested, especially when they are involved in illegal activities, it could reduce attrition. Furthermore, the COVID-19 pandemic and containment measures could pose challenges for study implementation. With the response plan developed for potential interruption scenarios, we believe the study will be implemented safely and will maintain a high level of data quality and intervention fidelity.

Marijuana is one of the most widely used illicit substances worldwide. Although it has been reported that marijuana use rates have stabilized or even decreased in recent years in most high-income countries, the continuing high prevalence of use among adolescents and young adults is a cause for concern. Such trends have heightened interest in the link between mental health problems and adolescent marijuana use to inform policy and prevention efforts. Understanding the comorbidity between psychopathology and marijuana use is complicated. Marijuana use is associated with numerous different psychiatric disorders, each of which tends to co-occur with the others. Additionally complicating matters is the potentially bidirectional nature of this association, with evidence that marijuana use may both predict and result from poor mental health. A parsimonious explanation of this comorbidity may be that a small set of transdiagnostic psychopathological vulnerabilities, which give rise to numerous mental health conditions, may also contribute to and result from marijuana use. Such transdiagnostic vulnerabilities may account for the pervasive patterns of psychiatric comorbidity with use of marijuana and other substances. One such transdiagnostic vulnerability is anhedonia, a diminished capacity to experience pleasure in response to rewards.

As a subjective manifestation of deficient reward processing capabilities, anhedonia is believed to result from hypoactive brain reward circuitry. While anhedonia is a core feature of a DSM-defined major depressive episode, it has also been linked to other psychopathologies comorbid with drug use, including psychosis, borderline personality disorder, social anxiety, attention deficit hyperactivity disorder, and post-traumatic stress disorder, and has therefore been proposed to be a transdiagnostic process. Departing from its consideration as a ‘symptom’ of a disease state, as in DSM-defined major depression, anhedonia has also been conceptualized as a continuous dimension along which there are substantial inter-individual differences. Individuals at the lower end of the anhedonic spectrum experience high levels of pleasure and robust affective responses to pleasurable events, whereas those at the upper end exhibit more prominent deficits in their experience of pleasure. Anhedonia operates as a ‘trait-like’ dimension that is stable yet malleable and is empirically and conceptually distinct from other emotional constructs, such as reward sensitivity, alexithymia and emotional numbing, sadness, and negative affect. Recent literature documents a consistent association between anhedonia and substance use in adults. To the best of our knowledge, there has been only one prior study of the association between anhedonia and marijuana use in youth, which found higher anhedonia levels among treatment-seeking marijuana users than healthy controls in a cross-sectional analysis of 62 French adolescents and young adults. Given the absence of longitudinal data, it is unclear whether anhedonia is a risk factor for or a consequence of adolescent marijuana use. Because youth with higher anhedonia levels experience little pleasure from routine rewards, they may seek out drugs of abuse, such as marijuana, that pharmacologically stimulate the neural circuitry underlying pleasure. Alternatively, repeated tetrahydrocannabinol exposure during adolescence produces enduring deficits in brain reward system function and anhedonia-like behavior in rodent models. In observational studies of adults, heavy or problematic marijuana use is associated with subsequent anhedonia and diminished brain reward region activity during reward anticipation.

Consequently, it is plausible that anhedonia may both increase the risk of marijuana use and result from marijuana use. Because early adolescence is a period in which the risk of marijuana use uptake is high and the developing brain may be vulnerable to cannabinoid-induced neuroadaptations, this study estimated the strength of bidirectional longitudinal associations between anhedonia and marijuana use among adolescents during the first 2 years of high school. The primary aim was to test the following hypotheses: greater baseline anhedonia would be associated with a faster rate of escalation in marijuana use across follow-up periods, and more frequent use of marijuana at baseline would be associated with increases in anhedonia across follow-ups. A secondary aim was to test whether these putative risk pathways were amplified or suppressed among pertinent sub-populations and contexts. Associations of affective disturbance and other risk factors with adolescent substance use escalation have been reported to be amplified among girls, early-onset substance users, and those with substance-using peers. We therefore tested whether associations between anhedonia and marijuana use were moderated by gender, history of marijuana use prior to the study surveillance period at baseline, and peer marijuana use at baseline.

Youth with higher levels of anhedonia at baseline were at increased risk of marijuana use escalation during early adolescence in this study. In addition, levels of anhedonia and marijuana use reported at the beginning of high school were associated cross-sectionally with each other. To the best of our knowledge, the only prior study on this topic found higher levels of anhedonia in 32 treatment-seeking marijuana users than in 30 healthy controls in a cross-sectional analysis of French 14–20-year-olds that did not adjust for confounders. The current data provide new evidence elucidating the nature and direction of this association in a large community-based sample, which advances a literature that has addressed the role of anhedonia predominately in adult samples.

The association of baseline anhedonia with marijuana use escalation was observed after adjustment for numerous possible confounders, including demographic variables, symptom levels of three psychiatric syndromes linked previously with anhedonia, and alcohol and tobacco use. Consequently, it is unlikely that anhedonia is merely a marker of these other psychopathological sources of marijuana use risk or of a non-specific proclivity to any type of substance use. The temporal ordering of anhedonia relative to marijuana use was addressed by the overarching bidirectional modeling strategy, which showed evidence of one direction of association and not the other. Ordering was confirmed further in moderator tests showing that the association of anhedonia with subsequent marijuana use did not differ by baseline history of marijuana use. Thus, differences in risk of marijuana use between adolescents with higher anhedonia may be observed in cases when anhedonia precedes the onset of marijuana use. Why might anhedonia be associated uniquely with subsequent risk of marijuana use escalation in early adolescence? Anhedonic individuals require a higher threshold of reward stimulation to generate an affective response and therefore may be particularly motivated to seek out pharmacological rewards to satisfy the basic drive to experience pleasure, as evidenced by prior work linking anhedonia to subsequent escalation of tobacco smoking. Among the three most commonly used drugs of abuse in youth, marijuana may possess the most robust mood-altering psychoactive effects in young adolescents. Consequently, marijuana may have unique appeal for anhedonic youth driven to experience pleasure that they may otherwise be unable to derive easily via typical non-drug rewards. The study results may open new opportunities for marijuana use prevention. Brief measures of anhedonia that have been validated in youth, such as the SHAPS scale used here, may be useful for identifying at-risk teens who may benefit from interventions. If anhedonia is ultimately deemed a causal risk factor, targeting anhedonia may prove useful in marijuana use prevention. Interventions promoting youth engagement in healthy, alternative rewarding behaviors without resorting to drug use have shown promise in prevention and could be useful for offsetting anhedonia-related risk of marijuana use uptake. Moderator results raise several potential scientific and practical implications. The association was stronger among adolescents with friends who used marijuana, suggesting that expression of a proclivity to marijuana use may be amplified among teens in environments in which marijuana is easily accessible and socially normative. The association of anhedonia with marijuana use escalation did not differ by gender or baseline history of marijuana use. Thus, preventive interventions that address anhedonia may benefit both boys and girls, aid in disrupting the risk of onset as well as the progression of marijuana use following initiation, and be particularly valuable for teens in high-risk social environments. While anhedonia increased linearly over the first 2 years of high school on average, the rate of change in anhedonia was not associated with baseline marijuana use or changes in marijuana use across time. Given that anhedonia is a manifestation of deficient reward activity, this finding is discordant with pre-clinical evidence of THC-induced dampening of brain reward activity and with prior adult observational data showing that heavy or problematic marijuana use is associated with subsequent anhedonia and diminished brain reward region activity during reward anticipation.

Perhaps the typical level and chronicity of exposure to marijuana in this general sample of high school students was insufficient for detecting cannabinoid-induced manifestations of reward deficiency. Longer periods of follow-up may be needed to determine the extent of marijuana exposure at which cannabinoid-induced impairment of reward functioning and the resultant psychopathological sequelae may arise. Strengths of this study include the large and demographically diverse sample, repeated-measures follow-up over a key developmental period, modeling of multi-directional associations, rigorous adjustment for potential confounders, high participation and retention rates, and moderator tests to elucidate the generalizability of the associations. Future work in which inclusion of biomarkers and objective measures is feasible may prove useful. Prevalence of heavy marijuana use was low in this sample, which precluded examination of clinical outcomes such as marijuana use disorder. Students who did complete the final follow-up had lower baseline marijuana use and anhedonia, which might impact representativeness. Further evaluation of the impact of family history of mental health or substance use problems, as well as use of other illicit substances, which was not addressed here, is warranted.

Although researchers in sociology, cultural studies, and anthropology have attempted, for the last 20 years, to re-conceptualize ethnicity within post-modernist thought and have debated the usefulness of such concepts as “new ethnicities,” researchers within the field of alcohol and drug use continue to collect data on ethnic groups on an annual basis using previously determined, census-formulated categories. Researchers use these data to track the extent to which ethnic groups consume drugs and alcohol, exhibit specific alcohol and drug using practices, and develop substance use related problems. In so doing, particular ethnic minority or immigrant groups are identified as being at high risk for developing drug and alcohol problems. In order to monitor the extent to which such risk factors contribute to substance use problems, the continuing collection of data is seen as essential. However, the collection of this epidemiological data, at least within drug and alcohol research, seems to take place with little regard for either contemporary social science debates on ethnicity or the ongoing debates within social epidemiology on the usefulness of classifying people by race and ethnicity. While the conceptualization of ethnicity and race has evolved over time within the social sciences, “most scholars continue to depend on empirical results produced by scholars who have not seriously questioned racial statistics.” Consequently, much of the existing drug and alcohol research remains stuck in discussions about concepts long discarded in mainstream sociology or anthropology, yielding robust empirical data that is arguably based on questionable constructs. Given this background, the aim of this paper is to outline briefly how ethnicity has been operationalized historically and continues to be conceptualized in mainstream epidemiological research on ethnicity and substance use. We will then critically assess this current state of affairs, using recent theorizing within sociology, anthropology, and health studies. In the final section of the paper, we hope to build upon our “cultural critique” of the field by suggesting a more critical approach to examining ethnicity in relation to drug and alcohol consumption.
According to Kertzer & Arel, the development of nation states in the 19th century went hand in hand with the development of national statistics gathering, which was used as a way of categorizing populations and setting boundaries across pre-existing, shifting identities.

The demographics and substance use patterns of our sample limit generalizability of our findings

The 3- and 6-month interview guides were shorter and focused on changes in life situations, health practices, social capital, substance use, and resilience that were observed in their quantitative measures. These guides were designed to help us understand any changes in our variables of interest and how they influenced self-management behaviors.

Data analysis of the qualitative and quantitative data occurred at the same time, but the two types of data were not integrated until both had been analyzed. In analyzing the quantitative data, we first assessed the distribution of all quantitative variables. We summarized baseline characteristics using means, standard deviations, medians, interquartile ranges, and counts and percentages of women by substance use group, depending on the variable’s distribution. We used generalized estimating equations (GEEs) with an identity link function and an unstructured correlation structure to describe how social capital and substance use influence HIV self-management across the three time points. Separate models were fit for each HIV self-management outcome. In addition to the effects of social capital and substance use, we examined independent effects of age, discrimination, and traumatic events by adding these covariates to the GEE models. All statistical analyses were conducted using Stata 14.0, with p values < .05 considered statistically significant. Qualitative data were managed using the qualitative data analysis program Dedoose and were analyzed by the research team using qualitative description methodology. Data were transcribed and examined by two research team members who coded the data using the constant comparative method, identifying patterns and themes. These team members met regularly during coding to discuss consistencies and inconsistencies in the data. A priori codes related to social capital, substance use, and self-management based on our literature review were applied first, and then inductive codes were applied.
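
The GEE described above was fit in Stata 14.0. As an illustrative parallel only, a statsmodels version with a Gaussian family (identity link) for a continuous self-management outcome might look like the sketch below; all column names are assumptions, and an exchangeable working correlation stands in for the unstructured one specified in the study.

```python
# Illustrative sketch only: the study fit these models in Stata 14.0. This version
# uses a Gaussian family (identity link) for a continuous self-management outcome
# measured at three time points. Column names are hypothetical; an exchangeable
# working correlation is used here for simplicity (the study specified unstructured).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

long_data = pd.read_csv("wlhiv_long.csv")  # one row per participant per time point

model = smf.gee(
    "self_management ~ social_capital + C(substance_use_group)"
    " + age + discrimination + traumatic_events + C(time_point)",
    groups="participant_id",
    data=long_data,
    family=sm.families.Gaussian(),  # identity link for a continuous outcome
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```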

Transcripts were revisited in a series of iterative steps to confirm the coding classification and that theoretical saturation was reached. Variations on the themes and negative cases were identified to help understand the full range of data within codes. A final codebook of themes, definitions, and exemplar codes was created to aid analysis. Data were coded and analyzed using Dedoose version 8.0.42. Study procedures are presented consistent with the Good Reporting of a Mixed Methods Study standards.

In our mixed methods study examining the influence of social capital on HIV self-management among WLHIV, we observed that social capital is important for self-management, and we were able to integrate new qualitative data on how it exerts that influence. Social capital has consistently been linked to improved health outcomes among adults living with HIV, but what has been missing from the literature is how it does so. Our quantitative data are consistent with this literature and clearly demonstrate that better social capital is associated with better self-management in WLHIV. Yet by qualitatively examining the components of social capital in depth, we describe how three key components of social capital can improve HIV self-management in this population: trust as a powerful yet scarce resource, the direct influence of a WLHIV's community on that trust, and a strong sense of one's own value. Each of these components required that WLHIV actively and positively engage with their social networks. However, for women trying to overcome a substance addiction, this can be particularly challenging, since aspects of their social networks can trigger substance use either directly or via the social capital mechanisms we describe. Furthermore, being identified as a current or former substance user may fracture existing social networks or prevent WLHIV from being more connected to their community, which could influence their access to certain types of social capital.

Our qualitative data suggest that rebuilding a strong social network, one that enhances trust in others and in oneself, increases a WLHIV's engagement with her community and ultimately helps her believe in her value as a person. Our data also provide insight into how nurses can help enhance social capital in this population, including having members of the health care team spend the time necessary to earn and keep the trust of WLHIV. Our quantitative data suggest that such efforts may help to improve HIV self-management behavior in this population. Recently, investigators described the importance of building trust in HIV care and engagement over time. Our data support those findings and highlight that the long-term trust-building process is critical for those living with chronic HIV infection, and perhaps this process may be even more critical among highly vulnerable populations. However, our qualitative data also reveal other ways to improve social capital, and obtain the benefits derived from it, that are more challenging to implement. We saw clear evidence that physical community can improve a WLHIV's health behaviors. Whether offering tangible goods, information, kindness, or effective use of the school infrastructure, our participants derived much-needed resources from their community, which led to an increased sense of value. This increased sense of value motivated WLHIV to engage in HIV self-management behaviors to help improve their health. These data suggest that continuing to advocate for policies and resources that connect neighbors to one another and emphasize our similarities can help improve the health of WLHIV. We also found quantitative evidence that WLHIV face challenges to engaging in HIV self-management that may be influenced by recent traumatic events. While this is consistent with other studies highlighting that levels of trauma exposure influence HIV outcomes, lifetime trauma is also ubiquitous in this population. In high-resource settings, such as ours, trauma and interpersonal violence are estimated to be experienced by 68% to 95% of WLHIV. Recognizing the influence of trauma on poor health outcomes in WLHIV, and recognizing that trauma can be successfully treated, clinicians and advocates are adopting trauma-informed care models for HIV care. Trauma-informed care models emphasize both the clinician's and the individual's recognition of and response to trauma and create an environment that is safe and empowering for WLHIV.

Our quantitative and qualitative data suggest that promoting social capital, both within the clinic setting and in the community, may temper the negative impact of trauma and provide previously untapped avenues for addressing substance use with WLHIV. However, we also found differences between our findings and the existing literature. A key difference is that we did not find diminished HIV medication adherence among current compared with previous substance users. Substance use is considered one of the main barriers to achieving higher rates of viral suppression once an HIV diagnosis is established. The use of different substances in individuals with HIV is associated with lower antiretroviral therapy adherence, increased missed clinic visits, and decreased knowledge of HIV status. This previous research suggests that fundamental resources such as money, time, and energy will mainly be used to acquire and use substances, with little attention directed to self-care. While we observed a relationship between substance use and global HIV self-management, we did not observe a relationship between substance use and HIV medication adherence. There are several possible explanations for this. First, the field of HIV has done a phenomenal job of teaching all PLHIV about the primary need to take HIV medications every day. As the medications have improved and many PLHIV are taking one HIV medication once a day, it has become easier to adhere to these medications. Thus, despite many WLHIV facing personal and structural barriers to HIV medication adherence, the importance of adherence coupled with simplified regimens may help them overcome these barriers. In addition, our sample of volunteer participants is small, and though we saw a negative effect of substance use on HIV medication adherence, our study may have been underpowered to detect a statistically significant effect. In addition to our small sample size, there are several other limitations that should be considered. First, all WLHIV were recruited from a single site in the Midwestern United States. We also did not use member checking to help enhance the rigor of our findings. However, we tried to overcome these limitations by employing several strategies, including triangulating both qualitative and quantitative data, having prolonged engagement between the community of WLHIV and the research team, and having multiple team members engaged in our data integration. Integration of quantitative data with our rich qualitative data led to new insights into how social capital can be fostered among WLHIV and how it can be used to overcome the challenges they face. This would not have been possible without data integration. In conclusion, social capital was associated with better HIV self-management and HIV medication adherence over time, perhaps offsetting the negative effects of substance use.

Social capital increased trust, fostered a strong sense of community, and helped WLHIV feel valued. These findings enhance understanding of how nurses can support WLHIV who are addicted to illicit substances and help them maintain sobriety and improve their HIV self-management.

HIV infection is a global pandemic, and the population of people living with HIV is growing due to successful treatment with highly active antiretroviral therapy. Although rates of HIV have been reduced in the United States among most groups as a result of successful public health efforts, sexual risk behavior and the subsequent acquisition and/or spread of HIV and other sexually transmitted infections are still of concern among men who have sex with men as well as drug-using populations. Thus, it is evident that, despite research and efforts to understand and curb sexual risk behavior within these vulnerable populations, additional work employing novel approaches is needed. Sexual risk behaviors can be viewed as a composite of numerous behaviors that collectively make up a complex behavioral phenotype. As with most complex phenotypes, sexual risk behavior is heterogeneous, and several factors contribute to the variance that can be observed from one individual to another. To date, the majority of work examining risk factors for sexual risk behavior phenotypes has focused primarily on psychosocial factors and/or other complex, heterogeneous behavioral phenotypes such as substance use behaviors as indicators for current or future sexual risk behavior. Ultimately these indicators, upon sufficient replication, become candidates for public health interventions that aim to prevent and reduce sexual risk behaviors. However, the trouble with many of these candidates is that they are too proximal to sexual risk behaviors and often co-occur, making it difficult to disentangle temporal precedence and ultimately limiting prevention efforts. One relatively novel approach is to examine intermediate phenotypes or endophenotypes, such as neurocognitive factors as well as biological factors. These factors are more distal to the onset of sexual risk behavior and thus are potentially more advantageous candidates for identifying vulnerable individuals and informing prevention efforts for sexual risk behavior. Studies in the literature examining neurocognitive and biological factors as indicators for sexual risk behaviors are limited. In fact, only two studies to date have examined neurocognitive factors, and none to our knowledge have examined biological factors as potential indicators. Although this paucity of research is surprising given previous work linking both neurocognitive and genetic indicators to other health-related behaviors, research has established the dopaminergic system as a common link between neurocognitive functioning and sexual behavior. The dopaminergic system has been shown to be involved in sexual arousal, motivation, and the subsequent rewarding effect of sexual behavior. Furthermore, DA in the human brain, specifically in the prefrontal cortex, has been shown to be necessary for proper cognitive functioning to occur, and high or low levels of DA in this brain region are known to contribute to individual cognitive differences in humans. The PFC is of particular importance when examining risk behavior in that executive functions such as decision-making, planning, and self-monitoring, as well as behavior initiation, organization, and inhibition, are largely dependent on PFC integrity.
Impairment in executive functioning may result in difficulties in assessing relationships between a person's current behavior and future outcomes, thereby resulting in choices and/or responses based on immediate rewards rather than long-term consequences, and a potential increase in the likelihood of participation in sexual risk behaviors. Thus, mechanisms responsible for maintaining a dopamine balance within the brain, and in particular the PFC, would appear to be good biological candidates for further exploration of an association between executive dysfunction and sexual risk behavior. One such candidate is catechol-O-methyltransferase, a mammalian enzyme involved in the metabolic degradation of released dopamine, particularly in the PFC.

Variation occurs along the temporal dimension as well

Maria responded by cutting on herself. In this instance, we have a patent metaphoric and literal alignment of psycholinguistic and palpable bodily expression. These events have created major emotional and psychiatric challenges for Maria. When Maria began the study, she had been admitted for a suicide attempt and ongoing postpartum depression. Prior to being admitted, she had had an eating disorder and had been self-harming and abusing cannabis for two years. Her SCID diagnosis shows mood disorder due to a general medical condition, postpartum major depressive disorder, brief psychotic disorder related to postpartum depression, separation anxiety disorder, PTSD, alcohol abuse, cannabis dependence, and an eating disorder. For Maria, cutting was an intended if fraught means of communication in the face of the emotional pain of abandonment. This was not the first time she cut; her practice began at age 11, following the sexual assault by her mother's boyfriend. In the narrative excerpt above, her motivation was explicit and her logic clear when her father proved unresponsive to her telephone call. In semiotic terms, cutting was a concrete bodily hurt that stood as a sign, the object of which was her emotional hurt, and the interpretant of which was her need for emotional connection. Along with bulimia that resulted in weight loss and "attention from guys," it formed a complex related to self-esteem and the need for intimacy from males in the context of a close but troubled relationship with a mother marked by alcoholism. Though cutting proved ineffective in communicating with her stepfather, it was apparently effective in a negative sense by addressing her emotional pain.

In this sense, cutting was for Maria an agentive practice and bodily technique operating in tandem with bulimia—one technique to take away pain and the other to gain attention—against the background of multiple interpersonal traumas. Finally, she was able to evaluate bulimia as something that worked, but in a bad way. Secrecy and isolation are themes for her, even though her mother and aunt discovered her actions and initiated a trajectory of consultation with a school counselor leading to hospitalization; in fact, Maria had already spoken to the counselor before this event without telling her mother. It was her mother's contact, however, that led the counselor to suggest treatment. Maria insisted that she had not cut herself since leaving the hospital. Dana was a 12-and-a-half-year-old Hispanic and African American girl who lived in a small town south of Albuquerque with her adoptive parents, younger brother, maternal uncle, and the uncle's fiancée. Dana was adopted with her brother Jordan at the age of five. She had five younger siblings with whom she still had contact. Dana and Jordan were originally placed with a family in Las Cruces, but they were sent to their current home because that family only wanted Jordan. Their adoptive parents suspected a history of sexual abuse because Dana would "play with herself" when she first arrived. Dana was diagnosed with ADHD at the age of five. She reported having depressive feelings since the first or second grade, even having suicidal ideation in the third grade. She was placed in Treatment Foster Care in a nearby town for one-and-a-half years, from the third through fifth grades, when she threatened to kill herself. When she was eight years old, she threatened her adoptive mother with a knife, which led to TFC for another one-and-a-half years. She narrated that the change was positive for her initially, but that her depressive feelings intensified later on. In February 2008, Dana began being more aggressive toward her adoptive parents, cutting herself and writing threatening letters. Her parents decided to send her to respite care for the weekend; in response, Dana threatened to physically hurt her father and was taken to the hospital by the police.

After returning home, Dana was better able to control her anger; however, this did not last—she engaged in behavior prohibited by her parents, stole from her school and from her parents, and was eventually suspended. Dana had been receiving psychiatric treatment for several years at the time of her participation in the study, including anger management and medication for ADHD. Her mother viewed much of Dana's aggression as typical adolescent growing pains or, in the mother's words, "that raging hormone period." Her diagnostic picture from the KID-SCID included ADHD, oppositional defiant disorder, and major depressive disorder. We have presented and analyzed these vignettes with an emphasis on experiential specificity and on the importance of youths' own voices under conditions of structural violence. Having examined the cutting experience of six among the 27 youths who narrated cutting and/or self-harm, it is evident that each has a highly distinctive profile while often invoking common themes of family relations4 and bodily experience, and we shall shortly elaborate a characteristic problematic of agency. Are these youths typical in any way, and if so, typical of what? The challenges faced by many adolescents, certainly in the "Land of Enchantment" that is New Mexico's self-description, are recognizable among these young people, often in amplified form and complicated by additional factors that amount to extraordinary conditions, both personal and structural. Their situations are often vulnerable and precarious, but there are various forms of vulnerability and precarity. They are, for example, not children who live "in the streets" like homeless children without families, but children who are "in the system," with a trajectory back and forth from home to various settings of institutional care. These institutions vary along the axis of emphasizing what Hejtmanek has characterized as psychiatric custody and therapeutic process, terms that bear overtones of the carceral and the caring, respectively.

Indeed, conditions in some of the facilities where we interviewed study participants were sufficiently oppressive to count as just as much a form of structural violence as conditions of poverty, gender violence, and gang activity. Yet the larger-scale politics of health care created another form of structural violence in the form of a severe contraction of services under the regime of "managed care" that was ongoing throughout the duration of our project. Payment for both residential treatment and day treatment was approved with decreasing frequency, and the average length of covered stay decreased drastically. From the standpoint of CPH clinicians, this meant that patients were often being discharged either to disorganized family environments, which did not provide sufficient opportunity for their condition to stabilize, or to less intensive levels of care for which they were not prepared. Yet whether the experience leans toward the carceral or the caring depends not only on the character of the institution but on the different pathways into the hospital, including through the police, the courts, physicians, families, and, in some instances, volunteering. Once in the system, all are exposed to and inculcated with discourses of diagnosis, coping skills, and medication. Finally, although cutting is prominent among these youths who have been psychiatric inpatients, on the one hand not all of them are cutters, and on the other not all cutters come to be psychiatric patients. What is critical in making anthropological sense of their experience is that suffering is not a barrier to interpretation and understanding, because it partakes of the broader spectrum of human experience. Moreover, while we have specific existential, ethical, and political concerns for the "extraordinary conditions" of this particular group of adolescent self-cutters who are psychiatric inpatients, their experience enacts and partakes of "fundamental human processes" and may highlight them in a way from which we can learn as much about the human condition as about a distinct pathological or cultural process. In other words, regardless of how troubled any one of them might be or appear to be, a careful look at their experience reveals the operation of fundamental human processes in a way that allows them to be seen not just as idiosyncratic individuals or representatives of a marginal category of afflicted subjectivity, but as having much in common with those who might more readily be classified as "typical." With these considerations in mind, we must outline the range of issues that define the domain of cutting for these youths in treatment as a first step in understanding similarities and differences in their modes of bodily being in the world. Is cutting a learned behavior, and if so, can it be called a "technique of the body" in the sense in which Mauss used that term?
The answer is yes in situations where it is associated with the cultural complex defined by young people who define themselves as “emo,” “goth,” or “scene.” In this circumstance, the delicate cuts are, as one participant’s mother said, like a “badge of honor.” There is indeed an element of technique evident in one girl’s report that while hospitalized another girl patient told her “you are cutting yourself the wrong way, you are supposed to cut down.” Particularly among SWYEPT participants, this learning could take place among peers in the hospital or residential care facility as well as at school or from siblings at home, and the mother of one of our male participants acknowledged that all three of her sons were “cutters.”

Nevertheless, it is possible for cutting to be primarily a self-discovered practice, evident in one girl's comment that "I was shaving my arm and I accidentally cut myself and I liked the way that it felt and that is when I started cutting. That is when I started purposefully cutting myself on my wrist." These findings compare with a study of participants in online message boards, which identified a substantial group of cutters who had never heard of the practice before engaging in it, some even reporting that they thought they had "invented" it, not knowing before their first, sometimes accidental, cut that it would make them feel better, while a third of respondents had heard of or knew someone who cut before they began; self-learners typically began cutting at age 16, while those who learned from others began at age 14. Cutting as an Emo technique is also most often associated with the apparently careful use of a razor blade and fits the model of "delicate cutting," whereas among SWYEPT participants there was in addition a range of implements used: fingernails, pencil, knife, toothpick, thumbtack, scissors, paperclip, binder ring, and broken glass. Using such a range of implements is not unique to these youths. Also in relation to Emo/Goth culture, cutting stands in relation to tattooing and body-piercing, the principal diacritics being that the latter are typically done by others rather than by oneself and are often for performative display, while cutting is typically concealed. Girls who wear "lots of bracelets" may be both adorning themselves and concealing the scars on their wrists. Placement is stereotypically on arms and legs, wrists and ankles, and one is inclined to interpret as more idiosyncratic such instances as those we recorded of poking under one's fingernails, cutting one's thumb, or cutting one's stomach. Hodgson's survey respondents often tried to pass by concealing their scars or creating cover stories, but sometimes also disclosed their cutting with an excuse for doing something wrong or a justification that it was a way to deal with emotional pain; these disclosures did not include display, as with stylized body modification. With respect to severity, the continuum between delicate and deep cutting is significant among participants. On the mild end of the continuum, there are reports of scratching without drawing blood. Even dangerously deep cutting may be unintentional and, in the words of one mother, an instance of "going overboard" rather than aimed at serious self-harm or suicide. Likewise, even superficial cuts can be overdone, as in the report by one mother that her daughter had cut herself lightly 63 times on various parts of her body. A final element of excess is the instance in which a boy carved his name in his leg and another in which a girl carved her boyfriend's name in her arm. These are perhaps too conveniently expressive of gender stereotypes, specifically of the narcissistic boy and the infatuated girl. Onset of cutting can occur at quite a young age, and its duration varies as well.

Traditional training is the common pedagogical method for learning clinical skills

Rather than allocating points proportionally according to the results obtained, the mBAS is reversed, going from 1 to 5. Overall scores ranged from 5 to 25. A passing score of 14 or lower was also set by the experts; a failing score was above 14. Assessments were made in two rounds. In the first round, raters independently rated the video. If there were adjacent raw disagreements between raters on any items, they watched the video together, discussed it, and scored it again. Medical educators aim to identify the best methods to prepare students for clinical practice. Trainees rarely learn BBN in real clinical practice due to the paucity of opportunities32,50 and the fact that clinical preceptors are rarely available to give feedback. At pre-test, our study shows a low level of participant experience and a lack of BBN skills, especially in the TG. Chiniara et al. define the "simulation zone" as areas in which simulation education may be better suited than other methods. BBN is an example of the HALO quadrant: high impact on the patient and low opportunity to practice. This feasibility study assessed the impact of a four-hour ED BBNSBT compared to clinical internship. It was hypothesized that BBNSBT would have the potential to increase participant self-efficacy in BBN communication and management, improve adherence to BBN stages and processes, and improve communication skills during BBN. Our results revealed that this training increased self-efficacy perception. Participants had a low level of self-efficacy at pre-test.
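As a rough illustration of the scoring and two-round rating logic described above, the sketch below assumes five mBAS items scored on the reversed 1-5 scale, a pass threshold of 14 or lower, and treats any item on which the two raters' scores differ as a disagreement triggering joint review; the item structure and reconciliation rule are assumptions for illustration, not the study's published rubric.

```python
# Illustrative sketch only: item count and the disagreement rule are assumptions
# based on the description in the text (five reversed items, total 5-25, pass <= 14).
from typing import List

def mbas_total(item_scores: List[int]) -> int:
    """Sum the five mBAS items, each scored on the reversed 1-5 scale (total 5-25)."""
    assert len(item_scores) == 5 and all(1 <= s <= 5 for s in item_scores)
    return sum(item_scores)

def mbas_passed(item_scores: List[int]) -> bool:
    """A total of 14 or lower is a pass; anything above 14 is a fail."""
    return mbas_total(item_scores) <= 14

def needs_joint_review(rater_a: List[int], rater_b: List[int]) -> bool:
    """Flag the video for a second, joint rating round when any item's scores differ."""
    return any(a != b for a, b in zip(rater_a, rater_b))

# Example: two raters disagree on one item, so the video is reviewed together.
print(needs_joint_review([2, 3, 3, 2, 4], [2, 3, 2, 2, 4]))  # True
print(mbas_passed([2, 3, 2, 2, 4]))  # total 13 -> True (pass)
```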

After the BBNSBT, the TG reported being more confident about their knowledge and application of BBN and about their ability to perform BBN compared to the CG. This confirms the results of another, smaller study, which showed an improvement in confidence and self-efficacy. These findings may be explained by Bandura's social cognitive theory, which suggests four ways to enhance self-efficacy that we identify in the BBNSBT: 1) enactive attainment; 2) vicarious experience; 3) verbal persuasion; and 4) psychological safety during the simulations. Moreover, the perceived self-efficacy of students in the CG with more clinical experience decreased. This result could have different potential explanations, notably that the pre-test may have led to introspection and reflection about their BBN and communication skills. Communication with patients and their families is one of the Accreditation Council for Graduate Medical Education Milestones for EM residents, specifically the fourth level of BBN. Our research used two validated assessment tools that allow for standardization of the evaluation and training. The results demonstrate that BBNSBT using role-playing and debriefing enhances participant BBN learning and performance compared with the traditional learning paradigm and direct immersion in acute clinical situations. BBNSBT offers the opportunity to teach BBN and communication skills to students and young residents in a psychologically safe environment, preventing harm to patients and family members. It allows each participant to announce bad news and observe several BBN simulations with debriefings. By contrast, in the traditional curriculum, role modeling at the bedside could have a negative impact on patients and relatives when medical students or residents engage in inappropriate communication behaviors, such as not keeping patients or family members adequately informed or using medical words they do not understand. More students in the TG reached the cut-off scores: 73% for SPIKES and 62.2% for the mBAS vs 45.2% and 35.5% in the CG.

These results demonstrate the relevance of BBNSBT in communicating bad news in the ED. However, the difference between the groups for the mBAS cut-off score is not significant. BBNSBT probably focuses more on SPIKES than on communication behavior. It may be necessary to create an advanced course centered on communication skills rather than on SPIKES. Despite this, BBNSBT offers experiential learning for participants. From the simulation experience, the debriefing process leads students to explore their frames, incorporate new frames such as SPIKES skills, and re-practice these new skills. This process allows knowledge to be acquired through experience. Moreover, participants had access to ED BBN experts for four hours, which, unfortunately, is unlikely to happen in real clinical practice. Additional data analyses allowed us to address a new question: Is BBNSBT more useful for students with less than one year of clinical experience? We found a statistically significant difference in the pre-test. Students with limited clinical experience reached the same level of BBN skills as students with more clinical experience after the BBNSBT. The gap between these groups could be filled by simulation training, without the pitfalls of stress and discomfort of direct clinical exposure. No study has previously focused on this question. In fact, BBNSBT used a step-by-step process involving novice participants to bring them to a higher level. The first step involved theoretical explanations given via video, discussions, and lectures. Each simulation, and especially each debriefing, further enhanced the participants' skills. One strength of the study is that we paid special attention to the theoretical background upon which the training and evaluation were based, using the widespread SPIKES28 theoretical model and the INACSL Standards of Best Practice for SimulationSM. Moreover, the simulations were well designed, the debriefings were standardized, and the facilitators were trained and experienced. We believe that it is mandatory to meet the INACSL Standards of Best Practice, as well as work with simulation experts, to obtain positive results with simulation training.

The next steps for research and pedagogical method improvement can be identified based on these results. Further research is needed to investigate the role of an advanced course in BBN. As BBN is not a required skill for EPs, it would be interesting to investigate whether BBNSBT is feasible and effective in other areas such as obstetrics, intensive care units, etc. Finally, we think that e-learning preparation before BBNSBT, as described for a training on managing low urine output, could replace some of the in-person time.

Myanmar, formerly Burma, and now administratively designated the Republic of the Union of Myanmar, is a sovereign state in Southeast Asia. Myanmar has a diverse population of 53 million, comprising 135 different ethnic groups, according to the United Nations Population Division. Recently, the military regime that long hampered the country's development was replaced by a civilian government. Socioeconomic development in Myanmar lags far behind nearby countries, as does its healthcare system. There are shortcomings in maternal care, pediatric healthcare, and infectious disease treatment, as well as in medical accessibility and quality. Strengthening medical systems by improving the standard of emergency care has been known to reduce mortality and morbidity from both communicable and non-communicable diseases. A large proportion of the global mortality and morbidity from various diseases is found in low- and middle-income countries. Unfortunately, the emergency care systems required to address these shortcomings are not well established in most LMICs, including Myanmar. Formal emergency care in Myanmar is only available in hospitals located in urban areas. Rural hospitals can provide only limited emergency care to patients. While preparing for an international sporting event, the Myanmar government started to formalize efforts to develop a formal emergency medicine training program. Apart from the formal EM training program in the capital city, Nay Pyi Taw, frontline healthcare facilities across the country are not capable of providing life-saving emergency care. In most rural hospitals, the outpatient department usually covers emergencies; there is no separate area or facility for emergency treatment. Rural hospitals offer access to few medical specialties with minimal, if any, laboratory services. Public prehospital ambulance transportation service is virtually unavailable in rural areas. Several tools have been used to evaluate emergency care capability. Most focused primarily on the availability of hardware or infrastructure rather than functional aspects of emergency care.10 Some researchers have tried to measure performance of EM practice in resource-limited settings, which has resulted in a demand for a comprehensive EM assessment tool for LMICs. Recently, a novel approach based on work in the field of obstetrics, called sentinel condition and signal function, was adapted for EM by the African Federation for Emergency Medicine. Based on this concept, the AFEM developed a standard preliminary tool called the Emergency Care Assessment Tool, which has been suggested to be more useful than previous evaluation tools in assessing EM systems. Our study incorporated the concept of ECAT as a tool to analyze Myanmar's emergency care systems. We investigated the capability to deliver emergency care in different levels of hospitals located in several regions of Myanmar. This facility-based survey was conducted between February 7, 2018, and April 3, 2018.
With the help of two Myanmar doctors and three nurses who were invited to Korea for training, survey sheets were distributed to the doctors in charge of emergency medical care at nine hospitals. Our primary criterion for selecting hospitals was access to e-mail and online messaging at the time of the survey, to allow for our interactions with them. The nine hospitals, including five at which our initial contacts were employed, were scattered across five states in Myanmar and were believed to partially represent both urban and rural regions.

The nine hospitals were grouped into three levels according to the bed capacity of the hospital and the number of physicians. Survey sheets were prepared in English using the ECAT and delivered to responsible officers by e-mail. The ECAT encompasses six sentinel conditions that threaten life, and the related signal functions that alleviate them. The researchers explained the meaning of each question in the survey to the original five Myanmar contacts, and they, in turn, conveyed this information to the Myanmar doctors who took part in the study. In the case of any questions that were initially omitted on the completed surveys, clarification was provided, and the questions were then revisited and answered by the respondents. The survey included questions about the general status of each hospital, such as the number of staff members, the number of hospital beds, and the annual patient load. The remaining questions addressed the performance of emergency signal functions, the products for signal functions, and the availability of emergency facility infrastructure. We coded data using standard descriptive analyses with Microsoft Excel 2015. Qualitative research methods involved thematic analysis of answers.

In performing signal functions for each of the sentinel conditions, basic-level hospitals were revealed to be weak in trauma care. Among the 12 signal functions related to trauma care that are deemed essential in basic-level hospitals, more than two functions were unavailable at all four hospitals. One hospital could not provide half of the trauma-related essential signal functions [Matupi Hospital: trauma protocol implementation, pelvic wrapping, cervical spine immobilization, basic fracture immobilization, immediate cooling care for burns, and fracture reduction]. None of the four basic-level hospitals had the resources to treat burn patients or provide pelvic wrapping. The survey questions regarding infrastructure revealed that none had a specialized resuscitation area for critical patients, and three of the hospitals did not have a triage area. There was neither a trauma protocol nor a cervical immobilization device at any of the hospitals. Most signal functions for the other five sentinel conditions were generally available in these basic-level hospitals, with the exception of treatment for common toxidromes, which only half could provide. Two of the four intermediate-level hospitals indicated that they could provide all emergency signal functions. The other two hospitals, however, were found to provide a limited set of signal functions. They did not have a trauma protocol, nor could they provide reduction for patients with bone fractures. Cervical immobilization, pelvic wrapping, burn care, and treatment of compartment syndrome were also unavailable. Moreover, one hospital could not perform defibrillation or mechanical ventilation support, nor administer intramuscular adrenaline, which is important for cardiopulmonary resuscitation. Two hospitals could insert central venous catheters and gain intraosseous access, which is important in shock management. In terms of resources, only two of the four had a separate triage area for emergency patients. All four hospitals had an isolation room, an obstetric/gynecologic area, and a decontamination room. We surveyed hospitals on their reasons for non-compliance with signal functions, asking them to choose from among five possible causal factors. The first was training issues, taking the form of a lack of education. The second factor was related to the lack of availability of appropriate supplies, equipment, and/or drugs. The third pertained to management issues, such as the staff being unfamiliar with the functions, and cases where other equivalent procedures could have handled the conditions.
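To make the descriptive tabulation concrete, here is a minimal sketch of how availability of the essential trauma signal functions might be counted per hospital. The hospital names and availability values are hypothetical placeholders rather than the study's data, and the actual analysis was carried out with descriptive statistics in Microsoft Excel rather than in Python.

```python
# Illustrative sketch only: hospitals and availability values are hypothetical.
import pandas as pd

# Rows: basic-level hospitals; columns: trauma-related essential signal functions
# (1 = available, 0 = unavailable), standing in for the coded ECAT responses.
availability = pd.DataFrame(
    {
        "trauma_protocol": [0, 0, 0, 0],
        "pelvic_wrapping": [0, 0, 0, 0],
        "cervical_spine_immobilization": [0, 0, 0, 0],
        "basic_fracture_immobilization": [1, 0, 1, 1],
        "burn_cooling_care": [0, 0, 0, 0],
        "fracture_reduction": [1, 0, 1, 1],
    },
    index=["Hospital A", "Hospital B", "Hospital C", "Hospital D"],
)

# Number of unavailable essential trauma signal functions per hospital.
missing_per_hospital = (availability == 0).sum(axis=1)
print(missing_per_hospital)

# Hospitals missing more than two essential functions, mirroring the kind of
# summary reported in the text.
print(missing_per_hospital[missing_per_hospital > 2])
```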

The epidemiological results provide the first available data on MSI from a Rwandan hospital

Patient demographics demonstrate that a majority of MSI cases were male and younger than 35 years of age. Major mechanisms of trauma included RTAs, falls, and assault. Of those involved in RTAs, a substantial proportion involved motorcycles, while over one-quarter of accidents involved a pedestrian being struck. The majority of patients were transported from another health facility, while other patients were transported from the street or from home. Clinical characteristics of this cohort in Table 1 demonstrate approximately equal numbers of open and closed fractures. The most common anatomical regions of these fractures and injuries included the lower extremity, upper extremity, craniofacial region, abdomen-pelvis, and thorax. The most common abnormal vital signs included tachycardia, hypotension, and tachypnea. Approximately 1 in 10 patients had a Glasgow Coma Scale score of 12 or below, with 24 patients' scores ranging from 9 to 12 and 14 patients' scores ranging from 3 to 8. Care delivery metrics, divided between ED outcomes and in-hospital outcomes, are shown in Table 2. In the ED, a trauma intervention was performed for approximately three out of every four patients. The most common trauma interventions included traction or splinting, wound care, and hemorrhage control. Antibiotics and tetanus antitoxin were also commonly administered for fractures, although they were more frequently given in the case of open fractures. Other common emergency procedures included analgesic medication, intravenous fluid infusion, and endotracheal intubation, along with less common interventions such as transfusion of blood products and oxygen supplementation. In over four out of every five cases, an emergency consultation was obtained, most commonly from orthopedics, acute care surgery, and neurosurgery.

In a majority of cases, laboratory tests and imaging tests were ordered. Nearly three of every four patients were admitted to the hospital, with the most common admitting wards being orthopedics, surgery, and neurosurgery. As seen in Table 2, analysis of in-hospital care and outcomes showed that a majority of patients required operative management. The most common procedures included open reduction, wound debridement, and closed reduction with external fixation. A smaller percentage of the in-hospital patients required intensive care after admission or suffered from hospital complications. Patient outcomes varied from discharges to transfers to deaths in hospital. Baseline characteristics in Table 3 highlight the similarities and differences among the total of 674 patients seen prior to and following the implementation of the EMTP. Several patient characteristics did not differ between the pre-EMTP and post-EMTP cohorts, including age, gender, proportion of open fractures, proportion of RTAs, heart rate, and systolic blood pressure. Overall, there was significant improvement in ED outcomes after the implementation of EM training at UTH-K. Results demonstrate improvement in the three outcomes of interest. Specifically, there was a decrease in the ED mortality prevalence in patients with MSI by 89.9%, from 2.51% to 0.25%. In the population of patients seeking emergency care for MSI, this study found significant improvements in mortality and complication rates, length of stay, and an array of secondary outcomes in association with the implementation of the EMTP. The training curriculum taught by EM faculty is thought to have played a key role in the improvement of these outcomes. This curriculum included specific longitudinal educational trainings on the diagnosis and treatment of MSI provided through lectures and workshops that all residents completed. These findings help to demonstrate the potential importance of investing in the training of formal EM specialists to address the large burden of morbidity and mortality associated with MSI in LMICs.

It has been previously proposed that relatively simple interventions in areas such as emergency triage, communication, and education and supervision could lead to reductions in LMIC mortality in the ED, where up to 10-15% of all deaths occur. The study demonstrates a temporal association between MSI outcomes in the ED and the inception of an EMTP, underlining the importance of developing such programs. While many LMIC governments do not list EM in their medical education priorities, they could consider doing so to tackle the treatment of such a high volume of patients with acute health problems. Understanding the patient population, anatomical distribution of fractures, and mechanisms of injury could allow for more practical incorporation into the EMTP's future MSI curriculum. This understanding may also aid in proper diagnosis and treatment of the growing burden of MSI cases, a critical step for improving patient outcomes. Moreover, these epidemiological results, to an extent, confirm those of another research team that studied traumatic injuries in Rwanda's pre-hospital service, an epidemiological profile that showed nearly one-fourth of injured patients suffered from a fracture.3 Most importantly, the epidemiological patterns and EMTP results suggest the need for reducing MSI morbidity and mortality through expanding emergency care training programs. Although this evidence suggests an association between Rwanda's first EM residency program and improved outcomes among patients with MSI, further prospective evaluation of cases with MSI is needed to demonstrate the reliability of these improvements over time. Moreover, similar epidemiological and training evaluation studies are needed in other African countries to effectively understand, develop, and scale MSI treatments. Although we used formalized protocols, the design resulted in an inability to identify a proportion of cases due to incomplete medical records and some missing data among included cases, which could have biased the results.

Overall, it appears that some intervention data were prioritized and thus better collected than others. For example, the fact that oxygen supplementation was recorded as being used less often than intubation demonstrates an inherent bias in the recording of interventions that are now more commonplace in the EM setting. In another example, although the GCS and vital signs in the pre-EMTP group are slightly different, it is worth noting that preliminary results show both GCS and vital signs were better recorded in the post-EMTP group than in the pre-EMTP group. As better documentation practices were emphasized during EMTP implementation, this improvement demonstrates the inherent differences in provider training between the groups, which may have led to more accurate GCS scores and vital signs in the post-EMTP group. The present study was performed at a single tertiary-care hospital, which may limit the generalizability of the findings to health delivery venues with less resource availability. Furthermore, due to the lack of detailed information on prehospital and interfacility care provided for patients transported from various origins, controlling for prehospital interventions was not possible. Future studies should attempt to account for such variables, especially given that a majority of patients presented from other facilities. Future studies should also attempt to differentiate patients based on varying levels of acuity, as this study's inclusion of transfer patients likely led to a higher-acuity patient population. Additionally, general medical, technological, and other secular advances over the course of the study cannot be ignored, as healthcare does not occur in a vacuum. Many advances in Rwanda's healthcare system have occurred in the last several years, as previously noted, and the EMTP's impact cannot be isolated due to the observational nature of this study. However, it is worth noting from the results that changes to patient outcomes in the ED setting outperformed those same outcomes in the in-hospital setting over the same course of years, minimizing the role that technological advances played in improving outcomes. Lastly, the inclusion of patients with life-threatening injuries who also have fractures had the potential to confound results. Future research might exclude patients who require operative intervention for indications external to musculoskeletal trauma.

Short-term outcomes – including return emergency department visits – after discharge from the ED are used as internal quality metrics, as short-term revisits might represent medical errors or failures in care. Although interventions to reduce return visits have largely been unsuccessful, it is possible that these efforts did not adequately target high-risk patients. The ability to accurately identify which patients are more likely to revisit the ED could improve treatment and disposition decisions and also allow EDs and health systems to develop more focused interventions. Previous work has identified some predictors of return visits, although these studies are limited by investigating only a subset of patients, restriction to one or a few sites, focus on non-U.S. hospitals, reliance on complicated instruments, focus on medical errors, focus on admissions, or use of an overly broad definition of discharge failure. We used a unique dataset with encounter-level data to evaluate the predictors of return visits.
Our goal was to identify which patient demographics and medical conditions were most associated with short-term revisits.

In addition, we hypothesized that frequency of recent previous visits – specifically, the number of visits within the previous six months – would have a stronger association with return visits than other patient characteristics, and that this pattern would be observed even after controlling for hospital and community characteristics. Data were recorded in the medical record at each hospital. Vituity collects these data through monthly electronic data feeds from its medical billing company, Med America Billing Systems, Inc., which stores records in Application System/400 and PostgreSQL. Patient visits were linked through the Medical Person Identification number – a unique patient identifier derived by an algorithm taking into consideration patient name, date of birth, Social Security number, and address. This methodology allowed for linkage across sites, although visits at non-Vituity sites were not observable. Any visit had the potential to be defined as an index visit. Patient characteristics included age, sex, insurance type, and the number of ED visits the patient had in the six months prior to the index visit. We reduced previous ED visits to an indicator variable for two or more previous visits in order to identify a characteristic that was easily observed and easy to apply to patients in real time. Visit characteristics included acuity level, primary diagnosis, and Charlson comorbidity index. Primary diagnoses were categorized using International Classification of Diseases, 9th and 10th revisions, Clinical Classifications Software categories. These categories were developed and defined by the Healthcare Cost and Utilization Project, under the AHRQ, and this scheme has been used in a number of studies. Because of the large number of categories, we further restricted diagnoses to those that had at least 10,000 observations and were associated with 14-day revisits in bivariate analysis; among these, we included the five most common diagnoses for index visits and for revisits. The Charlson comorbidity index was calculated for all visits based on up to 12 separate ICD codes per visit. Hospital characteristics included size and turnaround time to discharge for 2015. TAT-D is a quality metric measuring the median time between patient arrival and discharge at the hospital level for a given year. Volume was broken into four categories as defined by the Centers for Medicare & Medicaid Services: fewer than 20,000 encounters = low volume; 20,000-39,999 encounters = medium volume; 40,000-59,999 encounters = high volume; and greater than or equal to 60,000 encounters = very high volume. Community characteristics comprised zip code- and county-level characteristics: median household income for the zip code, number of hospitals per 1,000 population in the county, and county. We excluded from the study providers working for the firm for fewer than 60 days within the study period or accounting for fewer than 60 encounters. To test whether there was a different likelihood of return visit according to acuity level, we included interaction terms between MD/DO and acuity level; given the difference in scope of practice for APPs, interactions between APP and acuity level were not modeled. Over the study period, there were 8,334,885 index encounters. After excluding visits resulting in a disposition other than discharge and excluding visits with missing data, the total sample size was 6,699,717.
Table 1 shows the patient, visit, hospital, and physician characteristics at the index visit for all encounters, stratified by discharge vs admission. These descriptive statistics are also shown for encounters resulting in a 14-day return and for those who returned and were admitted to the hospital. In the multivariate model including patient, hospital, and community characteristics, the strongest predictor of a return visit within 14 days was whether or not the patient had two or more visits in the previous six months: OR = 3.06.
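The model just described can be sketched as follows. The file name, variable names, and the use of a plain logistic regression are illustrative assumptions; the study's actual specification (for example, how clustering by hospital or provider, or the MD/DO-by-acuity interactions, were handled) may differ.

```python
# Illustrative sketch only: hypothetical variable names on an encounter-level
# extract of discharged index visits.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

visits = pd.read_csv("index_visits.csv")  # hypothetical data file

# Indicator for two or more ED visits in the six months before the index visit.
visits["frequent_prior_visits"] = (visits["prior_visits_6mo"] >= 2).astype(int)

# CMS-style annual volume categories used for hospital size.
visits["volume_category"] = pd.cut(
    visits["annual_encounters"],
    bins=[0, 19_999, 39_999, 59_999, np.inf],
    labels=["low", "medium", "high", "very_high"],
)

model = smf.logit(
    "revisit_14d ~ frequent_prior_visits + age + C(sex) + C(insurance) "
    "+ C(acuity) + charlson_index + C(volume_category) + median_income_zip",
    data=visits,
).fit()

# Exponentiated coefficients give odds ratios; the study reports OR = 3.06
# for the two-or-more-prior-visits indicator.
print(np.exp(model.params))
```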

Caulkins and colleagues show that conventional sentencing is significantly more cost effective

Even putting aside the questionable pharmacological and moral aspects of this differential policy, there is no evidence whatsoever for its effectiveness in controlling crime. The crack mandatory sentences were trimmed somewhat in 2007, and the Supreme Court recently acted to restore some judicial discretion in these cases; whether these changes will translate into a closing of the large racial differential remains to be seen. The optimal level of drug law enforcement is surely well above zero, but just as surely, well below current levels. Caulkins and Reuter argue that we could reduce the drug prisoner population by half without harmful consequences; they note that this would still leave us with a system "a lot tougher than the Reagan administration ever was." Kleiman suggests tactics for getting more mileage out of less punishment through the use of small, quick sanctions, strategically deployed. In 2005, there were about 1.8 million people in substance abuse treatment in the US, about 40 percent for alcohol, 17 percent for opiates, 14 percent for cocaine, and 16 percent for marijuana. There are certainly many thousands of people who need treatment and are not receiving it. Whether expanding the available treatment capacity would bring them in is an open question. We should be wary of assuming that a purely "public health" approach to drugs can work; the police and courts play a crucial role in bringing people into treatment – increasingly so with the expansion of drug courts and initiatives like California's Proposition 36, the 2001 law that permits treatment in lieu of incarceration for those convicted for the first or second time of nonviolent drug possession.

For most primary drugs of abuse, criminal justice referrals are a major basis for treatment: in 2005, 57 percent of marijuana treatment, 49 percent of methamphetamine treatment, and 27 percent of smoked cocaine treatment. But 36 percent of clients in alcohol treatment were referred by the criminal justice system, so legal status may not be the crucial lever. In a sophisticated cost-effectiveness analysis, Rydell and Everingham estimate that the U.S. could reduce cocaine consumption by 1 percent by investing $34 million in additional treatment funds, considerably cheaper than achieving the same outcome with domestic drug law enforcement, interdiction, or source country controls. But treatment effects are usually estimated using pre-post change scores that are vulnerable to two potential biases. First, the post-treatment reduction could reflect a simple "regression to the mean," in which an unusually extreme period of binge use would be followed by a return to the user's more typical levels, even in the absence of treatment. Second, treatment pre- and post-tests are vulnerable to selection biases because clients who enter and remain in treatment until post-treatment measurement are a non-random and perhaps very unrepresentative sample of all users. Regression artifacts would inflate treatment estimates; selection biases could either inflate or deflate the estimates. We believe that the full weight of the evidence makes it clear that treatment is both effective and cost-effective, but until these problems are better addressed, we cannot be sure that the benefits of expanded treatment would be as large as Rydell and Everingham implied. Even its most passionate advocates recognize that treatment's benefits are often fleeting. About three quarters of heroin clients and half of cocaine clients have had one or more prior treatment episodes. Forty to sixty percent of all clients will eventually relapse, though relapse rates are at least as high for hypertension and asthma treatment.

Importantly, Rydell and Everingham recognized that treatment can provide considerable health and public safety benefits even if it only reduces drug use while the client is enrolled. Held up to a standard of pure prevalence reduction, treatment is unimpressive. But by the standards of quantity reduction and harm reduction, treatment looks pretty good. American providers – steeped in the Twelve Step tradition – recoil at the phrase "harm reduction," but it is a service that they can and often do perform quite well. Perhaps the most socially beneficial treatment modality is one that some are reluctant to view as treatment at all – methadone maintenance for heroin addicts. In 2006, there were 254,049 people receiving methadone, only about 20 to 25 percent of all opiate addicts in the US. The gap is partly due to spotty service provision outside major cities, but even in urban centers, many addicts won't voluntarily seek out methadone, preferring heroin even with its attendant risks. But Switzerland, the Netherlands, and Germany have amassed an impressive body of evidence that hard-core addicts significantly improve their health and reduce their criminality when they are able to obtain heroin directly from government clinics. Similar ideas were rejected in the US several decades ago, but perhaps it is time for a second look. In the US, the dominant form of prevention takes place in the classroom, generally administered by teachers. Ironically, prevention is the least well funded but most thoroughly tested drug intervention. Drug prevention has very modest effects on drug and alcohol use; e.g., the mean effect size in the most recent comprehensive meta-analysis was about 1/20th of a standard deviation. Considering that 1/5th of a standard deviation is usually considered the benchmark "small" effect size, this is not very encouraging. Making matters worse, the single most popular program, Drug Abuse Resistance Education, accounts for nearly a third of all school prevention programs, but numerous studies show it has little or no detectable effect on drug use. It is not clear whether its ineffectiveness stems from its curriculum or from its reliance on classroom visits by police officers.

But classroom-based prevention is quite inexpensive, so it doesn’t have to be very effective to be cost-effective. Caulkins and colleagues estimate over $800 in social benefits from an average student’s participation, at a cost of only about $150. Most of those benefits come from tobacco prevention, followed by cocaine, with only a minimal contribution from marijuana. Classroom-based prevention materials can’t be effective if their messages aren’t salient in the real-world settings where drug-taking opportunities occur. Yet a well-funded campaign of magazine, radio, and television ads by the Office of National Drug Control Policy appears to have had no positive impact on levels of use. We should be wary of concluding that we have evaluated “the impact of mass media”; it may simply be that the messages we’ve been using aren’t very helpful. Note that our prevention messages are aimed almost exclusively at prevalence reduction rather than quantity reduction or harm reduction. A greater emphasis on secondary prevention and harm reduction might have real payoffs with respect to social costs, but we won’t know unless we try. Evidence from classroom sex education is instructive in this regard: programs that teach safe sex are reliably more effective at reducing risky behavior than abstinence-based programs.

We can hazard some guesses about where American drug policy might head in the future. The medical marijuana movement is likely to diminish in visibility as sprays like Sativex reduce the role of marijuana buyers’ clubs, yet adult support for marijuana legalization will continue to increase as the tumultuous “generation gap” of the 1960s becomes a distant memory. Methamphetamine use will soon peak, if it hasn’t already, leaving us to deal with a costly aging cohort of addicts, much like our earlier heroin epidemic. And vaccines against nicotine and cocaine addiction may soon hit the market, with both desirable and unintended consequences. But rather than developing the case for these speculations, we close with two trends that are already well underway, each of which has the potential to seriously subvert current cultural assumptions about drugs and drug control. The conventional wisdom is that ecstasy is a “love drug” or “empathogen,” and that it is the drug of choice for European and Asian American college students and young professionals.

But there are many reports of increased ecstasy use by minorities living in several cities. Many observers have noted its prevalence in the “hyphy” movement and the associated rap music, and there is evidence of an increase in the number of references to ecstasy use in hip-hop music starting in 1996. The reported rise in ecstasy use in the hip-hop scene has ignited alarming claims that ecstasy is “the new crack”; a CBS television story asked whether ecstasy was a “hug drug or thug drug.” In fact, researchers have only begun to examine the diffusion of ecstasy into inner-city neighborhoods. There is laboratory evidence of heightened aggression in the week following MDMA ingestion, but in a 2001 study of arrestees, ecstasy use was not associated with race and was negatively associated with arrest for violent crimes. It is also unclear whether self-reported “ecstasy” use always involves MDMA, as opposed to closely related drugs such as methamphetamine. Thus the emerging “thizzle” scene does raise intriguing questions about psychopharmacology, culture, and their intersection, but whether there is any meaningful causal connection between ecstasy, race, and crime is far from certain.

Earlier, we offered a thought experiment about a hypothetical drug called Rhapsadol. We now ask the reader to consider a newly created synthetic stimulant, “Quikaine.” Quikaine targets the nervous system by increasing the speed of ion transfer across synaptic gaps. It thus reduces reaction time and increases the speed with which physical tasks can be accomplished. It in no way alters the user’s emotional state, either while the drug is in the system or afterward, nor does it affect intellectual functioning. Second, consider “Intellimine.” Its sole impact on the human body is to improve cognitive capacity; it has no other emotional or physical effects and no lingering effect on mental functioning once the drug leaves the system. In addition, because variants of this drug have been used for decades to help with ADHD/ADD and Alzheimer’s, it has a long and empirically sound safety record. In fact, children and the elderly receive the greatest benefit from the drug. How should we regulate these drugs? Should they be legally available for purchase by adults? If not, are there more limited circumstances in which their use might be acceptable? For example, would Quikaine’s use be warranted for those charged with protecting others from danger, such as certain military operatives or police officers? What about for completing tasks faster and more safely, such as on an assembly line? How about simply for reducing the amount of time spent on household chores? Should we allow surgeons, crisis managers, and other high-stakes problem solvers to take Intellimine? These drugs are hypothetical, but new synthetics already have some of their properties, and there is every reason to expect rapid advances in the development of performance enhancers in the near future. They will raise vexing questions about personhood, agency, freedom, and virtue. For centuries, we have associated psychoactive substances with the pursuit of purely personal goals: fun, seduction, escape, transcendence, ecstasy. New drugs like Intellimine and Quikaine will force us to come to grips with a radically new framing: drug use as a tool for enhanced economic competitiveness. Parents who now worry about how marijuana might jeopardize their children’s Ivy League prospects may soon worry about whether abstinence lowers SAT scores.
Employers who now screen urine for marijuana may come to view abstainers as slackers. It will be fascinating to see how we learn to reconcile these new pressures with our traditional attitudes toward drugs. We close with a brief list of topics that are sorely in need of research attention. Rather than offering a long wish list, we confine our attention to priorities implied by our analytical framework, specifically the argument that quantity reduction and harm reduction deserve a more equal footing with prevalence reduction. The first priority is to give far greater attention to the development of quantity and harm indicators in epidemiological research. Our national drug surveys devote far more attention to prevalence than to dosage, settings of use, or consequences of use, and their reliance on household and classroom populations over-represents casual users and under-represents the heaviest users.

Participants gave written informed consent after a complete description of the study.

We report data from interviews conducted in 2012–2014 with a population-representative 1994–1995 birth cohort of over 2,000 British young people transitioning out of compulsory schooling and into early adulthood. An examination of how NEET youths appraise their own economic abilities and prospects is currently lacking. Societies tend to view NEET youth in a largely negative light, but little is known about how these young people see themselves. Understanding their self-perceived economic potential may clarify which factors present the best targets for intervention and support among NEET youth, as well as for the larger population of young people who are trying to find their path forward in life. The transition to young adulthood also coincides with the age of peak prevalence of psychiatric disorder, and young people on the margins of society are known to be at risk for mental ill-health. It is thus crucial to understand whether NEET youths experience more than their share of mental health problems and substance abuse, and whether knowledge of their mental health histories can inform the services provided to them during this vulnerable period. Here, we investigated how NEET status is related to self-reported commitment to work, job-search behaviour, skills and economic optimism. We also tested the hypothesis that NEET youth would have elevated rates of mental health and substance abuse difficulties. Our aim was not to establish the causal direction of any link between NEET status and mental health. Rather, we think that the descriptive data presented here provide valuable, and otherwise scarce, insight into the lives of these young people, helping to provide a needed evidence base for service provision and policy making.

Participants were members of the Environmental Risk (E-Risk) study, which tracks the development of a birth cohort of 2,232 British children.

The sample was drawn from a register of twins born in England and Wales in 1994–1995. Full details about the sample are reported elsewhere. Briefly, the sample was constructed in 1999–2000, when 1,116 families with 5-year-old twins participated in home-visit assessments. The sample includes 55% monozygotic and 45% dizygotic same-sex twin pairs. Families were recruited to represent the UK population of families with newborns in the 1990s, on the basis of residential location throughout England and Wales and mother’s age. Teenage mothers were over-selected to replace high-risk families selectively lost to the register through non-response. Older mothers having twins via assisted reproduction were under-selected to avoid an excess of well-educated older mothers. Ethical approval was granted by the Joint South London and Maudsley and the Institute of Psychiatry NHS Ethics Committee.

At follow-up, the study sample represents the full range of socioeconomic conditions in the UK, as reflected in the families’ distribution on a neighbourhood-level socioeconomic index (ACORN). ACORN uses census and other survey-based geodemographic discriminators to classify enumeration districts into socioeconomic groups ranging from ‘wealthy achievers’, with high incomes, large single-family houses and access to many amenities, to ‘hard-pressed’ neighbourhoods dominated by government-subsidized housing estates, low incomes, high unemployment and single parents. ACORN classifications were geocoded to match the location of each E-Risk study family’s home. E-Risk families’ ACORN distribution closely matches that of households nationwide: 25.9% of E-Risk families live in ‘wealthy achiever’ neighbourhoods compared to 25.3% nationwide; 5.3% versus 11.6% live in ‘urban prosperity’ neighbourhoods; 29.4% versus 26.9% live in ‘comfortably off’ neighbourhoods; 13.5% versus 13.9% live in ‘moderate means’ neighbourhoods; and 26.0% versus 20.7% live in ‘hard-pressed’ neighbourhoods. E-Risk under-represents ‘urban prosperity’ neighbourhoods because such households are significantly more likely to be childless.

Follow-up home visits took place when study participants were aged 7, 10, 12 and, most recently in 2012–2014, 18 years. At the time of data collection, age 18 was when most young people in the United Kingdom would have completed compulsory schooling and attained legal adulthood. E-Risk participants who did not take part in the age-18 assessment did not differ from those who did on initial age-5 measures of family socioeconomic status, IQ scores, or internalizing or externalizing behaviour problems. Home visits at ages 5, 7, 10 and 12 years included assessments with participants and primary caretakers; the visit at age 18 included interviews only with participants. Each twin was assessed by a different interviewer.

We considered attention-deficit/hyperactivity disorder (ADHD) or conduct disorder to be present if the child had met criteria for the disorder at any of the age-5, 7, 10 and 12 E-Risk assessments, because these disorders onset and become common during this childhood period. As previously described, at each assessment age, ADHD and conduct disorder were ascertained on the basis of teacher and mother reports of symptoms according to DSM-IV; symptoms were reported for the preceding 6 months. Teachers endorsed symptoms on a rating scale in a mailed questionnaire, and parents reported symptoms in a face-to-face standardized interview. By age 12, the children were old enough for us to ascertain depression, anxiety, and substance use, which tend to onset and become common as children enter adolescence. Children were interviewed with the 10-item Multidimensional Anxiety Scale for Children (MASC) and the Children’s Depression Inventory (CDI). Children scoring at or above the 95th centile on the MASC were categorized as having clinically significant anxiety. Based on validation studies, a total score of ≥20 on the CDI was used to identify children with clinical depression. Children were considered to engage in harmful substance use if they reported that they had tried drinking alcohol or smoking cigarettes on more than two occasions, or had tried cannabis, taken pills to get high, or sniffed glue/gas on at least one occasion. Lastly, we assessed childhood/adolescent suicidal behaviour using measures from the age-12 and age-18 phases of the study. At age 12, participants’ mothers were asked whether each child had ever deliberately harmed him/herself or attempted suicide in the previous 6 months.
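A minimal sketch of how the threshold rules just described (the 95th-centile MASC cut-off, a CDI score of 20 or more, and the harmful-substance-use criteria) might be applied, assuming a simple tabular dataset; the column names are hypothetical and are not the actual E-Risk variables.

    import pandas as pd

    # Hypothetical age-12 records; column names are invented for illustration.
    df = pd.DataFrame({
        "masc_total": [28, 55, 61, 40],              # MASC anxiety total score
        "cdi_total": [8, 23, 12, 19],                # CDI depression total score
        "alcohol_or_cigarette_occasions": [0, 3, 1, 5],
        "tried_cannabis_pills_or_glue": [False, False, True, False],
    })

    # Clinically significant anxiety: at or above the 95th centile on the MASC.
    masc_p95 = df["masc_total"].quantile(0.95)
    df["anxiety_case"] = df["masc_total"] >= masc_p95

    # Clinical depression: CDI total score of 20 or more.
    df["depression_case"] = df["cdi_total"] >= 20

    # Harmful substance use: alcohol or cigarettes on more than two occasions,
    # or any use of cannabis, pills to get high, or glue/gas.
    df["harmful_substance_use"] = (
        (df["alcohol_or_cigarette_occasions"] > 2) | df["tried_cannabis_pills_or_glue"]
    )

    print(df[["anxiety_case", "depression_case", "harmful_substance_use"]])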

Mothers’ descriptions of the event were later coded by an independent rater. At the age-18 interview, participants reported on suicide attempts occurring between ages 12 and 18 using a life calendar. We used a 5-year reporting period for this behaviour because suicide attempt is a rare event. Interviewers differentiated between suicide attempts and non-suicidal self-harm; for this analysis we focus on incidents accompanied by self-reported intent to die. The age-12 and age-18 reports were combined into one dichotomous variable indicating whether the participant had engaged in any suicidal behaviour between ages 12 and 18.

All analyses controlled for participants’ childhood socioeconomic context. Family SES was defined using a standardized composite of parents’ income, education and social class, ascertained at childhood phases of the study, which loaded significantly onto one latent factor. The neighbourhood-level socioeconomic index was defined using the ACORN classification described above. A clinical question is whether work-related self-perceptions and concurrent mental health problems continue to be associated with NEET status once measures of early-life ability are taken into account. For these analyses, we additionally controlled for participants’ childhood intelligence and reading skill. Intelligence was individually tested at age 5 using a short form of the Wechsler Preschool and Primary Scale of Intelligence-Revised, comprising the Vocabulary and Block Design subtests. IQs were prorated following procedures described by Sattler. Reading skill was individually tested at age 10 using the Test of Word Reading Efficiency, which measures children’s ability to recognize whole words, to pronounce them quickly and accurately, and to sound out unfamiliar words. Raw scores were standardized and grouped into ranked categories following procedures described by Torgesen et al.

Eighteen-year-old NEETs had higher rates of all concurrent diagnosed psychiatric and substance disorders compared to their peers, and they were significantly more likely to smoke. This ‘snapshot in time’ suggests that NEET youths are, on average, disproportionately burdened by mental health and substance use problems. In addition to concurrent mental health problems, we observed that NEET youth also tended to have had mental health problems earlier in life, prior to confronting the difficult transition into the labour force. Table 4 shows that 18-year-old NEET participants were, as children, more likely than their peers to have exhibited high levels of depression and to have been diagnosed with ADHD or conduct disorder. They were also more likely to have engaged in substance use and self-harm behaviour as young adolescents. These associations persisted after controlling for confounding sociodemographic variables. In total, more than half of NEET youths had already experienced a serious mental health problem by early adolescence. We further examined whether the associations between NEET status and age-18 mental health problems were entirely attributable to pre-existing mental health vulnerabilities. Table 3, Model B shows that while the associations between NEET status and age-18 mental health problems were slightly reduced in magnitude after controlling for childhood mental health problems, they remained large and statistically significant in nearly all cases.
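A minimal sketch, with invented variable names, of the kind of variable construction described above: combining the age-12 mother report and the age-18 self-report into a single dichotomous indicator of any suicidal behaviour, and forming a standardized family SES composite from income, education and social class. For simplicity the sketch averages z-scores rather than fitting the latent factor the study describes.

    import pandas as pd

    # Hypothetical records; variable names are invented for illustration only.
    df = pd.DataFrame({
        "selfharm_age12_mother_report": [False, True, False, False],
        "suicide_attempt_12_to_18_self_report": [False, False, True, False],
        "parent_income": [21_000, 48_000, 33_000, 60_000],
        "parent_education_years": [11, 16, 12, 18],
        "parent_social_class": [2, 4, 3, 5],
    })

    # One dichotomous indicator of any suicidal behaviour between ages 12 and 18.
    df["any_suicidal_behaviour_12_18"] = (
        df["selfharm_age12_mother_report"] | df["suicide_attempt_12_to_18_self_report"]
    )

    # Family SES composite: z-score each indicator and average them into one score
    # (a simplification; the study describes a latent-factor composite).
    ses_items = ["parent_income", "parent_education_years", "parent_social_class"]
    z_scores = (df[ses_items] - df[ses_items].mean()) / df[ses_items].std()
    df["family_ses_composite"] = z_scores.mean(axis=1)

    print(df[["any_suicidal_behaviour_12_18", "family_ses_composite"]])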

These results suggest that even after accounting for prior vulnerability to poor mental health, as well as for childhood social class and ability, NEET youths were at high risk for serious disorder. We evaluated sex differences by using interaction terms to assess whether the associations in Tables 2–4 varied between male and female E-Risk study members (a schematic sketch of such an interaction test appears at the end of this section). Sex-specific results were very similar. The exception was diagnosis of generalized anxiety disorder at age 18: only male NEET youths were at significantly higher risk for this mental health problem, although the association for female NEET youths trended towards significance.

This study suggests that the majority of contemporary 18-year-old NEET youths are endeavouring to find jobs and are committed to the idea of work. However, they feel hampered by their low skill levels and are discouraged about their future economic prospects. Compared to their peers, NEET youths are also contending with substantial mental health problems, including depression, anxiety, substance abuse and difficulties with aggression control. Many of these youths already exhibited such mental health problems in childhood, years before attempting to transition into the labour market. However, childhood psychological vulnerabilities do not fully explain the concurrent association between NEET status and poor mental health; nor do concurrent mental health problems explain the association between NEET status and work-related self-perceptions. Group differences in social class, IQ and reading ability also did not account for NEET youths’ worse self-perceptions and mental health. This glimpse into the lives of NEETs indicates that while NEET status is clearly an economic and mental health issue, it does not appear to be a motivational issue. The goal of this report was not to infer causal relations between NEET status and mental health problems. Indeed, there is extensive evidence for reciprocal influence, including recent studies showing that childhood mental health problems precede, and may create vulnerability to, becoming NEET. We think that NEET status and mental health problems often co-occur in young people making the transition from school to work for several reasons: the stress of wanting to work, but being unable to, can be harmful to mental health; employers tend to preferentially hire applicants who seem healthier, especially when jobs are scarce; and early manifestations of serious mental illness can include disengagement from education and employment. Similarly, there may be reciprocal influences between NEET status and self-perceptions if pessimism and lacking skills lead to being unemployed, while being unemployed fosters pessimism and prevents opportunities to master new skills. Moreover, we recognize that levels of opportunity for employment rise and fall with national economic circumstances and are not caused by the circumstances of individuals. This makes our findings particularly relevant for current unemployment-related policy efforts, as the NEET youths in our study are part of the ‘lost generation’ struggling to enter the labour force during the worst economic climate in decades. The objective of our report was to draw attention to the lives of NEET youths and their mental health needs. Our results suggest that these needs take three primary forms. First, NEET participants’ self-perception that they lack skills is probably accurate; more young people should be trained in professional/technical and ‘soft’ skills, which may also enhance optimism.
Second, reducing NEET youths’ depression, anxiety and substance abuse problems by providing them with mental health services may enable them to cope more effectively with challenges, develop confidence in their abilities, and take better advantage of training and back-to-work opportunities. Third, it will be critical to identify and provide enhanced educational guidance to young adolescents with mental health problems.
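As referenced above, here is a minimal sketch of a sex-by-NEET interaction test in a logistic regression, using simulated toy data and hypothetical variable names; none of this comes from the E-Risk dataset, and it only illustrates the general form of such an analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy data for illustration only; variable names are hypothetical.
    rng = np.random.default_rng(1)
    n = 500
    df = pd.DataFrame({
        "neet": rng.integers(0, 2, n),   # 1 = NEET at age 18
        "male": rng.integers(0, 2, n),   # 1 = male
    })
    # Simulated binary outcome whose association with NEET status is stronger for males.
    logit_p = -2 + 0.3 * df["neet"] + 0.1 * df["male"] + 0.6 * df["neet"] * df["male"]
    df["gad_age18"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    # The neet:male interaction coefficient tests whether the NEET association
    # with the outcome differs between male and female participants.
    model = smf.logit("gad_age18 ~ neet * male", data=df).fit(disp=0)
    print(model.summary())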