Such interstate sharing and tracking of all scheduled drugs has been shown to provide safer patient care

With limited time and no prior patient relationship, emergency physicians must make quick decisions balancing the provision of sufficient pain management against the potential for abuse and/or misuse. It is sometimes difficult to detect who might be seeking to misuse opioids. In one study, “classic” drug-seeking behaviors were relatively ineffective in identifying high-risk patients. PDMPs have been available for nearly 80 years. The first state PDMP was established in California, followed by Hawaii, Illinois, Idaho, New York, Pennsylvania, Rhode Island, Texas, and Michigan. Early PDMPs were paper-based record-keeping systems used primarily to provide reports to law enforcement. By 1990, electronic key-punch databases enabled easier data dissemination via PDMPs, and pharmacists and clinicians began to use them. In 1996 the pharmaceutical OxyContin™ was introduced; over roughly the same period, illicit prescription drug use doubled from 1994 to 1998. In response, Congress established the Harold Rogers Prescription Monitoring Program grant in 2002, providing the first guidelines and funding for states to develop PDMPs. Since then, 49 of 50 states have adopted PDMPs. Since the inception of PDMPs, studies have assessed their impact on opioid prescribing and overdoses. Overall, the literature has been mixed. Some studies have found no relationship between PDMP implementation and outcomes; however, most of these studies evaluated paper- or fax-based systems, with physicians receiving information days to weeks after the initial request. In one such study, 21% of the PDMPs evaluated were within their first years of operation or had only recently come online. This is significant because in states with new PDMPs, many physicians were not accessing or using the data.

This point is exemplified by Virginia’s PDMP, which was established in 2007 and was initially paper based. After moving to electronic and real-time reporting, data requests increased nearly sixfold, from 74,342 in 2009 to 433,450 in 2010. Another factor limiting the effectiveness of PDMPs is that each state has different policies and requirements for providers to use them. Few states mandate prescriber use, and in those states that do not, compliance varies greatly. In this context, it is logical to assume that if prescribers do not access PDMP data, the programs cannot be effective. A clear factor that leads to variation in observed effectiveness is that PDMPs are not all designed in the same way, particularly when it comes to their accessibility. Most reflect the separate and distinct technological and political environments at the time of their creation. According to a National Alliance for Model State Drug Laws study in late 2012, 38 state PDMPs are operated by a state health agency and six are operated under the aegis of law enforcement agencies. Additionally, 45 states monitor schedule II-IV controlled substances, 34 states grant authority to monitor schedule V substances, and 13 allow additional monitoring of drugs not listed on Drug Enforcement Administration schedules. Moreover, while several states require physicians to access patient PDMP data before prescribing controlled substances, the majority of states allow for voluntary participation among physicians. Finally, 25 states provide unsolicited, automatic reports of suspicious activity directly to law enforcement, but only three automatically send reports to multiple facilities, including law enforcement, licensing boards, pharmacies, and prescribers. However, states have looked to update and reformat their PDMPs to better address the opioid crisis. For example, with funding from the Core State Violence and Injury Prevention Program, Oregon reformatted its PDMP to provide more appropriate data to its EPs.

Under this new funding, the PDMP was designed to track schedule II-VI drugs prescribed within Oregon and to provide physicians with access to the PDMP data of bordering states. Furthermore, pharmacies within Oregon were required to report prescription data within 72 hours of opioid dispensing. Since implementation, Oregon has reported a 38% decrease in the rate of prescription opioid overdose as well as a 58% reduction in deaths related to methadone use. As sharing hubs such as those in Oregon, Michigan, Indiana, and Ohio are established, EPs may be better equipped to successfully identify drug-seeking behavior. PDMP design also leads to great variation in usability. For example, the information displayed is not always relevant or organized in a way that allows EPs to answer specific clinical questions that fit into ED workflow. In some systems, extraneous information is frequently retrieved alongside the relevant data. Excessive data forces providers to search for relevant information, squandering valuable time. Furthermore, clinician training on how to use and interpret PDMPs is often limited. Users are often left wading through mountains of patient data, trying to piece together a complete picture. One study surveyed physicians and nurses from diverse specialties after PDMP use and found that practitioners lacked guidance on data interpretation. In EDs, time is a valuable resource and, unfortunately, the complexity of some PDMPs limits their usability. For example, in their current structure, PDMPs have experienced growing compliance issues secondary to their difficulty of use. In certain states, physicians are required to register with their PDMP via a notarized medical license and government identification. Password protocols exacerbate issues with PDMP accessibility. Physicians are often required to meet excessive password-security requirements, only to find that within a short time their password has expired and the process starts over. New passwords often cannot match previous entries and must include multiple arbitrary character types to satisfy the required fields.

Working in a fast-paced ED, having to frequently create and update complicated passwords quickly transforms these safeguards into a barrier to use. Finally, not all PDMPs track all scheduled drugs. Schedule II drugs such as opioids have largely been the focus of PDMPs, but drugs in other schedules also have the potential for abuse. In 2011, for example, ED visits for benzodiazepine abuse, a schedule IV drug, were nearly equal to visits for opioids. ED records have demonstrated a strong association between benzodiazepine abuse and opioid abuse. Despite this potential for additional abuse, only 34 PDMPs monitor schedule II-V drugs. To address many of the usability limitations of PDMPs in EDs, we suggest several ways to optimize their implementation. Unsolicited reporting is a powerful tool through which PDMPs can automatically send alerts for drug-seeking behavior meeting a specific threshold to the appropriate authority. Such quantitative thresholds have already been implemented in several states with some success. In Virginia, thresholds for individuals receiving 10 prescriptions from 10 providers within a six-month period were used. Subsequent periodic analysis of the data for automatic, unsolicited reporting showed a steady decrease in the number of individuals meeting the thresholds, correlating with a decrease in likely diversion and abuse. Such automated reporting does come with risk, as these policies raise concerns about patients being labeled as addicts or postponing necessary treatment. In addition, physicians treating cancer patients or those requiring long-term opioid management have expressed concern for their reputation and licensure. However, in the context of the newly approved National Quality Forum measures for limiting opioid prescribing, PDMPs can take such measures into consideration and would likely have the opposite effect, ensuring that guidelines are followed and patients are treated safely. Data analytics and data visualization may help contextualize opioid prescriptions for the busy EP. For example, by linking prescriptions to a particular diagnosis, physicians may greatly reduce the guesswork involved in interpreting prescription behavior. At a glance, a patient with multiple prescriptions for both short- and long-acting opioid pain medications may appear to be an opioid abuser. However, by tethering an explanatory diagnosis to such prescriptions, this patient could, after investigation, be found to have an extensive chronic pain condition warranting multiple prescriptions. Fewer mental resources would therefore be required to rule out opioid abuse, reducing the potential for misinterpreting data and in turn providing quicker and better-informed emergency care.
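To make the threshold idea concrete, the following is a minimal Python sketch of how a PDMP might flag patients for unsolicited reporting; the record structure, field names, and helper function are hypothetical, and the 10-prescription/10-prescriber, six-month threshold is taken from the Virginia example above.

```python
# Minimal sketch of threshold-based unsolicited reporting (illustrative only).
# Record fields and meets_threshold are hypothetical; the Virginia-style threshold
# (10 prescriptions from 10 prescribers within six months) follows the text above.
from collections import namedtuple
from datetime import date, timedelta

Dispensation = namedtuple("Dispensation", ["patient_id", "prescriber_id", "fill_date"])

def meets_threshold(records, window_days=183, min_rx=10, min_prescribers=10):
    """True if any rolling window holds at least `min_rx` fills from at least
    `min_prescribers` distinct prescribers for this patient."""
    records = sorted(records, key=lambda r: r.fill_date)
    for i, start in enumerate(records):
        window_end = start.fill_date + timedelta(days=window_days)
        in_window = [r for r in records[i:] if r.fill_date <= window_end]
        if (len(in_window) >= min_rx and
                len({r.prescriber_id for r in in_window}) >= min_prescribers):
            return True
    return False

# Example: one patient filling prescriptions from many prescribers over 5 months
fills = [Dispensation("pt1", f"md{n}", date(2016, 1, 1) + timedelta(days=15 * n))
         for n in range(10)]
print(meets_threshold(fills))  # True -> candidate for an unsolicited report
```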
Painful conditions make up 42% of all emergency department visits. With the increasing focus on analgesia driven by The Joint Commission’s pain management standards and the emphasis on analgesia in patient satisfaction surveys, it is no surprise that medical use of opioids and opioid analgesic prescriptions have been increasing since the early 1990s. Unfortunately, there has also been an increase in prescription opioid abuse and misuse, with a rise in opioid-related events including increases in opioid-related ED visits, inpatient hospitalizations, and opioid overdose deaths. Unintentional overdose has now surpassed motor vehicle collisions as the leading cause of injury death in the United States for adults aged 25–64 years, and the majority of all unintentional poisonings are related to opioids. Not surprisingly, opioid prescribing has come under increasing scrutiny in recent years, including in the ED.

The American College of Emergency Physicians Clinical Policy – Critical Issues in the Prescribing of Opioids for Adult Patients in the Emergency Department – states, “Although relieving pain and reducing suffering are primary emergency physician responsibilities, there is a concurrent duty to limit the personal and societal harm that can result from prescription drug misuse and abuse.” While the percentage of U.S. ED visits with opioids prescribed increased from 20.8% to 31.0% between 2001 and 2010, studies have shown that the majority of opioid prescriptions from the ED are for a low pill count, are almost exclusively immediate-release formulations, and are significantly less likely to be high dose or consist of a large quantity compared with those from office-based practices. Regardless, with a recent study showing that opioid-naive ED patients prescribed opioids for acute pain are at increased risk for additional opioid use at one year, the ED is an important site for the study of opioid-prescribing patterns. Adding to this body of literature, a recent article in the New England Journal of Medicine showed that opioid-prescribing habits vary widely between providers in the same ED and that patients treated by “high-intensity” opioid prescribers had higher rates of long-term opioid use. Recent interventions for decreasing opioid prescribing have focused on prescription drug monitoring programs and the creation of opioid-prescribing guidelines. Opioid-prescribing guidelines have been shown to reduce rates of opioids prescribed for both minor and chronic complaints in acute care settings. Most recommendations on safe opioid prescribing for the ED advise a maximum of three- to five-day courses of opioid medications. With the increasing prevalence of electronic medical records and electronic order-entry systems has come an increasing interest in the ability to standardize clinical workflows in an effort to reduce medication-related errors. To date, no study has assessed the effect of default tablet quantities in electronic order entry on emergency physicians’ prescribing patterns. Our primary objective was to evaluate opioid-prescribing patterns before and after removal of the default quantity of 20 tablets for opioid prescriptions in the EMR.

When the default quantity was in place, the majority of prescriptions provided were for this exact quantity, suggesting that prescribing behavior is strongly influenced by a default quantity prepopulated in the EMR. After removing the default, the number of prescriptions for this quantity decreased, and the median number of tablets prescribed with each prescription showed a statistically significant reduction. Removing the default quantity requires that physicians choose the number of tablets they will prescribe. Had our primary objective been to achieve a more substantial reduction in quantity provided, we could have changed to a smaller default value of 10 or 12 tablets and evaluated the impact of that change. However, the increased variation in tablet quantities observed after removal of the default may reflect more appropriate prescribing patterns: a smaller quantity of analgesia tablets for less severe pain or pain expected to resolve quickly, and greater quantities for more severe pain or pain expected to be prolonged.
In many clinical scenarios it may be beneficial to avoid variation among practicing clinicians in a single ED, such as in the treatment of an acute myocardial infarction or sepsis. Having a default opioid quantity in the EMR, while demonstrated to successfully reduce variation in clinical practice patterns, may not be optimal for patient care. This reflects a case where variation in prescribing patterns may be more appropriate than standardization. A “one-size-fits-all” approach to opioid prescribing that ignores the variable durations and severities of acute pain syndromes will predictably result in undertreatment for some patients and overtreatment for others. The total number of prescriptions for opioids also decreased during the study period: in the pre-intervention period there were 151.6 prescriptions for opioids per 1,000 discharged adult patients, compared to 106.69 per 1,000 in the post-intervention period.
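As a worked illustration of the summaries above, the short sketch below computes the share of prescriptions written at the default quantity, the median tablet count, and the relative change in the per-1,000 prescribing rate; the reported rates come from the text, while the tablet-quantity lists and function names are hypothetical.

```python
# Minimal sketch of the before/after summaries discussed above (illustrative only).
# The per-1,000 rates reproduce figures reported in the text; the prescription
# lists are hypothetical stand-ins for the study data.
from statistics import median

def rate_per_1000(opioid_rx_count, discharged_adults):
    """Opioid prescriptions per 1,000 discharged adult patients."""
    return 1000 * opioid_rx_count / discharged_adults

# Reported rates: 151.6 pre-intervention vs. 106.69 post-intervention per 1,000
relative_reduction = 1 - 106.69 / 151.6           # roughly a 30% relative drop

def default_share_and_median(tablet_counts, default_qty=20):
    """Share of prescriptions written for exactly the default quantity, plus median."""
    share = sum(1 for q in tablet_counts if q == default_qty) / len(tablet_counts)
    return share, median(tablet_counts)

pre  = [20, 20, 20, 20, 12, 20, 30, 20]           # hypothetical pre-removal quantities
post = [12, 10, 20, 15, 8, 20, 30, 12]            # hypothetical post-removal quantities
print(default_share_and_median(pre), default_share_and_median(post))
print(f"relative reduction in prescribing rate: {relative_reduction:.0%}")
```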

The more effective systems have taken a multi-modal approach

Despite these concerns, there is often a misunderstanding about the net clinical benefit associated with oral anticoagulation in the elderly. Physicians often cite perceived bleeding risk as a primary reason for withholding anticoagulation from AF/FL patients, a finding we also observed. Physicians overestimate the risk of intracranial bleeding in patients at high risk for falls. However, there is evidence that patients with AF would need to fall repeatedly throughout the year before the risk of intracranial hemorrhage would outweigh the net benefit of stroke prevention from anticoagulation. Of interest, a patient’s predicted hemorrhage risk, as measured by the HAS-BLED score, was associated with less anticoagulation prescribing in our population but was of borderline statistical significance. The evidence suggests that a high-risk HAS-BLED score per se is not a reason to withhold anticoagulation that is otherwise indicated. In most patients with elevated bleeding risk, the magnitude of gain from stroke reduction far outweighs the small risk of serious bleeding. Bleeding risk scores are best used to identify patients in need of closer follow-up, particularly to address reversible risk factors such as uncontrolled hypertension, concomitant use of non-steroidal anti-inflammatory medications, and excess alcohol. Our study found that several rhythm-related characteristics were strongly associated with the likelihood of receiving oral anticoagulation at or shortly after an ED visit for AF/FL. For example, we noted that patients who reverted to sinus rhythm before ED discharge were less likely to receive a prescription for anticoagulation. As indicated by the reasons documented for withholding anticoagulation, EPs varied their estimation of a patient’s ischemic stroke risk substantially based on the persistence of AF/FL during the ED stay.

Lower rates of anticoagulation have also been seen in patients with paroxysmal AF in other practice settings. Most recently, an analysis of the American College of Cardiology PINNACLE Registry found that patients with paroxysmal AF considered at moderate to high risk of ischemic stroke were less likely to be prescribed oral anticoagulant therapy, and more likely to be prescribed less effective or no therapy for thromboembolism prevention, than those with non-paroxysmal AF. Compared with patients with persistent or permanent AF, those with paroxysmal AF have less frequent and less prolonged episodes of AF, which correlates with a lower incidence of thromboembolism. Yet the reduction in stroke risk is not sufficient to lessen the need for thromboprophylaxis. Importantly, consensus-based clinical practice guidelines do not vary their recommendations for thromboprophylaxis based on type of AF, nor do validated stroke risk scores alter their prognosis based on paroxysmal or non-paroxysmal rhythm. We also found that physicians were less likely to initiate anticoagulation in patients with a history of prior AF/FL and in those whose atrial dysrhythmia was thought by the EP to be chronic and unremitting. This might seem counterintuitive given our finding that patients who left the ED still in AF/FL were more likely to receive thromboprophylaxis. It is possible, however, that ED patients at high risk for stroke with known recurrent or chronic AF/FL had already been advised about anticoagulation options before their index ED visit and had previously declined or discontinued anticoagulation. Some have attributed this behavior to “clinical inertia,” the hesitancy of physicians to alter the current pattern of care initiated by other providers. Nonetheless, further exploration is needed to clarify the underlying reasons for these observations. With today’s expanded pharmacopeia for AF/FL stroke prevention, patients who declined or discontinued warfarin in the past may be open to considering a direct oral anticoagulant, given the several patient-oriented advantages of this class of medications.

One novel finding of our study is that EPs were more likely to initiate anticoagulation when consulting cardiology. The reasons for this may be multifactorial. Certain patients may have a clinical profile that leads to both cardiology consultation and thromboprophylaxis, or perhaps EPs who consult cardiology are more apt to initiate anticoagulation independent of the consultation. The more likely reason, however, is that cardiologists asked to advise on any facet of ED AF/FL care may raise the question of stroke risk and recommend thromboprophylaxis when indicated. Others have shown that cardiology involvement in the outpatient setting improves rates of stroke-prevention treatment in AF patients. The TREAT-AF study found that outpatient cardiology care, compared with primary care, was associated with higher rates of anticoagulation in AF patients. Anticoagulation rates increased even when a primary care provider referred AF/FL patients to a cardiologist but maintained patient oversight themselves. The benefits of multi-specialty collaboration were seen not just during the patients’ ED stay. Of those who were prescribed oral anticoagulation in this study, more than one quarter were given thromboprophylaxis in the outpatient setting, in either the primary care or cardiology clinic. The importance of post-ED follow-up for AF/FL patients at high risk for thromboembolism is also seen in the number of EPs and patients in our study who deferred the anticoagulation decision to allow a fuller discussion of thromboprophylaxis with an outpatient provider. Deferring the initiation of anticoagulation in high-risk ED patients, however, may not be without risk. In some settings, a significant proportion of AF patients discharged home from the ED failed to achieve outpatient follow-up in the subsequent 90 days, regardless of insurance status. Moreover, compared with patients who leave the ED with an anticoagulant prescription in hand, those who wait to consult an outpatient provider about stroke prevention have been shown to have a significantly lower frequency of long-term anticoagulation use and a significant delay in initiation among those eventually treated.

When referring patients to outpatient providers for this critical decision, the EP can facilitate anticoagulation initiation by several means: introducing stroke prevention to their AF/FL patients and beginning the educational and shared decision-making process; including stroke-prevention material in the patient’s discharge instructions; recommending a timely follow-up appointment; and notifying the outpatient provider that stroke prevention may be indicated and that patient education was begun prior to ED discharge. Our results highlight opportunities for improvement in care. Patients seeking emergency care for their AF/FL may be more open to health-promoting behavioral changes, as has been observed with other medical conditions. Initiating stroke-prevention therapy at the time of ED discharge has been shown to be safe and associated with a mortality reduction. Not all EPs, however, see it as their role to initiate anticoagulation when indicated for AF/FL patients. Nevertheless, EPs can still play a key role in promoting stroke prevention by risk-stratifying their AF/FL patients, broaching the topic with high-risk patients, adding personalized stroke-risk educational material to the discharge instructions, and encouraging high-risk patients to continue the shared decision-making conversation about thromboprophylaxis with their outpatient provider. The results of this study raise questions about other ways to increase evidence-based anticoagulation. We identified certain physician misunderstandings that, if corrected, could increase anticoagulation of stroke-prone patients with AF/FL. Physician education should emphasize that patients with AF/FL at high risk for thromboembolism warrant stroke prevention even if their rhythm type is paroxysmal. Also, antiplatelet agents do not provide sufficient protection against ischemic stroke in patients with high-risk AF/FL, though this is commonly believed. We observed that about one in eight high-risk AF/FL patients were given or continued on aspirin instead of oral anticoagulation, a high percentage, but lower than that found in a large cardiology clinic-based population of AF patients at moderate to high risk of stroke. Unfortunately, we were not able to distinguish cases in which aspirin was advised as though it were sufficient stroke prevention from cases in which the patient refused anticoagulation and was recommended aspirin instead. Recent U.S. guidelines suggest a very limited role for aspirin in selected AF/FL patients; data supporting the use of aspirin monotherapy in patients at high risk of stroke are poor, and there are reports that it may even increase the risk of ischemic stroke in elderly patients. Aspirin is also not safer than oral anticoagulation in patients over 80 years of age with regard to serious bleeding. Recent guidelines recommend that aspirin monotherapy not be used for stroke prevention in AF/FL, with the exception of patients who refuse any form of oral anticoagulation and cannot tolerate a combination of aspirin and clopidogrel. Though education about these misunderstandings will be vital, education alone may ultimately have little impact on changing physician behavior. Several academic medical centers have improved oral anticoagulation rates in stroke-prone AF patients by referring them to an accessible outpatient AF clinic. Another recommended approach is the provision of electronic clinical decision support to help physicians care for AF/FL patients.

To facilitate AF/FL thromboprophylaxis, such a system could calculate a patient’s predicted stroke and bleeding risk scores simultaneously at the point of care and provide patient-specific recommendations for treatment. Results of various clinical decision support systems have been mixed. The Anticoagulant Programme East London, for example, showed improvement in appropriate anticoagulation of outpatients with AF through a combined program of education around agreed-upon guidelines, computer aids to facilitate decision-making, and patient-specific review and feedback of locally identifiable results. Some clinical researchers are sharing their electronic clinical decision support tools for AF stroke prevention with patients and have found that mobile health technology improved patient knowledge, drug adherence, anticoagulant satisfaction, and quality of life. Electronic clinical decision support tools have had success in the ED setting when combined with a strong promotional program and could be readily adapted for use in patients with AF/FL. A multidisciplinary team at the University of British Columbia designed such an electronic clinical care pathway for ED patients with uncomplicated AF/FL. The pathway included a care map, decision aids, medication orders, management suggestions, and electronic consultation or referral documents, embedded in the computerized physician order entry and integrated electronic health record. Implementation was preceded and accompanied by a standardized educational and promotional program. The pathway increased the incidence of anticoagulation initiation on discharge for high-risk patients by 20.6 percentage points.

This study had several limitations. The study sample did not include all identified AF/FL patients; however, patient characteristics were highly similar between those who were and were not enrolled, so the impact of potential selection bias is likely limited. Our prospective data collection tool was designed to evaluate a wide range of care-related issues for AF/FL and was not focused on thromboprophylaxis, but we cannot rule out the potential for a Hawthorne effect during the study period. The sample size was modest, which limits precision for certain associations, and we cannot rule out missing associations of smaller magnitude that may still be clinically relevant. We did not prospectively capture each patient’s relative contraindications to anticoagulation or their treatment preferences, which are two of the leading reasons physicians deviate from guideline recommendations for stroke-prevention therapy. This enlarged our denominator of anticoagulant-eligible patients and lowered our percentage of anticoagulant prescribing. We were able to identify some of these contraindications during our retrospective chart review, but these variables were incompletely documented. This study focused on stroke prevention using warfarin, the only oral anticoagulant on the formulary in our health system until early 2014. Even with the recent availability of direct oral anticoagulants, physicians in our health system continue to initiate warfarin for AF/FL thromboprophylaxis: 40% of new oral anticoagulant prescriptions for non-valvular AF/FL during the first quarter of 2017 across all 21 medical centers were for warfarin.
Warfarin continues to be widely used for stroke prevention across North America, Europe, and around the world. In fact, the European Society of Cardiology considers it reasonable to continue warfarin therapy in AF patients with a reassuring time in therapeutic range. It is unclear whether the availability of newer agents will substantively alter physician overestimation of bleeding risk in older patients or underestimation of long-term stroke risk in patients with paroxysmal AF. Additional research will be needed to evaluate whether practice patterns of stroke prevention in AF/FL patients change with use of direct oral anticoagulants. Studies suggest, however, that sub-optimal AF thromboprophylaxis persists despite the availability of direct oral anticoagulants. Lastly, our study was conducted in a large integrated healthcare delivery system in California among insured patients who, on ED discharge, can receive close monitoring by our pharmacy-led Outpatient Anticoagulation Service and timely follow-up with their primary care providers. These integrated services may influence ED prescribing practices and may not be readily available to patients and providers in other healthcare systems. These distinctions of care may limit the generalizability of our results to other geographic locations and practice settings.
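As a deliberately simplified illustration of the point-of-care decision support discussed above, the sketch below scores a patient on two standard published instruments, CHA2DS2-VASc for ischemic stroke risk and HAS-BLED for bleeding risk; the input field names are hypothetical, and a real tool would pull these elements from the health record and pair the scores with guideline-based recommendations.

```python
# Minimal sketch of simultaneous stroke/bleeding risk scoring (illustrative only).
# Scoring rules follow the published CHA2DS2-VASc and HAS-BLED instruments;
# the dictionary keys are hypothetical EHR fields, not a real interface.
def cha2ds2_vasc(p):
    score = 0
    score += 1 if p["chf"] else 0
    score += 1 if p["hypertension"] else 0
    score += 2 if p["age"] >= 75 else (1 if p["age"] >= 65 else 0)
    score += 1 if p["diabetes"] else 0
    score += 2 if p["prior_stroke_tia"] else 0
    score += 1 if p["vascular_disease"] else 0
    score += 1 if p["sex"] == "F" else 0
    return score

def has_bled(p):
    items = ["hypertension_uncontrolled", "renal_disease", "liver_disease",
             "prior_stroke_tia", "bleeding_history", "labile_inr",
             "antiplatelet_or_nsaid", "alcohol_excess"]
    score = sum(1 for k in items if p[k])
    score += 1 if p["age"] > 65 else 0
    return score

patient = {"age": 78, "sex": "F", "chf": False, "hypertension": True,
           "diabetes": True, "prior_stroke_tia": False, "vascular_disease": False,
           "hypertension_uncontrolled": False, "renal_disease": False,
           "liver_disease": False, "bleeding_history": False, "labile_inr": False,
           "antiplatelet_or_nsaid": True, "alcohol_excess": False}
print(cha2ds2_vasc(patient), has_bled(patient))  # 5 and 2 for this example patient
```

In practice such scores would only be a starting point; as the limitations above note, recommendations must also account for relative contraindications and patient preferences.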

The trained reviewer then reviewed the EMR for all patients sampled during the QI effort

A pre-clinical medical student was trained as a reviewer by an emergency medicine attending who was a QI officer with experience in chart review. We reviewed each patient’s index ED note for the following: age at time of ED visit; gender; migraine history; known history of significant intracranial pathology; whether brain imaging was performed during the index ED visit; and findings from the brain imaging if performed. We reviewed all ED, neurology, neurosurgery, and primary care clinic notes as well as any HCT or brain magnetic resonance imaging results occurring in the 22.5-month period following each index ED visit. This length of time was selected because it was the maximum window available from the last visit in the dataset at the time that data collection began. Follow-up data included the following: whether a follow-up visit took place for a similar headache; diagnoses assigned at follow-up visits; date of follow-up visit; the service providing follow-up care; whether death was recorded in our EMR; whether brain imaging was performed in the follow-up period; and findings from brain imaging if performed. We distinguished between follow-up for any reason and follow-up related to the ED visit as a marker for sample retention during the follow-up window. Data were entered into a standardized data collection spreadsheet. Prior to data collection, we defined all terms in the spreadsheet in a data dictionary. No adjustment was made for trainee involvement or subsequent shift changes at the time of the index ED visit. A priori, we defined a potential missed diagnosis as the presence of any of the following conditions found after the index ED visit: aneurysm involving the intracranial or cervical vessels; hydrocephalus; intracranial hypertension; stroke; intracranial mass; subarachnoid hemorrhage; subdural hemorrhage; epidural hemorrhage; intraparenchymal hemorrhage; or dural sinus thrombosis.
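A minimal sketch of how the a priori missed-diagnosis definition above might be encoded alongside the data dictionary; the condition labels mirror the list in the text, but the record structure and function are illustrative rather than the study's actual abstraction tool.

```python
# Illustrative encoding of the a priori "potential missed diagnosis" definition.
# The condition labels mirror the list in the text; everything else is hypothetical.
QUALIFYING_CONDITIONS = {
    "aneurysm (intracranial or cervical vessels)", "hydrocephalus",
    "intracranial hypertension", "stroke", "intracranial mass",
    "subarachnoid hemorrhage", "subdural hemorrhage", "epidural hemorrhage",
    "intraparenchymal hemorrhage", "dural sinus thrombosis",
}

def potential_missed_diagnosis(followup_diagnoses):
    """True if any follow-up diagnosis after the index ED visit is on the a priori list."""
    return any(dx.lower() in QUALIFYING_CONDITIONS for dx in followup_diagnoses)

print(potential_missed_diagnosis(["Subdural hemorrhage"]))   # True
print(potential_missed_diagnosis(["tension headache"]))      # False
```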

To determine which subsequently identified intracranial conditions should be counted as missed diagnoses, we employed a board-certified EP to perform an independent review of all records in which subsequent intracranial conditions were identified. This reviewer was blinded to the study hypothesis and had not been involved in or measured by the initial QI project. For each potential missed diagnosis, the independent reviewer reviewed the index ED visit note, follow-up visit notes, and radiology reports before determining whether the subsequently diagnosed intracranial condition could potentially have been diagnosed at the index visit. We labeled these as missed intracranial diagnoses.

During our QI effort we did not observe a decrease in HCT ordering after a year of educational interventions, but we observed a 9.6% decrease after providers reviewed their own data. This accords with the Institute of Medicine suggestion that feeding providers’ data back to them may be an important part of effectively changing physician behavior. It is worth noting that during our QI effort we never explicitly instructed providers to decrease HCT ordering. This was motivated by the assumption that our doctors were already trying to do the right thing and avoid unnecessary testing, but that they might be capable of being more diligent in diagnosis assignment. The decrease in HCT ordering that we observed came after providers reviewed their own data, so it appears to have resulted from a change providers took upon themselves after being given the opportunity to look at objective data on their practice patterns and to reflect on what those data told them about their own practice. Happily, this would seem to support our initial assumption that doctors are generally trying to do the right thing. Previous studies have found that CT pulmonary angiography for evaluation of pulmonary embolism could be safely decreased, thereby decreasing resource utilization without causing harm to patients. These studies used probabilistic decision models or looked at inpatient charges, limiting their generalizability to ED patients.

The study providing the most compelling evidence supporting the safety and cost-effectiveness of decreasing CTPA in ED patients involved patients with median hospital stays of 7.7 days and medical charges of $6,281. This contrasts with the typical patient presenting to the ED with headache, for whom reduced testing may mean no testing. We found that a reduction in HCT use for the evaluation of ED patients with headache was not followed by increased death or missed diagnoses. However, the observations that patients who returned for reevaluation of the same complaint and those who subsequently received brain imaging were more likely not to have had HCT during the index visit call the true impact of decreasing ED-based testing on overall resource utilization into question. It may be the case that many patients simply feel that they need some sort of test to have had a thorough evaluation. This is supported by studies finding that ED patients who do not receive CT imaging for headache or for abdominal pain are more likely to return within 30 days. A previous study observed up to three-fold variability in the proportion of HCT use for the evaluation of atraumatic headache in the ED. In our study we observed a convergence between EPs’ HCT-ordering proportions when we compared the pre-intervention and post-data-review phases; however, because our study was not designed or powered to investigate this, our observation is only suggestive. This study has several limitations. As a retrospective chart review, we only had access to information contained in the EMR. Patients who did not follow up with us may have died or had missed diagnoses that we did not observe. In the pre-post study design, however, these factors are likely distributed across time periods, so we do not expect that this study type biased our findings. Approximately 86% of the sample had a subsequent visit within our EMR, suggesting that access to care was good and that the probability of patients seeking care outside our health system was low. Though we cannot exclude other causes of HCT reduction over time, there were no coexisting initiatives in place in the study institution to change HCT ordering practices.

Since we do not practice in a closed medical system, patients could have presented to other systems for care or could have died without presenting to our hospital. To address this issue, we limited the outcomes assessment to patients who received primary care within our university health system by excluding patients who were transferred in, improving the probability that we would capture events. Because of neurosurgical coverage in our predominantly rural state, nearly all patients in our region with significant intracranial pathology would be transferred to our institution for care; therefore, it is unlikely that such outcomes were not captured. This is supported by the observation that over 85% of patients in this study had another encounter in our health system within 22.5 months of the index visit. The use of an outcome that did not account for clinical conditions, comorbidities, or the appropriateness of initial CT ordering limits the applicability of our findings. However, this type of metric was drafted as part of the proposed quality measure, so interpreting our CT ordering practices in this context parallels the outcomes that might be expected if this metric were more widely adopted. In this way, our study is pragmatic and reflects the limitations of case identification and administrative data use. CMS OP-15 was found to be unreliable, in part because it relied upon administrative data. We addressed this issue by relying on chart review, the gold standard against which the aforementioned study compared OP-15. This resulted in a more reliable measure, but at the cost of a highly labor-intensive technique.

Burnout, depression, and suicidality among residents across all specialties have become a critical focus of attention for the medical education community. Prevalence studies have revealed rates of burnout among residents to be as high as 76%, as measured by the Maslach Burnout Inventory. Residents suffering from burnout have a higher risk than their peers of developing depression, anxiety, and substance-abuse problems. Even more alarmingly, up to 9.4% of fourth-year medical students and interns reported having suicidal thoughts. These numbers are borne out in the estimated 400 physicians who commit suicide each year. In response to these findings, the Accreditation Council for Graduate Medical Education approved major changes to the Common Program Requirements to begin in July 2017. In section VI.C, residency programs are mandated to educate residents and faculty members in the identification of burnout, depression, and substance abuse and to implement curricula that encourage their optimal well-being. However, the ACGME has yet to provide residency programs with concrete guidelines for the creation of wellness curricula to adequately address this mandate. Many residency programs have already implemented some form of wellness training for their residents. Unfortunately, evidence supporting the efficacy of these interventions is sparse and often limited to single institutions and small sample sizes.

Nor has the medical education community reached agreement on the best method of identifying relevant, high-impact wellness topics for residents, or on the optimal method for delivery and dissemination of that information. From October 3, 2016, to May 14, 2017, members of the Wellness Think Tank communicated through a shared online platform to discuss the strengths and weaknesses of the wellness programs at their respective training sites. The Think Tank is a virtual community of practice, hosted by the medical education organization Academic Life in Emergency Medicine, comprising 142 emergency medicine residents from 100 different training programs in North America. Multiple residents noted a haphazard and ineffective approach to teaching wellness topics, which they attributed primarily to a lack of shared knowledge between residency programs. Residents voiced a clear need for more widely shared lesson plans that focus on the development of practical skills fostering personal wellness. In preparation for the 2017 Resident Wellness Consensus Summit (RWCS) in-person event on May 15, 2017, an Educator Toolkit working group was created. Using a group consensus process, the residents of the Wellness Think Tank selected and agreed upon three high-yield topics that would benefit educators as an evidence-based, robust resource: second victim syndrome (SVS), mindfulness and meditation, and positive psychology. Each toolkit was designed using Kern’s six-step model of curriculum design, active teaching techniques, and accountability to increase engagement. Representing 20 different programs, 8, 16, and 17 residents participated in the development of the SVS, mindfulness and meditation, and positive psychology educator toolkits, respectively. Two faculty members trained in educational theory, one with a master’s degree in medical education, provided oversight. Twenty-two resident members of the Wellness Think Tank and 22 additional residents attended the live RWCS event. These residents represented 31 EM residency programs located in three different countries. Five faculty members facilitated the event. Members of the Educator Toolkit working group presented their drafts to the RWCS consensus group for review. There, participants discussed the proposed topics, learning objectives, and teaching techniques for each of the three topics. Following the RWCS, each educator toolkit was further refined based on the feedback, resulting in the final versions presented here.

A phenomenon growing in national awareness, SVS is commonly defined as feelings of guilt, inadequacy, or incompetence following an unexpected, negative patient outcome. Commonly manifesting as anxiety, depression, or shame, it often goes unrecognized. It is likely that most healthcare providers experience symptoms of SVS at least once in their careers, and the emotional “wear-and-tear” may contribute to burnout, the decision to depart from clinical medicine, or even suicide. Victims of SVS may require assistance from mentors, colleagues, or mental health professionals to cope with the frequently intense, negative personal and professional ramifications of the experience. Awareness of SVS is critical for residents and faculty so that they may develop strategies to mitigate its negative effects in both themselves and their colleagues. Despite its relevance across specialties, and especially to EM, no published residency curricula discuss SVS.
Our toolkit aims to fill that educational gap and ensure that residents are prepared for the emotional and professional toll of the inevitable negative patient outcomes that will occur during their careers. To maximize learner engagement and provide flexibility for residency programs, this toolkit includes four “mini-modules” using a flipped-classroom approach. Each module consists of a pre-reading assignment followed by a 20-minute group discussion.

Our predictive model shows excellent agreement between observed and predicted probabilities

The incidence of SEA has progressively increased over the past several decades; it has more than tripled over the past decade at our institution, and with this, the risk of serious neurologic morbidity has risen commensurately. Although early recognition is essential to preventing irreversible neurologic deficits, diagnostic delays are common. Perhaps the best opportunity for early diagnosis occurs at the time of initial clinical presentation. The majority of patients with SEA present to a healthcare facility with clinical manifestations more than once within 30 days of diagnosis; in most cases, these visits are to the ED. Therefore, a clinical scoring tool that could reliably discriminate potential cases and stratify them for emergent spinal imaging at the time of initial clinical presentation would be of value to optimize clinical outcomes in SEA, while also providing stewardship and prioritization of imaging resources. Diabetes mellitus and other chronic comorbid illnesses are often listed as predisposing risks for SEA in the extant literature. Recently reported data from our institution, representing the largest single-site published SEA case series with an attendant control group, failed to confirm these as risk factors. Although further refinement of the model may improve its classification characteristics, maximizing both sensitivity and specificity may not be optimal for clinical application. Because of the potentially severe consequences of delaying the diagnosis of SEA, it may be more clinically exigent to maximize sensitivity and thereby sacrifice some degree of specificity.

Using our scoring tool with a cut point of six, sensitivity appeared to be optimized at the expense of a modest decrease in specificity. At a higher cut point of seven, sensitivity – the ability of the model to correctly detect positives – was reduced to 77%. At this cut point the PPV was 93%, but the NPV was only 67%. At a lower cut point of five, the modest increase in sensitivity was associated with a substantial decrement in specificity that was felt to be too low for clinical utility. A previous study evaluated a clinical decision guideline based on elevated serum inflammatory markers to determine the need for advanced spinal imaging in ED patients potentially at risk for SEA. Although use of the guideline at one institution appeared to reduce diagnostic delays compared with historical controls, detailed information on the use of MRI was not provided. Additionally, the guideline relies on laboratory testing, which could introduce further delays. We sought to develop a clinically relevant model based exclusively on epidemiologic and clinical features that are apparent on initial clinical presentation, in order to appropriately triage MR spinal imaging and to reliably facilitate the early recognition of SEA as distinct from other potential spinal pathologies. Our model has several limitations. The data were drawn from a retrospective, 10-year cohort; thus, the model is based on clinical variables collected at the time of admission or shortly thereafter and available in the record. Although we strove to collect complete data on all variables, it is possible that some potentially useful factors were not considered. Additionally, to optimize the clinical utility of our model, we purposefully limited it to clinical data that would be apparent on initial presentation. We did not consider serum inflammatory markers or other laboratory data in this category. Review of erythrocyte sedimentation rate (ESR) levels in our cohort revealed that this marker was obtained in only approximately 60% of the patients, and the vast majority of these were cases, suggesting that ESR was requested only for clinical presentations that were highly suspicious for SEA.
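To make the cut-point trade-off concrete, the sketch below computes sensitivity, specificity, PPV, and NPV for a score evaluated at several cut points; the score values are invented for illustration and do not reproduce the study data.

```python
# Minimal sketch of evaluating a clinical score at different cut points (illustrative).
# `cases` and `controls` hold hypothetical scores for SEA and non-SEA patients.
def metrics(cases, controls, cut):
    tp = sum(s >= cut for s in cases)      # cases correctly flagged
    fn = len(cases) - tp
    fp = sum(s >= cut for s in controls)   # controls incorrectly flagged
    tn = len(controls) - fp
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp) if tp + fp else None,
            "npv": tn / (tn + fn) if tn + fn else None}

cases    = [9, 8, 7, 7, 6, 6, 5, 4]        # hypothetical scores in confirmed SEA
controls = [6, 5, 5, 4, 3, 3, 2, 1]        # hypothetical scores in non-SEA patients
for cut in (5, 6, 7):
    print(cut, metrics(cases, controls, cut))
```

Lowering the cut point raises sensitivity at the cost of specificity, which is the trade-off the text argues should be resolved in favor of sensitivity for SEA.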

Another important limitation of our model is that it was derived from a decade-long, retrospective cohort of patients with a confirmed SEA case prevalence of 65% at a single institution; thus, the clinical presentations in this cohort raised at least some level of suspicion for the diagnosis. This scoring model may therefore only be relevant when SEA is reasonably suspected. Our work underscores the known importance of clinical judgment in suspecting the diagnosis of SEA; the objective model detailed herein serves to complement this subjective consideration. Because our institution is a regional, tertiary-care, academic medical center, it is also important to determine whether our data can be extrapolated to other care settings. A prospective evaluation and validation of this model is needed to understand whether it may be useful in an unselected sample of patients presenting for medical attention with a constellation of symptoms and/or signs potentially warranting investigation for SEA. Such an evaluation may also determine whether the model can be substantially improved by incorporating additional data that could be ascertained within a short time frame after ED presentation.

Patients diagnosed with sexually transmitted infections are common in the emergency department setting. The Centers for Disease Control and Prevention estimates that nearly 20 million new STIs occur annually. Patients undergoing evaluation for potential STIs will often have had a comprehensive evaluation that includes gonococcal and chlamydia testing, wet prep, urinalysis, and urine culture. The clinical presentations of STIs and urinary tract infections (UTIs) may overlap, and symptoms of dysuria and urinary frequency/urgency occur with both STIs and UTIs. Abnormal urinalysis findings of leukocyte esterase and pyuria are common in both UTIs and STIs. STIs have previously been found to be associated with pyuria without bacteriuria. Furthermore, high STI rates have been reported in women evaluated in an urban ED and diagnosed with UTI. Emergency physicians must decide whether to empirically treat for UTIs based on initial UA results alone, because confirmatory urine culture results are not available for several days after the patient’s ED visit.

Findings of significant pyuria on UA in these patients have the potential to lead EPs to treat for a presumed UTI in patients who may actually have STIs and negative urine cultures. Additionally, nitrite-positive dipsticks have previously shown high specificity for UTIs, but this has not been studied specifically in STI-positive patients. Positive urine cultures have been defined by previous studies as growth of a bacterial pathogen at >100,000 colonies. Sterile pyuria is classified as the presence of more than 5-8 leukocytes per high-power field on microscopy in the setting of negative urine cultures. Treating a patient with sterile pyuria for a UTI can have negative effects, including antibiotic resistance and unnecessary cost to the patient. Antibiotic resistance and limited antibiotic selections are a worldwide public health concern. A patient taking an unnecessary antibiotic can have potential adverse effects, such as allergic reaction, anaphylaxis, or secondary antibiotic-associated infection such as C. difficile. Antibiotic stewardship has become a responsibility for healthcare institutions and antibiotic prescribers, and is now a Joint Commission requirement. The CDC has identified that 20-50% of all antibiotics prescribed in U.S. acute care hospitals are either unnecessary or inappropriate. Not treating a UTI, on the other hand, can lead to pyelonephritis or even sepsis. This poses a dilemma for EPs trying to best treat these patients. Previous studies in ED settings have demonstrated overdiagnosis of UTIs and underdiagnosis of STIs. However, prior studies have not specifically evaluated the incidence of sterile pyuria in patients with confirmed STIs. For EPs to provide their patients with optimal empiric antibiotic therapy, it is helpful to know whether patients with confirmed STIs commonly have associated culture-positive UTIs. The purpose of this study was to determine the frequency of sterile pyuria in patients with confirmed STIs seen in a community hospital ED. In addition, we examined the urine cultures of STI-positive patients who were prescribed an antibiotic for presumed UTI and determined how many of those patients actually required antibiotics for positive urine cultures. We hypothesized that STI-confirmed patients with pyuria on initial urinalysis would have a high prevalence of sterile pyuria, the urinalysis results likely reflecting contamination. We also hypothesized that prescribing UTI antibiotics for patients with suspected STI is largely unnecessary, and that the majority of these patients would have negative urine cultures. Previous studies have found that women with urinary symptoms are overdiagnosed with UTI and underdiagnosed with STIs, but no prior research has specifically analyzed the urine results of known STI-positive patients. In this retrospective review of women testing positive for Neisseria gonorrhoeae, Chlamydia trachomatis, and/or Trichomonas vaginalis over a five-year period at a large metropolitan ED, we found that of the cases with pyuria, 74% were sterile pyuria.
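A minimal sketch of the classification rules stated above (a positive culture at >100,000 colonies, pyuria above a 5-8 WBC/hpf threshold, and sterile pyuria as pyuria with a negative culture); the thresholds are taken from the text and the field names are hypothetical.

```python
# Illustrative classification of urinalysis/culture results per the definitions above.
# The WBC threshold is parameterized because the text cites a 5-8 WBC/hpf range.
def classify(wbc_per_hpf, culture_cfu_per_ml, wbc_threshold=5):
    culture_positive = culture_cfu_per_ml > 100_000   # pathogen growth >100,000 colonies
    pyuria = wbc_per_hpf > wbc_threshold
    if pyuria and not culture_positive:
        return "sterile pyuria"
    if pyuria and culture_positive:
        return "pyuria with positive culture (UTI likely)"
    return "no pyuria"

print(classify(wbc_per_hpf=25, culture_cfu_per_ml=0))        # sterile pyuria
print(classify(wbc_per_hpf=40, culture_cfu_per_ml=150_000))  # pyuria with positive culture
```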

Our study found a very low overall incidence of positive urine cultures among women with positive STIs. Of the patients with pyuria, those with culture-positive urine and those with culture-negative urine had identical ranges of urine leukocytes, but mean leukocytes were higher in the culture-positive group. Prior literature indicates that in the general population the urine-dipstick nitrite reaction has low sensitivity but very high specificity, making a positive result useful in confirming the diagnosis of UTI caused by organisms capable of converting nitrates to nitrite, such as Escherichia coli. However, the urine-dipstick test for nitrites has not been studied in STI-positive patients. We found that in the setting of positive STI cases, a positive nitrite on the urine dipstick is not a good indication of UTI. Our results showed that in STI-positive cases, nitrite-positive urines were actually 18% more likely to be associated with negative urine cultures. Current scientific literature emphasizes the need to reduce the use of inappropriate antimicrobials in all healthcare settings, primarily because of antimicrobial resistance but also because of the associated costs and potential adverse effects. Our study found that of the 295 patients with confirmed STIs who were also prescribed an antibiotic for a presumed UTI, 66% of those prescriptions were unnecessary, as the patients had negative urine cultures.

Paramedics embedded with SWAT teams are trained to coordinate with team movements within the hot zone, providing medical support for the team as it progresses. By contrast, the current paradigm is that EMS personnel can be trained to enter the warm zone to conduct rescue operations when escorted by law enforcement. However, paramedics familiar with the RTF model are neither equipped nor trained sufficiently to provide care while under a direct threat. While these skill sets overlap, they are not synonymous, and medical directors must not assume that tactical paramedics integrated with law enforcement SWAT teams will provide a sufficient medical resource for an RTF model. The contrast between SWAT paramedics and RTF paramedics was highlighted in two ways. First, as the event unfolded, it became evident that responding fire and EMS units were not accustomed to combined operations with law enforcement. Their corresponding equipment packages and communications networks were different from those of the law enforcement responders. Furthermore, while clearly identified as an “active shooter” event by the first patrol units, the initial setup closely followed that of a mass casualty incident. The tactical command post was established to the north and the casualty collection/treatment area to the south. It is estimated that the south location was possibly within the blast radius of the IEDs left in the building. If this estimation was correct, by definition the triage area was established in the hot zone and not on the warm/cold border as is traditionally taught. Regardless, in the presence of a dynamic threat it may become necessary to ensure traffic control to, and create a perimeter around, the treatment area. Second, SWAT medics do not carry complete Advanced Life Support equipment because of their operational mandate for mobility. While they are often paramedics or physicians, their role as SWAT medics is to provide medical aid only when operationally appropriate, because their primary mission is to ensure the effectiveness of the law enforcement team.
[The caveat is that a member of the public will receive priority because the duty of law enforcement is to ensure the safety and well-being of citizens.] Although a SWAT medic may enter deep within the hot zone with the tactical element, he or she does not carry equipment sufficient to provide sustained care for a large number of casualties in that zone. The support for ongoing evacuation care must come from follow-up resources, such as those provided by the RTF medical elements. Finally, within the current milieu of civilian public mass-shooting incidents, the latest data on civilian wounding patterns do not fit the prototype of the exsanguinating extremity injury, and thus are not amenable to the hemorrhage-control techniques mastered by the tactical medic, such as the use of tourniquets.

It has been well established that ED frequent users increase healthcare costs and contribute to ED and hospital crowding

While previous studies have documented the prevalence and ramifications of food insecurity in the ED, the availability of a provider-driven order to address this condition has not been previously documented. Although food resources are available through a variety of federal and state programs, healthcare providers may be unaware of how to successfully connect patients with these programs. Additionally, the details of the different programs, and which programs apply to whom, can be unclear to patients and healthcare providers alike. Therefore, we believe that referring patients in need to partners such as Second Harvest Heartland will likely be of greatest benefit, as these partners focus on one-on-one application assistance and navigation of programs, rather than simply handing out brochures or blank applications in the ED. It is not surprising that using an EMR referral tool improved access to food services in our patient population; the benefit of EMR communication for connecting patients to numerous types of medical and social services has been well documented in the literature. We did, however, identify certain issues with this referral process that are unique to food referrals and unique to the ED. For example, in contrast to the clinic setting, where demographics and contact information are updated prior to patient evaluation, in the ED this information is frequently incomplete early in the patient’s visit. If the EMR order was placed without accurate contact information, the information provided to Second Harvest Heartland was also incomplete. In the early stages of the ED referral, this led to a disproportionate number of referrals lacking the necessary contact information, and these patients could not be reached.
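A minimal sketch of the kind of guard that addresses this gap by requiring at least one contact field before a referral can be signed (the actual EMR change is described in the next paragraph); the field names and error handling here are hypothetical.

```python
# Illustrative check that a food-resource referral order carries at least one way
# to contact the patient; field names are hypothetical, not the actual EMR order.
CONTACT_FIELDS = ("address", "phone", "mobile", "email")

def validate_referral(order):
    """Raise if the referral has no usable contact information."""
    if not any(order.get(field) for field in CONTACT_FIELDS):
        raise ValueError("Referral requires an address, phone, mobile, or email.")
    return order

validate_referral({"patient_id": "123", "phone": "555-0100"})   # passes
# validate_referral({"patient_id": "456"})                      # would raise ValueError
```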

After identifying this problem, the EMR order was changed, requiring the provider to enter an address, phone number, mobile number, or email address, ensuring proper communication to the food bank for follow-up. Ongoing, focused education was valuable in ensuring this aspect of the order was completed for successful referrals. Another important consideration identified during the implementation of this process was realizing the knowledge gaps regarding food insecurity in our ED. Screening for food insecurity is not standardized at intake, nor is it part of the registration/rooming process. As such, in faculty and resident discussions surrounding use of the order, failure to consider food security as part of the ED assessment of patients was perceived to be a key limiting factor in making the food referral. Second Harvest Heartland began systematically visiting clinics and educating staff directly regarding the EMR order; while this helped increase referral volume in the clinics, the ED was targeted later in the roll out. We believe that this highlights the importance of provider education in the ED, as this patient population is at great risk for food insecurity and their needs may not be identified if they are not screened or if they do not use the clinic system. Even with initial education, concerted and ongoing educational efforts are necessary.

Frequent users of the emergency department represent a complex group of patients who overuse ED resources. This group accounts for as many as 28% of all ED visits, with the number of annual visits by this group continuing to rise. Frequent users of the ED are defined as patients making four or more ED visits per year; however, some “ultra”-frequent users may make 20 or more visits per year. While the reasons underlying frequent ED visits are often complex and may represent failure of the healthcare system to provide for patients with complex needs, ED frequent users incur significant charges and time for testing and treatment as part of their evaluation.

Additionally, as a part of each ED visit, evaluation, and treatment, patients spend time occupying ED beds and using hospital services such as phlebotomy and radiology. ED bed time and hospital resources are a valuable commodity, particularly as ED visits continue to rise nationwide, making the reduction of such resource use by ED frequent users a desirable goal. Case management, as defined by the Case Management Society of America, is a “collaborative process of assessment, planning, facilitation, care coordination, evaluation, and advocacy for options and services to meet an individual’s and family’s comprehensive health needs through communication and available resources to promote quality, cost-effective outcomes.” Given the complex medical and social needs of ED frequent users, case management has been extensively used in this group of patients, with multiple studies showing successful reductions in the use of ED services and the cost of care in the ED. A 2017 systematic review identified 31 different studies of interventions to decrease ED visits by frequent users. However, despite the large number of studies published, there has been little research on the effect of ED case management for frequent users on length of stay (LOS), either in the ED or in the inpatient setting. To the best of our knowledge, this is the first study to evaluate the effect of case management on ED, inpatient, and total hospital LOS for all types of visits by ED frequent users. The goal of this investigation was to explore the effect of ED case management in frequent users of the ED on LOS, both in the ED and the inpatient setting. To better understand the impact of case management in this population, we also chose to look at the effect of this intervention on ED and hospital charges as well as utilization of hospital services. We hypothesized that ED case management would reduce ED visits, admissions, ED LOS, inpatient LOS, charges, and diagnostic studies.

We conducted this study at a 225-bed hospital in a suburban area, with approximately 56,000 ED visits per year.

The surrounding healthcare community consists of a variable mix of county-run primary care clinics and private practice physicians – in both primary care and specialty care. There are few free clinics in the surrounding area. Two other hospitals are within 30 miles of our institution, one of which is a county hospital. The study consisted of a retrospective chart review of ED and inpatient visits by patients in our hospital’s Emergency Department Recurrent Visitor Program (EDRVP), comparing the visits made in the one year prior to enrollment in the program to the visits made in the one year after enrollment in the program. The EDRVP is run by an ED social worker or registered nurse, with emergency physicians, social workers, ED RNs, chemical dependency providers, behavioral health RNs, case managers, and representatives from local insurance providers. At monthly meetings, members of the EDRVP discuss approximately 10 patients who have been referred to the program. If a care plan does not appear to be working to address frequent ED visits, or a new issue has come up for the patient causing recurrence of heavy ED use, the patient’s case and care plan are revisited at the next meeting. If a truly urgent or emergent issue arises, the staff will correspond via secure email or in person to address it and develop new care plans or revisions to existing care plans. The program was developed initially in 2006 by ED staff at our hospital to address increasing visits by frequent users. As the program has grown, additional hospital staff and services have been recruited to assist with the growing number of patients requiring case management, and to meet newly identified needs of patients in the program. For inclusion criteria, patients are referred to the program for any of the following reasons: concerning ED use; 10 or more ED visits in 12 months; six or more ED visits in six months; four or more ED visits in one month; or activity by a patient that demonstrates a propensity for future problematic ED encounters – such as violence in the ED or prescription forgery. Patients exhibiting such high-risk activity were believed to be potentially problematic patients, and therefore a plan was developed to preempt frequent, potentially dangerous, recurrent, and problematic visits. There are no exclusion criteria, and patients of any age may be referred. Once a patient has been referred for enrollment in the program, his or her visits are reviewed to determine the underlying medical, psychiatric, and social issues causing the multiple ED visits. A plan of care for the patient is then developed, with the intent to address these issues in the outpatient setting. Care plans may include referring the patient for a case manager, referring the patient to a needed specialist, assisting the patient with unstable housing, or requiring that patients only receive medications from their primary doctor – rather than coming to the ED for refills.

We studied all patients enrolled in the EDRVP between October 2013 and June 2015. For each patient, we reviewed all ED and inpatient visits for the one-year time period before they were enrolled as well as the one-year time period after they were enrolled. Visits were reviewed using the hospital’s electronic medical records system, Sunrise Clinical Manager. We recorded each of the following parameters for the year before and the year after enrollment: number of ED visits; number of inpatient admissions; ED LOS; inpatient LOS; ED charges; inpatient charges; number of computed tomography scans; number of ultrasounds; number of radiographs; and number of ED visits at which blood work was performed. Additionally, we noted six main reasons why patients were referred to the program: needing pain management; complex psychosocial issues; complex medical conditions; psychiatric illness; substance abuse; and needing resources or referrals. Six chart reviewers reviewed all of the visits and recorded the data using a standardized data collection spreadsheet in Microsoft Excel. The lead author supervised the chart reviewers to ensure that data collection was standardized and accurate between them. After data collection was complete, we proceeded with data analysis. As we wanted to determine the effect of ED case management on the study parameters listed above, we compared each of the parameters for each patient from the one-year time period before enrollment in the program to the one-year time period after enrollment in the program. To evaluate for statistical significance, we then used a paired Wilcoxon signed-rank test, comparing the year before enrollment to the year after enrollment.

Our study clearly demonstrates that ED case management reduces utilization of services, LOS, and cost in a population of ED frequent users. Clearly, in the current U.S. healthcare environment, which is characterized by expensive care and crowded hospitals and EDs, this is critical information and may provide some ideas to develop solutions to the problems of high cost and crowding. In reviewing the data on the reasons for referral to the program, it is apparent that this group of patients has complex needs, with less than a third of the group being referred to the program to address only one issue. This supports the need for a comprehensive case management program like the one we have instituted, as we believe that addressing only a single issue underlying recurrent ED use may not decrease ED utilization. From an ED administration standpoint, the most compelling piece of data appears to be the effect of ED case management on LOS. EDs across the U.S. struggle with crowding, often with critically ill or injured patients being forced to wait in waiting rooms when no beds are available. Our study showed that ED case management for ED frequent users helps this problem in two ways. First, by reducing ED visits and ED LOS, the program directly decreases the amount of ED bed time occupied by these repeat visitors, freeing up beds for patients in the waiting room. Second, by reducing inpatient LOS, ED patients are more likely to have inpatient beds available when needed, reducing the frequency of ED boarding. With less ED boarding, there is more available bed time in the ED for new patients from the waiting room.
This increased ability to place new patients from the waiting room allows them to be roomed more quickly, enabling critically ill and injured patients to receive time-sensitive treatment sooner and reducing the door-to-doctor time for all patients in the department.
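As a concrete illustration of the paired before-and-after comparison described above, the following is a minimal sketch using SciPy; the visit counts and variable names are hypothetical and are not the study’s data.

```python
# Illustrative sketch only: a paired Wilcoxon signed-rank comparison of a
# per-patient metric (e.g., annual ED visits) for the year before vs. the
# year after enrollment. All numbers are made up.
from scipy.stats import wilcoxon

# One value per patient for the year before and the year after enrollment
visits_before = [12, 9, 15, 7, 22, 11, 6, 14]
visits_after = [5, 4, 10, 7, 9, 6, 3, 8]

# Paired, two-sided test on the within-patient differences
statistic, p_value = wilcoxon(visits_before, visits_after)
print(f"Wilcoxon statistic = {statistic:.1f}, p = {p_value:.3f}")
```

The same call would simply be repeated for each parameter of interest (LOS, charges, imaging counts, and so on).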

Recent work suggested that predictions may improve by incorporating functional genomic information

We therefore hypothesize that genetic variation at CADM2 may underlie a latent personality trait or risk factor that predisposes individuals to engage in risky actions. Despite the success of GWAS of alcohol use, the mechanisms by which these newly identified genetic associations exert their effects are largely unknown. More importantly, alcohol consumption and misuse appear to have distinct genetic architectures. Ever-larger studies, particularly those extending beyond mere alcohol consumption phenotypes, are required to find the genetic variants that contribute towards the transition from normative alcohol use to misuse, and the development of AUD. One successful application of GWAS has been their use for assigning polygenic risk scores (PRS), which provide estimates of an individual’s genetic risk of developing a given disorder. Reassuringly, PRS for alcohol use behaviors predict equivalent phenotypes in independent cohorts [e.g., alcohol consumption, AD, AUD symptoms]. Johnson et al recently identified that, compared to PRS for alcohol consumption, PRS for alcohol misuse were superior predictors of a range of alcohol-related phenotypes, particularly those pertaining to the domains of misuse and dependence. These findings further illustrate that alcohol consumption alone may not be a good proxy for AUD. PRS can also be used to test specific hypotheses; for example, PRS can be used to measure how environmental, demographic, and genetic factors interact with one another. Are there developmental windows where the effects of alcohol use and misuse are more invasive?

Can we identify biomarkers that would inform the transition from normative alcohol use to excessive use and dependence? For instance, the effects of alcohol-metabolizing gene variants on alcohol use appeared to be more influential in later years of college than in earlier years, revealing that the nature and magnitude of genetic effects vary across development. It is worth noting important limitations of PRS analyses. First, polygenic prediction is influenced by the ancestry of the population studied. For example, PRS for AUD generated in an African American cohort explained more of the variance in AUD than PRS derived from a much larger cohort of European Americans. This illustrates that prediction from one population to another does not perform well. Second, the method of ascertainment may bias the results. As an example, PRS for DSM-IV AD derived from a population-based sample predicted increased risk for AD in other population samples but did not associate with AUD symptoms in a clinically ascertained sample. Third, the variance explained by PRS is still low, and hence PRS have limited clinical application. For example, in the largest study of alcohol consumption, the alcohol consumption PRS accounted for only ~2.5% of the variance in alcohol use in two independent datasets. Prediction may improve by incorporating other molecular data: McCartney et al, for example, showed that, compared to conventional PRS, risk scores that took into account DNA methylation were better predictors of alcohol consumption. Nonetheless, the way in which such methods can be used for prevention or treatment of AUD has yet to be established. Lastly, the nature of these associations remains to be determined. Mendelian randomization analyses can serve to further understand and explore the correlations between alcohol use behaviors and comorbid traits.
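As a rough sketch of how a PRS is typically constructed, namely a weighted sum of risk-allele dosages using GWAS effect sizes, the values and variable names below are illustrative only and do not come from any of the studies discussed.

```python
# Minimal PRS illustration: score_i = sum_j (beta_j * dosage_ij).
# Effect sizes and genotypes below are made up.
import numpy as np

# GWAS effect sizes (per-allele betas) for four hypothetical variants
betas = np.array([0.03, -0.01, 0.02, 0.05])

# Allele dosages (0, 1, or 2 copies of the effect allele) for three individuals
dosages = np.array([
    [0, 1, 2, 1],
    [2, 0, 1, 0],
    [1, 1, 1, 2],
])

prs = dosages @ betas  # one polygenic score per individual
print(prs)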

Before the era of large-scale genomic research, twin and family-based studies identified a high degree of genetic overlap between the genetic risk for AUD and psychopathology by modeling correlations among family members. With the recent development of linkage disequilibrium score regression, it is now possible to estimate the genetic correlations between specific alcohol use behaviors and a plethora of psychiatric, health and educational outcomes using GWAS summary statistics. Most notably, the genetic overlap between alcohol consumption and AD was positive but relatively modest, suggesting that, although the use of alcohol is necessary to develop AD, some of the genetic liability is specific to either levels of consumption or AD. Another consistent finding from genetic correlation analyses has been that alcohol consumption and AUD show distinct patterns of genetic overlap with disease traits. Counterintuitively, alcohol consumption tends to correlate with desirable attributes, including educational attainment, and is negatively genetically correlated with coronary heart disease, type 2 diabetes and BMI. These genetic correlations are unlike those observed when analyzing alcohol-dependent individuals: AD was negatively genetically correlated with educational attainment and positively genetically correlated with other psychiatric diseases, including major depressive disorder, bipolar disorder, schizophrenia and attention deficit/hyperactivity disorder. Importantly, alcohol consumption and misuse measured in the same population showed distinct patterns of genetic association with psychopathology and health outcomes. This set of findings emphasizes the importance of deep phenotyping and demonstrates that alcohol consumption and problematic drinking have distinct genetic influences. Ascertainment bias may explain some of the paradoxical genetic correlations associated with alcohol consumption. Population-based cohorts, such as UKB and 23andMe, are based on voluntary participation and tend to attract individuals with higher education levels and socioeconomic status than the general population and, crucially, lower levels of problem drinking. In contrast, ascertainment in the PGC and MVP cohorts was based on DSM-IV AD diagnosis and ICD codes for AUD, respectively.
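For reference, the genetic correlation reported by such analyses is, in essence, the genetic covariance between two traits scaled by their SNP heritabilities; a standard formulation (symbols defined here, not taken from the original) is

\[
r_g = \frac{\operatorname{cov}_g(A, B)}{\sqrt{h^2_A \, h^2_B}} ,
\]

where \(\operatorname{cov}_g(A, B)\) is the genetic covariance between traits A and B and \(h^2_A\) and \(h^2_B\) are their SNP heritabilities.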

Collider bias has been proposed to underlie some of the genetic correlations between alcohol consumption and BMI; however, BMI has been consistently negatively correlated with alcohol use in several subsequent studies. Furthermore, it is also possible that the genetic overlap between AD and aspects of alcohol consumption is dependent on the specific patterns of drinking. For example, Polimanti et al identified a positive genetic correlation between AD and alcohol drinking quantity, but not frequency. Prior to the availability of large population studies and collaborative consortia efforts, few genes were reliably associated with AUD. The use of intermediate traits or endophenotypes has become increasingly common, and hundreds of new loci have now been associated with alcohol use behaviors. Using intermediate phenotypes also facilitates translational research; we can mimic aspects of human alcohol use using animal models, including alcohol consumption, novelty response, impulsivity, withdrawal and sensitivity. Animal models provide an opportunity to evaluate the role of newly identified genes at the molecular, cellular and circuit level. We may also be able to perform human genetic studies of specific components of AUD such as DSM-IV AD criterion count and alcohol withdrawal. To date, these traits have only been studied in smaller samples, but this approach will be invaluable as sample sizes increase. Another challenge for AUD genetics is that AUD is a dynamic phenotype, even more so than other psychiatric conditions, and therefore may necessitate yet larger sample sizes. Ever-larger studies, particularly those extending beyond mere alcohol consumption phenotypes, are required to find the genetic variants that contribute towards the transition from normative alcohol use to misuse, and the development of AUD. Furthermore, genetic risk unfolds across development, particularly during adolescence, when drug experimentation is more prominent and when the brain is most vulnerable to the deleterious effects of alcohol. The Adolescent Brain Cognitive Development Study, with neuroimaging, genotyping and extensive longitudinal phenotypic information including alcohol use behaviors, offers new avenues for research, namely to understand how genetic risk interacts with the environment across critical developmental windows. Population bio-banks aligning genotype data from thousands of individuals to electronic health records are also promising emerging platforms to accelerate AUD genetic research. Despite these caveats, the GWAS described in Table 1 have already vastly expanded our understanding of the genetic architecture of alcohol use behaviors. It is evident that alcohol use behaviors, like all complex traits, are highly polygenic. The proportion of variance explained by genetic variants on GWAS chips ranges from 4 to 13%. It is possible that a significant portion of the heritability can be explained by SNPs not tagged by GWAS chips, including rare variants. For instance, a recent study showed that rare variants explained 1-2% of phenotypic variance and 11-18% of total SNP heritability of substance use phenotypes. Nonetheless, rare variants are often not analyzed when calculating SNP heritability, which can lead to an underestimate of polygenic effects, as well as missing biologically relevant contributions for post-GWAS analyses. Equally important is the need to include other sources of -omics data when interpreting genetic findings, and the need to increase population diversity.

Therefore, a multifaceted approach targeting both rare and common variation, including functional data, and assembling much larger datasets for meta-analyses in ethnically diverse populations, is critical for identifying the key genes and pathways important in AUD.

With the introduction of combination antiretroviral therapy, mortality among HIV-infected patients diminished significantly. However, some patient subgroups have different survival patterns and have shown less decline in death rates. These include patients with psychiatric or substance use disorders, which are highly prevalent among patients treated for HIV/AIDS. There is also a high co-occurrence between psychiatric and substance use disorders among the HIV-infected, as in other populations, and severity is greater in each type of disorder when there is co-occurrence. Together they place individuals at elevated risk for poor health outcomes. Because psychiatric and substance use disorders frequently co-occur, it is important to examine the combined impact of these disorders among people with HIV infection. Research among HIV-infected patients has shown an association between depression symptoms, HIV disease progression and mortality, and mental illness and substance abuse are barriers to optimal adherence to combination antiretroviral regimens. One study of U.S. veterans found that survival was associated with a greater number of mental health visits. Yet few studies have examined survival patterns for HIV-infected individuals who use alcohol or illicit drugs, but are generally not injection drug users, and have been diagnosed with psychiatric or SU disorders within a private health plan; nor have studies examined both psychiatric and SU disorders in relation to mortality. Previous research has shown that access to psychiatric and SU disorder care among HIV-infected patients varies based on sociodemographic factors and HIV illness severity. The current study compares mortality in HIV-infected patients diagnosed with psychiatric disorders and/or SU disorders to patients without either diagnosis receiving medical care from a private, fully integrated health plan where access to care and ability to pay for care are not significant factors. We also examine the effects of accessing psychiatric or SU treatment services. Improvement in depression has been associated with better adherence to combination antiretroviral therapy and increased CD4 cell counts. Social support for HIV-infected patients has been associated with improved immune system functioning. Therefore, we hypothesize that accessing services is associated with decreased mortality among patients with HIV infection.

We conducted a retrospective observational cohort study for the years 1996 to 2007 among HIV-infected patients who were members of the Kaiser Permanente Northern California (KPNC) health plan. The KPNC is an integrated health care system with a membership of 3.5 million individuals, representing 34% of the insured population in Northern California. The membership is representative of the northern California population with respect to race/ethnicity, gender, and socioeconomic status, except for some underrepresentation of both extremes of the economic spectrum. HIV-infected patients are seen at medical centers throughout the KPNC 17-county catchment region. The study population consisted of 11,132 HIV-infected patients who received health care at KPNC at some time between January 1, 1996 and December 31, 2006.
The study sample included all HIV-infected patients who were 14 years of age or older on or after January 1, 1996 and had at least 6 months of membership during the first year of study observation. This minimum age was chosen because the KPNC membership has very few HIV patients under age 14, children are likely to receive different psychiatric diagnoses than adolescents and adults, diagnosis of SU problems generally occurs later than age 13, and children are likely to receive services for these disorders in pediatrics departments rather than in the health plan’s specialty psychiatry and SU treatment programs. Patients could enter the study until December 31, 2006. In the data analyses, we also excluded 83 patients whose SU disorder diagnosis status was unclear. This resulted in a study analysis sample of 9,751 patients. Since 1988, the KPNC Division of Research has maintained a surveillance system of patients who are HIV-1–seropositive, ascertained through monitoring electronic inpatient, outpatient, laboratory testing, and pharmacy dispensing databases for sentinel indicators of probable HIV infection. HIV-1 seropositivity is then confirmed through review of patient medical records.

The total PCC is the sum of per capita consumption of each beverage type

We acknowledge that engagement with male infertility content on social media does not mean that users judge the linked content to be accurate. Additionally, our study rests on the underlying assumption that all social media platforms allow users to engage similarly, without accounting for the unique experience and engagement dimensions each platform offers. Further investigation into this topic may allow for better stratification of how users engage with male infertility content.

Since the late 1990s, there have been dramatic increases in alcohol-related problems in the United States. Between 1999 and 2016, annual deaths from liver cirrhosis increased by 65% and doubled for liver cancer. Relatedly, from 2006 to 2016 the death rate from alcoholic liver disease increased by over 40%, from 4.1 per 100,000 to 5.9 per 100,000. An increase of nearly 62% in alcohol-related emergency department visits was also found between 2006 and 2014, from 3,080,214 to 4,976,136 visits per year, with the increase occurring predominantly among people aged 45 and older. Further, an analysis of data from two waves of the National Epidemiologic Survey on Alcohol and Related Conditions showed a nearly 50% increase in the prevalence of past-year alcohol use disorder from 2002 to 2013 among adults aged 18 and above. Surprisingly, these increases in alcohol-related morbidity and mortality did not occur alongside notable increases in per capita alcohol consumption estimates. These estimates, based on beverage sales data collected by the Alcohol Epidemiologic Data System (AEDS), increased by approximately 6% over the 2002-2013 time period. This represents an increase of approximately 28 drinks per person per year.

This increase seems insufficient to explain the observed increases in alcohol-related morbidity and mortality, as we would expect a notable increase given that the heaviest drinkers consume the vast majority of alcohol. Indeed, the increase in the rate of alcohol-related ED visits between 2006 and 2014 was considered unrelated to the concomitant 1.7% increase in PCC. A possible explanation for the discrepancy between alcohol-related problems and PCC may lie in how PCC estimates are calculated. Per capita alcohol consumption is typically constructed as an aggregate measure using national and state population estimates from the U.S. Census Bureau and alcohol sales data. The state-level alcohol sales figures are from either state-provided taxable withdrawals from bonded warehouses or industry sources for states that fail to provide data. Alcohol sales-based consumption estimates are considered more complete and objective than survey data on alcohol use, which are subject to substantial under-reporting. This consideration is also due to the widespread availability of alcohol tax information and the low level of unrecorded alcohol use in the U.S. However, the precision of typical PCC estimates is challenged by the fact that they use invariant estimates of the mean percentage of alcohol by volume (%ABV), i.e., they do not use annual estimates of the alcohol content of the beer, wine, and spirits sold in each state to convert beverage volume into ethanol. The conversion factors used in the typical PCC estimation approach are based on estimates of %ABV for each beverage type and have not been updated since the 1970s. These values are 4.5%, 12.9%, and 41% for beer, wine, and spirits, respectively. Further complicating the issue is that each beverage type comprises several sub-types, each with a different %ABV. Thus, actual PCC is also influenced by changes over time and place in beverage sub-type preferences.

Failing to acknowledge these changes in %ABVs and beverage preferences risks underestimating important changes in actual PCC that could potentially explain observed changes in alcohol-related morbidity and mortality. Additionally, PCC estimates are key to the estimation of the alcohol-attributable morbidity and mortality used to assess the global burden of disease due to alcohol. Indeed, PCC estimates are the marker against which the estimation of an exposure distribution of alcohol is based. Our previous work has demonstrated meaningful changes in the alcohol content of beer, wine, and spirits during the last half of the 20th century. The mean %ABV of beer and spirits sold in the U.S. each declined between 1950 and 2002. The %ABV of wine declined between 1950 and the mid-1980s to 10.5%, after which it increased steadily to 11.5%. Beyond 2002 there is reason to believe there have been further changes in the %ABVs of beverage types with the emergence of high-%ABV craft beer and a likely continued increase in the %ABV of wine. The aim of this paper is to extend our previous work estimating the mean alcohol concentration of the beer, wine, and spirits sold in the U.S. and PCC to the period 2003 to 2016. We present the variation in %ABV over this time period for each beverage type and examine this variation in light of changes in beverage sub-type preferences and mean %ABV. We compare PCC estimates based on our ABV-variant methods to estimates from ABV-invariant methods nationally and for each state. Data on the %ABV of specific wine brands and varietals were obtained from Washington State Liquor Control Board (WSLB) Price Lists for the years 2003-2012. As the WSLB did not produce these price lists after the privatization of alcohol sales in 2012, we used the Liquor Control Board of Ontario’s website to identify %ABV. In cases where a specific brand varietal for a specific year could not be identified in either of these sources, we used the value reported on the winery’s website.

As previously described, we did this for each brand varietal accounting for the top 80% of wine sales in Pennsylvania for each wine sub-type. There were many thousands of brand varietals sold comprising the largest category, “table wine,” and an increasing number of brands each year. Further, this methodology of identifying the %ABV for each varietal has been critiqued as too labor-intensive. To address the labor intensity of this process, in this update of %ABV estimates and PCC we matched sales and %ABV for the top 50% of table wine sales in Pennsylvania, and calculated a mean %ABV for 30% of the total sales of table wines. We calculated a mean %ABV for the most commonly sold varietals, which were chardonnay, cabernet sauvignon, merlot, and zinfandel, by obtaining the %ABV for all the wines listed in these varietal categories, excluding those already included in the top 50%. We applied this mean %ABV to 30% of the total sales volume, thus increasing our mean %ABV estimate to cover 80% of the total. This was feasible for each year from 2003 to 2011 because the Washington Price Lists were available and included %ABV values for each brand varietal in each top-selling varietal category. For the years from 2012 to 2016 we carried forward the 2011 %ABV value representing the mean of the most commonly sold varietals and applied it to each year’s 30% share of total sales volume. Data sources for spirits. We used the Liquor Handbooks to obtain data on the leading brands, the volume sold of each, and state and national annual market shares of each spirits sub-type. Spirits sub-type categories were straight whiskey, blended whiskey, Canadian whiskey, Scotch whiskey, Irish whiskey, gin, vodka, rum, tequila, brandy & cognac, cordials & liqueurs, and prepared cocktails. We obtained %ABV values for each brand within each spirits sub-type from the WSLB Price Lists for the years 2003-2012 and from the NABCA database for the years 2013-2016. If the %ABV could not be identified from these sources, we used values from the distillery’s website. Other data sources. We used sales figure data for 2003-2016 from the Alcohol Epidemiologic Data System for the volume of each beverage type sold for each state and nationally for each year. These figures are based on tax receipts and industry sources. We obtained estimates of the United States population aged 15 and older for each state and nationally from 2003 to 2016 from the U.S. Census Bureau. The AEDS figures presented here based on the ABV-invariant method are not the same as those in the AEDS surveillance reports because here they are referenced to the population aged 15 and older, while AEDS reports used figures for the population aged 14 and older. Estimating sales-weighted mean %ABV. To estimate the sales-weighted mean %ABV for each beverage sub-type for each year, we 1) multiplied the %ABV of each leading brand by the volume sold, 2) took the sum of these products, and 3) divided this sum by the sum of the volume sold.
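Written as a formula (with the subscripts defined here for illustration, not taken from the original), this sales-weighted mean for a given sub-type s in year t is

\[
\overline{\mathrm{ABV}}_{s,t} = \frac{\sum_{i} \mathrm{ABV}_{i} \, V_{i,t}}{\sum_{i} V_{i,t}} ,
\]

where \(\mathrm{ABV}_{i}\) is the alcohol content of leading brand i within sub-type s and \(V_{i,t}\) is the volume of that brand sold in year t.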

To estimate the mean %ABV for each beverage type, we multiplied the annual market share of each beverage sub-type by the sales-weighted mean %ABV of that sub-type and summed across all beverage sub-types, for each state for each year and nationally for each year. Estimating per capita alcohol consumption. Nationally and for each state, we calculated PCC estimates for each beverage type by multiplying the mean %ABV by the volume of each beverage type sold and dividing by the population aged 15 and above. To be consistent with international standards, we present PCC estimates in liters. We describe our %ABV estimates for beer, wine, and spirits, their trends between 2003 and 2016, and make comparisons to the static %ABV values used in the AEDS PCC calculations. To explain the trends in %ABV estimates for each of beer, wine, and spirits, we describe the mean %ABV and market shares for beverage sub-types. We describe our beverage-specific and total PCC estimates and trends, and make trend comparisons to estimates from the AEDS ABV-invariant methods. We present national estimates as described above, followed by a brief overview of state estimates. National %ABV estimates for beer, wine, and spirits. Our estimates of the mean %ABV of beer, wine, and spirits sold in the United States between 2003 and 2016 are presented in Figure 1. Overall, the means for all beverage types increased over the 2003-2016 period: from 4.65% to 4.74 %ABV, 11.6% to 12.3 %ABV, and 36.9% to 38.3 %ABV for beer, wine, and spirits, respectively. For beer, the overall trend in mean %ABV was a decline between 2003 and 2005, a small increase in 2006 followed by a steady decline until 2010, after which there was a notable increase until 2015 and a slight decline to 2016. Our estimates were consistently higher than the time-invariant 4.5 %ABV value used for every year in AEDS, with the largest difference of 0.25 percentage points in 2015. For wine, the overall trend in average %ABV was a stable value between 2003 and 2007, then a sharp increase until 2010, after which it declined slightly and remained relatively stable until 2016. Our estimates were lower than the time-invariant 12.9 %ABV value for every year in AEDS, but the difference decreased over time as our estimates increased. For spirits, the overall trend in mean %ABV showed a steady increase between 2003 and 2014, with a slight dip in 2015 and an increase in 2016. Our estimates were consistently lower than the static AEDS estimates, although differences decreased over the time period as our estimates increased. National mean %ABVs and market shares for beverage sub-types. The changes we observed in our national estimates of mean %ABV of each beverage type were influenced by changes in the sales-weighted mean %ABVs of beverage sub-types and changes in beverage sub-type market shares over time, that is, changes in beverage sub-type preferences. The %ABVs and market shares are presented for selected years for each beverage type in Table 1. The initial decrease in the mean %ABV of beer between 2003 and 2005 was driven by declines in market shares and not %ABV, as beer sub-types’ mean %ABV changed by no more than 0.03 over this time period. Premium beer and popular beer had the second and fourth largest market shares in 2003, respectively, and each lost about 12% of its market share by 2005. On the other hand, the increase in the national mean %ABV of beer between 2005 and 2006 was the result of an increase in the mean %ABV of malt beverages, which increased from 6.14% to 6.68 %ABV.
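To make the two aggregation steps above concrete, namely market-share weighting of sub-type means and conversion of sales volume into liters of ethanol per capita, here is a minimal sketch; the sub-type names, shares, %ABVs, volume, and population figure are all made up for illustration and are not the study’s data.

```python
# Illustrative sketch only: combine sub-type means into a beverage-type mean
# %ABV, then convert beverage volume sold into liters of ethanol per capita.
# All numbers below are hypothetical.

# Market share and sales-weighted mean %ABV for hypothetical beer sub-types
beer_sub_types = {
    "premium": {"share": 0.40, "abv": 4.6},
    "light":   {"share": 0.45, "abv": 4.2},
    "craft":   {"share": 0.15, "abv": 6.5},
}

# Beverage-type mean %ABV = sum over sub-types of (market share * mean %ABV)
beer_mean_abv = sum(s["share"] * s["abv"] for s in beer_sub_types.values())

beer_volume_liters = 24_000_000_000   # hypothetical total beer volume sold
population_15_plus = 260_000_000      # hypothetical population aged 15+

# PCC for beer = liters of ethanol sold / population aged 15 and above
beer_pcc = (beer_mean_abv / 100) * beer_volume_liters / population_15_plus
print(f"mean beer %ABV = {beer_mean_abv:.2f}; beer PCC = {beer_pcc:.2f} L ethanol per capita")
```

Repeating the same calculation with the static conversion factors would reproduce the ABV-invariant estimates used for comparison.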

MOE results have replicated those behind existing guidelines for low-risk drinking

The associated chronic organ damage increases exponentially in risk as alcohol consumption accumulates over time. Unmanaged heavy drinking is associated with subsequent heavy drinking, often culminating in brain damage, itself a consequence of heavy drinking but also a driver of future behaviour. Alcohol consumption itself is close to log-normally distributed in drinking populations, skewed towards heavy drinking. There is no natural cut-off point above which “alcohol use disorder” definitively exists and below which it does not. “Alcohol use disorder” is clinically defined as a score on a checklist of symptoms, and there is a smooth, exponential relationship between levels of alcohol consumption and the score on the checklist. Heavy drinking is a cause of the items on the checklist, including compulsion to drink more, which can also be a consequence of brain damage, itself caused by heavy drinking. Thus, “alcohol use disorder” is a diagnostic artefact. No more is needed to consider what is called “alcohol use disorder” other than heavy use over time. For alcohol, this approach does not imply that heavy use over time is the only cause of harm. There are other factors that drive heavy alcohol use and harm, operating independently of, or in interaction with, factors at the molecular and cellular, individual and environmental levels. There is an ongoing discussion as to whether or not sugar is an ‘addictive’ substance that should be captured in the same category as drugs. Framing the problem as one of heavy use over time provides insight into this debate. As with alcohol and high blood pressure, chronic disease risk associated with plasma glucose levels has a continuous exponential relationship with sugar consumption.

The distribution of blood glucose levels is close to log-normal in populations and skewed towards high levels. There is no natural cut-off point above which diabetes linked to sugar definitively exists and below which it does not. Similar to the alcohol model, where heavy use of alcohol over time leads to further heavy use of alcohol from the resulting brain damage, heavy use of sugar over time damages hippocampal function, which leads to further heavy use of sugar over time. Thus, in the ‘heavy use over time’ frame, sugar can be placed in the same category as alcohol and other drugs, and managed with similar governance approaches that promote public health. A core way to document the interference of drugs in human biology and functioning is to use quantitative risk assessment (QRA). QRA is a method applied in regulatory toxicology, for example, to evaluate water contaminants, and before safety approvals for food additives or pesticides. QRA has not been widely applied to drugs. Previous approaches for ranking harm have mostly been based on expert judgements, which have been criticized as being arbitrary and biased. The advantage of QRA is that it provides a formal scientific method to rank the harm potential of drugs, making optimum use of available data. There are several approaches to QRA available, with the Margin of Exposure (MOE) suggested by WHO as being most suitable for prioritizing risk management. In the alcohol field, MOE has been applied to evaluate the liver cirrhosis risk of ethanol, which is the single most important chronic disease condition attributable to alcohol globally. In a detailed study of the components in alcoholic beverages, ethanol was confirmed as the compound with the highest risk. In a detailed comparison between ethanol and non-metabolically produced acetaldehyde contained in beverages, it was also judged that the risk of ethanol comprises more than 99% of the total risk.

It can be concluded that the risk of alcoholic beverages can be evaluated by looking at the effects of ethanol alone. The situation is less clear for tobacco, for which some industry MOE studies point to toxicants other than nicotine. An MOE analysis of electronic cigarette liquids indicated that nicotine is the compound posing the highest risk. MOEs are calculated as the ratio of a toxic dose of the drug to the dose consumed, either individually or on a population scale. The higher the MOE, the lower the level of risk, with low risk not implying safety. An MOE of 100 means that the drug is being consumed at one hundredth of the benchmark dose; an MOE of 1 means that the drug is being consumed at this toxic dose. The MOE for drugs can be calculated taking into account a range of hazard outcomes in health and other well-being domains, as long as suitable dose-response data are available. Therefore, analyses to date are primarily restricted to lethal outcomes based on animal studies. Results for European adults are summarized in Figure 1. The low MOE for alcohol is due to the high levels of consumption by European adults. The MOE results are consistent with the consensus of expert rankings, in which cannabis is ranked with lower risk and alcohol with higher risk than current policies assume. The MOE is inherent to the drug itself; it does not account for the harms that arise from drug delivery systems, for example, smoked tobacco, or from secondary effects such as unclean syringes used for heroin intake. Of course, MOE, as presented here, focuses on the physical body of the adult user as the locus of harm. It does not take into account the sex and age of the user, or harm to individuals other than the user or at collective levels, which are a primary source of social differentiation between drugs. It also focuses on mortality, rather than intoxication in the moment. Differences between the intoxicating power of substances in the moment, and in the behavioural consequences of taking them, are primary reasons why, for example, societies have treated alcohol differently to tobacco.
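Stated as a formula, and as a generic sketch of the ratio described above (assuming the benchmark is a dose such as a BMDL derived from dose-response data), this is

\[
\mathrm{MOE} = \frac{\text{benchmark dose (e.g., BMDL)}}{\text{estimated dose consumed}} ,
\]

so, for example, a benchmark dose of 100 mg/kg body weight per day against an estimated intake of 1 mg/kg body weight per day gives an MOE of 100.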

Nevertheless, we believe that MOE should be applied at the current stage, even when the underlying toxicological data are incomplete, to provide a better alignment of the prioritization of policy to the drugs associated with higher risks, which in this case are nicotine, cocaine, heroin and alcohol.

We have described three harmonizing approaches to reframe our understanding of addictions: biological predisposition to seek out psychoactive substances; heavy use over time as a fruitful characterisation; and quantitative risk assessment. Here, we propose two underlying pillars for a re-design of the governance of drug controls: embedding drugs governance within a comprehensive model of societal well-being; and creating a health footprint which, modelled on the carbon footprint, promotes accountability by identifying who causes what harm to whom from drugs. We propose that societal well-being should be our overarching frame for a more integrated governance and monitoring of drug control policies. Societal well-being, as captured by the OECD, includes quality of life, material conditions and sustainability over time. Gross domestic product is included as a separate domain, recognizing that, while economic well-being is an important component of societal well-being, GDP has significant limitations. GDP excludes, for example, non-market household activity such as parenting, and activities such as conservation of natural resources. GDP also includes activities which do not contribute to well-being, such as pollution and crime, termed regrettables, which are depicted within GDP but outside well-being. The use of and harm done by drugs are affected by and affect all well-being dimensions. Well-being analyses have found that, whilst some illegal drug policies may reduce health harms, they often come with adverse side effects including criminalization, social stigma and social exclusion, all of which exacerbate health harms. Humans are hard-wired to be social animals, with social networks strongly influencing tobacco use and alcohol intake. Punitive drug policies bring about the opposite: social exclusion due to stigma and social isolation. Engagement with illegal drugs conveys especially strong social meanings and can lead to stigma of marginalized heavy users, as opposed to the supposedly more responsible mainstream users. This can lead to punitive societal responses. Meanwhile, exclusion from the mainstream may allow harms to continue unchecked. If a user is caught using drugs in a country with “zero tolerance” of illegal drugs, the ensuing criminal sanctions will impede civic engagement and any improvements in quality of life and material living conditions.

For more detail, see ‘Well-being as a frame for understanding addictive substances’ by Stoll & Anderson. Changes in life expectancy in Mexico illustrate the negative consequences of criminalization. After six decades of gains in life expectancy in Mexico, the trend stagnated after 2000 for both men and women, and for men was reversed after 2005. This was largely due to an unprecedented rise in homicide rates, mostly as a result of drug policies promoting ‘gang wars’ and conflicts between gangs, the police and the army. A well-being frame calls for whole-of-society approaches that progressively legalize illegal drugs to reduce violence and personal insecurity, while focusing on substances as drivers of harm. It balances the complex factors impacting drug use and related harm through the continuous monitoring of policy effects in a proactive way, with regulations embedded in international coordination. It calls for whole-of-society approaches that avoid criminalization where possible and where the costs of addressing the problem are equally distributed across society. Governance strategies should manage nicotine, illegal drugs and alcohol as a whole to avoid overlaps, contradictions, gaps and inequalities. The concern should be focused on harms, both to the user and to others, including family and friends, communities and society as a whole. The structures to support the strategies should be coordinated and multi-sectoral, involving high-level coordination of health, social welfare, and justice agencies in the context of international treaties, and, importantly, equitable across the lifespan, between genders and across cultural groups. To increase the pace of policy change, regional and local public policies can create policy communities and networks within a common international framework. Managing ‘wicked problems’ requires clear rules of private sector engagement in policy making, particularly when private interests go against societal well-being. An evolved governance system must include measures to avoid industry co-optation, through transparency, checks and balances. Private sector stakeholders should operate within established rules. The ongoing monitoring of outcomes within a well-being framework would promote accountability. Modelled on the carbon footprint, we propose a health footprint as the accountability tool. Footprints were developed in the ecological field as a measure of human demand on ecosystems, including water footprints and carbon footprints that apportion greenhouse gas emissions to certain activities, products and populations. The central reason for estimating a carbon footprint is to help reduce the risk of climate change by enabling targeted and effective reductions in greenhouse gas emissions. The health footprint can be considered a measure of the total risk factor-attributable disability-adjusted life years (DALYs) of a defined population, sector or action within a spatial or temporal boundary. It can be calculated using the standard risk factor-related YLL and DALY methodologies of the Global Burden of Disease Study and of the World Health Organization. Health footprints are a starting point. To be accountable, we ultimately need to understand what drives the health footprint. Above the health footprint of Figure 3 are the structural drivers of harm that directly influence the size of the health footprint. Biological attributes and functions include, for example, the biological predisposition to seek out and use drugs.
Genetic variants, for example, could be those that affect the function of alcohol dehydrogenase, influencing consumption levels and harm. Changes in global population size and structure can increase absolute numbers of drug-related DALYs, even though rates per person can decrease over the same time. As sociodemographic status improves in lower-income countries, so do drug-related DALYs; yet, for the same amount of drug use, people with lower incomes suffer more drug-related DALYs than people with higher incomes. Above the structural drivers are the circumstantial drivers, those that can change. Related to drug potency and exposure, an MOE target of at least 10 for all drugs has been argued. Policies could achieve such a result by either reducing drug exposure or by reducing the potency of the drug. Technological developments have led to electronic nicotine delivery systems as widespread alternatives to smoked tobacco, with current best estimates showing e-cigarettes to be considerably less harmful to health than smoked cigarettes. It may be that once e-cigarettes are heavily produced and marketed by the tobacco industry, society will see cigarette-like levels of sustained heavy use of nicotine.
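For reference, the burden-of-disease identity underlying such a footprint is the standard one used in Global Burden of Disease and WHO methodology,

\[
\mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD} ,
\]

where YLL are years of life lost to premature mortality and YLD are years lived with disability; a health footprint would then sum the risk factor-attributable DALYs over the chosen population, sector or action.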

Distress tolerance is one such potentially important transdiagnostic variable

Despite Ghana’s challenges, much progress has been displayed through MindFreedom and BasicNeeds’ community and awareness work, Dr. Dzadey’s implementation of therapy and creation of the Drug Rehabilitation Unit, and Dr. Osei’s repatriation of patients from the Accra Psychiatric Hospital. Though MindFreedom commended the repatriation of patients, BasicNeeds argued that there should have been a half-way home or reintegration centre set up to prepare the patients, who might have spent 20 or more years at the hospital, to live an independent life before being returned home. That would have been ideal; however, it was unrealistic because it would have taken a long time to create the rehabilitation centre and the hospital needed to be decongested as quickly as possible. The Castle Road Special School, built in 1968 and directed by Isaac Ben Roosevelt Gadoter, is the only special needs school in Ghana located in a psychiatric hospital. The school provides hands-on therapy, art, reading, music, and outdoor activities for the mentally ill or disabled children in the Children’s Ward at the Accra Psychiatric Hospital. The teachers there represented one of the very few instances when I saw true compassion for the mentally ill or disabled during my time in Ghana, and one of the even rarer instances when I heard that someone loved their occupation at the psychiatric hospital. After volunteering at SENCDRAC, another special needs school for children with autism, learning delays, and hearing and speech problems, I was fortunate to witness even more sympathy and care for these unique children in Ghana. There are 14 other registered special needs schools in Ghana, and they are at the forefront of displaying empathy for the mentally ill and disabled in the country.

Hopefully, this sympathy will spread to mainstream schools and then to the entire public. The infrastructure of mental health services relies on satisfactory funding: allotting sufficient finances allows for the delivery of adequate mental health services, the effective training of staff, and the development of collaborations and consultations that will make mental health services much more accessible. Though the health sector in general is underfunded, it is imperative that the Ministry of Health allocate funding to community mental health care and that the financing of the psychiatric hospitals become based on need, rather than unjustified ceilings, given the vulnerable nature of the mentally ill. The Mental Health Bill will guarantee that at least eight percent of the total health budget is apportioned to mental healthcare. The government is responsible for addressing the needs of its citizens by formulating suitable legislation, and the Mental Health Bill offers the government a chance to enhance the delivery and accessibility of mental health services. The World Health Organization has called the bill one of the best mental health laws in the developing world and believes that, when it is passed, it can serve as a model for other countries. The bill needs to be passed in order to avoid the collapse of a currently unstable mental health care system. The Mental Health Bill, Dr. Osei, MindFreedom, and BasicNeeds all promote the extension of psychiatric services into community, district, and regional hospitals. Integrating mental health services into primary care has been shown to be more cost-effective than institutional care. This integration will also help improve access to mental health services in remote areas, where patients presently travel great distances for psychiatric treatment. Currently, care is mainly restricted to the institutional administration of psychotropic drugs instead of preventative or rehabilitative psychosocial interventions, due to the dearth of allied mental health personnel and the limited number of community psychiatric nurses. An accelerated, specialist training program should be established locally in order to increase the number of allied mental health personnel.

The problematic brain drain of staff could be alleviated by providing satisfactory remuneration and incentives to encourage trained personnel to stay in Ghana or to return home from overseas. If a mental illness goes untreated, there are three possible consequences for the victim. The first is living with the sickness and underachieving or having low productivity because the person is not performing properly or to their highest potential. The second is that the untreated person could engage in social vices such as drugs, armed robbery, and paedophilia. The third possibility is dying from complications of the illness, e.g., dying by suicide due to depression, engaging in risky activities due to bipolar disorder, not eating because of schizophrenia, or dying from a tumour that initially caused the illness. Each day that the bill remains before Parliament, Ghana is officially allowing the rights of the vulnerable to be abused by placing patients in overly congested institutions with little doctor-patient contact. A society of acceptance makes a much more favourable environment for recovery from mental illness, with stigma representing a large barrier to recovery. Even in developed countries, people who are misinformed about mental illnesses can respond negatively to a friend or relative’s mental illness. Mental illness is not caused by poor decisions or by offending the gods, but can affect anyone no matter their ethnicity, background, age, or gender. The mentally ill can benefit from psychotherapy, group therapy, medication, self-therapy, rehabilitation, and the acceptance and understanding of friends and family. Programs that encourage understanding and awareness of mental health issues and demystify mental illness should be vigorously undertaken so that communities become more tolerant and accepting of the mentally ill. Overcoming these widely prevalent traditional myths about mental illness will help lead more patients to seek professional treatment early on.

Mental health needs to be recognized and integrated into both primary and secondary care, social and health policy, and health system organization. The delivery of mental health care can also be improved by building on currently active programs dealing with the prevention and treatment of tuberculosis, malaria, HIV, domestic violence, and maternal care. This should spark the interest of the government because advancing the mental health system could help the country reach the Millennium Development Goals, which address HIV/AIDS, malaria, tuberculosis, child mortality, maternal health, and the empowerment of women. It has been consistently reported that HIV is associated with poor mental health, both through psychological trauma and through neuropsychiatric complications such as depression, cognitive disorder, mania, and dementia caused by effects on the central nervous system. Strong evidence from developed countries also shows that depression, alcohol and substance abuse disorders, and cognitive impairment negatively affect adherence to antiretrovirals. In the US, those treated for depression for six months showed improvement in HAART adherence compared to those who did not take antidepressants. Some studies have also shown that the incidence of tuberculosis infection is high in people with serious mental illnesses or substance use disorders. Heavy drinkers had double the risk of being infected with tuberculosis compared to non-drinkers, according to a study in the US. Though there is little evidence, depression might also cause low adherence to anti-tuberculosis medication, which makes it very difficult for a country to control the disease. With gynaecological health being greatly affected by depression, anxiety, sexual and domestic abuse, and substance and alcohol use, many studies have also linked reproductive morbidity with mental illness. Depression is more common among women, especially poor women, due to domestic violence and lack of autonomy. Maternal psychosis increases the risk of infant mortality, while maternal schizophrenia can result in low birth weight or premature delivery. Postpartum depression also leads to poor mother-infant interaction and little devotion to the health of the child. Mental disorders increase the risk for transmission of infectious disease and the development of communicable and non-communicable diseases, while other illnesses increase the risk for mental disorders. Because of this co-morbidity, mental health policies should be integrated into different levels of care, with primary care physicians trained in treating mental disorders. Current community and public health programs or campaigns should become familiar with mental disorders in order to help improve both the physical and mental health of their targeted patients, which will lead to quicker recoveries. If general physicians and prominent health-related NGOs start to increase awareness and encourage or participate in the treatment of mental disorders, a great deal of pressure will be taken off the limited mental health staff in Ghana.

DIFFICULTY ADHERING TO LONG-TERM antiretroviral regimens is a well-established and primary cause of treatment failure among individuals living with human immunodeficiency virus. Fundamentally, patient behaviors are paramount to effective HIV management, including optimal lifelong adherence to medications and consistent attendance at HIV clinic appointments. These adherence-related behavioral requirements often occur in the face of stigma-related distress and negative affect and/or aversive and unwanted side effects from the medications themselves. Indeed, the literature is rife with data indicating that ART side effects are strongly related to poor ART adherence, and there is substantial evidence that negative affect is also associated with ART non-adherence. Accordingly, an inability to tolerate negative affect may interfere with ART adherence and persistence. Given the enduring prevalence and clinical significance of sub-optimal ART adherence among HIV-infected individuals, examination of malleable transdiagnostic processes related to indices of HIV management is critical from an intervention standpoint. Here, and throughout the literature, distress tolerance is defined as perceived and/or behavioral persistence in the presence of unpleasant stressors or emotional/physical states. Distress intolerance is characterized by the tendency to rapidly alleviate or escape negative emotional experiences when in crisis or distressing situations, which interferes with engaging in goal-oriented actions. Distress intolerance has been established within various models of problematic behaviors and psychopathology, hence its consideration as a transdiagnostic psychological vulnerability factor. Accordingly, in the context of HIV management, the ability to effectively tolerate distress is crucial because discomfort and/or distress are part of the treatment process and cannot be altogether avoided. Attempts to avoid discomfort and distress may lead to suboptimal ART adherence, with suboptimal adherence defined as less than 95% adherence to older regimens and less than 80% adherence to newer regimens. Suboptimal ART adherence may, in turn, lead to eventual increases in viral load and potential ART-resistant HIV strains. To illustrate, a person may have difficulty sustaining adequate medication adherence if they are unwilling to tolerate the negative emotions triggered by being reminded of living with HIV each time they take ART medications. Thus, low tolerance of unpleasant affective states or behavioral tasks may be a clinically addressable risk factor for poor ART adherence and HIV disease progression. In addition to recent work showing perceived distress intolerance to be associated with psychological symptoms among individuals with HIV, a study by O’Cleirigh and colleagues found that greater perceived distress tolerance was associated with better self-reported ART adherence and HIV disease management. Although this work represents an important first step, there is a lack of data on the relation between distress tolerance and ART adherence using objective adherence measures or a multi-method approach to DT assessment. Given the inherent difficulty participants have in accurately identifying the motives for their behavior, along with the potential for inflated correlations due to shared method variance, reliance on self-report methodologies alone for examining distress tolerance may be problematic.
As such, it is recommended to include both self-report and behavioral measures when assessing distress tolerance. To evaluate the explanatory role of distress tolerance as a transdiagnostic vulnerability factor potentially underlying several indices of HIV disease management, the present study examined the relation between distress tolerance and ART adherence using objective measures of ART adherence, response to ART, and immunocompromise, together with two measures of distress tolerance. Behavioral distress tolerance measures evoke distress “in vivo,” thereby capturing one’s objective capacity for tolerating distress, whereas self-report measures capture one’s “perceived” capacity for tolerating aversive and unwanted psychological experiences. Given the evidence that poor distress tolerance is associated with negative affectivity, and that negative affectivity and ART side effects are associated with ART non-adherence, we also sought to clarify the association between distress tolerance and ART adherence when controlling for negative affectivity and ART side-effect severity.
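As a concrete illustration of the adherence thresholds quoted above, the short sketch below computes percent adherence from dose counts (for example, from an electronic pill-cap monitor) and flags suboptimal adherence. The function names, data format, and choice of Python are illustrative assumptions and not the study's actual measurement pipeline.

```python
# Hypothetical helper illustrating the adherence cut-offs quoted above
# (<95% for older regimens, <80% for newer regimens); the data format and
# function names are illustrative assumptions, not the study's pipeline.

def adherence_pct(doses_taken: int, doses_prescribed: int) -> float:
    """Percent of prescribed doses actually taken over a monitoring window."""
    return 100.0 * doses_taken / doses_prescribed

def is_suboptimal(pct: float, newer_regimen: bool) -> bool:
    """Suboptimal adherence: <80% on newer regimens, <95% on older regimens."""
    threshold = 80.0 if newer_regimen else 95.0
    return pct < threshold

# Example: 52 of 60 prescribed doses recorded by an electronic pill cap,
# on an older regimen -> ~86.7%, which falls below the 95% threshold.
pct = adherence_pct(52, 60)
print(round(pct, 1), is_suboptimal(pct, newer_regimen=False))
```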

Alcohol use disorder is a chronic relapsing disorder with a major public health impact

Despite the significant public health burden, there are few pharmacological treatment options, with only four medications receiving FDA approval. This contrast between public need and the lack of approved pharmacological treatments does not reflect a lack of research; on the contrary, close to two dozen potential medications have reached clinical testing. It is instead largely owed to an expensive and burdensome medications development process, notoriously dubbed the “valley of death,” in which medications fail in their transition from preclinical to initial clinical testing. There is a second “valley of death” where medications fail to translate from early human laboratory efficacy into large-scale, ecologically valid clinical trials. The practice quit attempt model therefore aims to provide a novel early efficacy paradigm for screening future AUD medication candidates more efficiently. The present study will use naltrexone, an FDA-approved medication for AUD, as an active control to test both the practice quit attempt paradigm and the efficacy of varenicline. NTX is an FDA-approved opioid antagonist with high affinity for both the mu-opioid and kappa-opioid receptors. Through its blockade of endogenous opioid signaling, NTX has been associated with reductions in both alcohol craving and consumption, making it an excellent candidate for the practice quit paradigm. VAR is an FDA-approved medication for smoking cessation that has been associated with reduced alcohol craving in previous animal and human laboratory studies.

Based on these findings and the broader literature, VAR holds potential as an AUD pharmacotherapy and, accordingly, is an appropriate experimental medication within the practice quit paradigm. Earlier screening models for phase 2 medication trials largely lack the ecological validity needed to construct clinically meaningful endpoints for treatment-seeking individuals. This practice quit study differs from previous trials in that it introduces a paradigm with assay sensitivity via placebo controls; a superiority comparison between an FDA-approved medication and an experimental candidate; increased ecological validity, in that participants are asked to quit drinking in the real world rather than being evaluated only in the laboratory, similar to large-scale RCTs; and an alcohol cue-reactivity (CR) assessment to validate the sensitivity of the paradigm for detecting medication effects. Successful completion of this study will advance medications development by proposing and validating a novel early efficacy model for screening AUD pharmacotherapies, which in turn can serve as an efficient strategy for making go/no-go decisions about whether to proceed with clinical trials. Specifically, a valid model of initial efficacy will allow us to reliably detect an efficacy signal for AUD pharmacotherapies and, in turn, decide whether to proceed to full-scale efficacy testing.

Over 14 million adults in the United States have an AUD, yet only 8% of adults with current AUD received treatment. Only four pharmacotherapies are currently approved by the Food and Drug Administration for the treatment of AUD, and these medications are only modestly effective, with numbers needed to treat ranging from 7–144 across studies. There is therefore a clear need to develop more efficacious treatments, particularly those with novel molecular targets.

To that end, the modulation of neuroimmune signaling is a promising AUD treatment target. A growing body of literature indicates that the neuroimmune system may play a critical role in the development and maintenance of AUD, termed the neuroimmune hypothesis of alcohol addiction. In animal models, chronic alcohol consumption induces a neuroimmune response through the activation of microglia and increased expression of pro-inflammatory cytokines, along with neuronal cell death. Elevated microglial markers have been identified in the postmortem brains of individuals with AUD, and pro-inflammatory cytokine levels are higher in individuals with AUD than in controls. Neuroinflammation has also been implicated in mood disorders. Moreover, mood states are considered a central feature of AUD, with a negative mood state emerging as AUD severity increases. Interactions between inflammatory pathways and the neurocircuitry activated in depression and addiction are thought to contribute to negative mood. A neuroimmune modulator may therefore treat AUD and related negative mood symptoms through similar pathways. Ibudilast (IBUD) shows promise as a novel AUD pharmacotherapy. IBUD reduced alcohol intake by 50% in two rat models and selectively decreased drinking in alcohol-dependent mice relative to non-dependent mice. In a human laboratory trial, treatment with IBUD was well tolerated and resulted in reductions in tonic craving and improvements in mood reactivity to stress and alcohol cue exposure compared to placebo. IBUD is a selective phosphodiesterase (PDE) inhibitor, with preferential inhibition of PDE3A, PDE4, PDE10A, and PDE11A, and an inhibitor of macrophage migration inhibitory factor (MIF). Both PDE4 and MIF are involved in neuroinflammatory processes through the regulation of inflammatory responses in microglia, and PDE4B expression is upregulated after chronic alcohol exposure. IBUD is therefore thought to reduce neuroinflammation through the inhibition of these pro-inflammatory molecules.

IBUD crosses the blood–brain barrier and is neuroprotective, suppressing the production of pro-inflammatory cytokines and enhancing the production of anti-inflammatory cytokines. While IBUD is a promising AUD pharmacotherapy, its underlying mechanisms of action on the human brain remain largely unknown. PDE4 is highly expressed in neuronal and non-neuronal cells, including glia, in brain regions associated with reward and reinforcement, including the ventral striatum (VS), and PDE4 can directly regulate dopamine in the striatum in mice. Functional magnetic resonance imaging (fMRI) alcohol cue-reactivity paradigms have commonly been used to evaluate whether pharmacological AUD treatments alter brain activation in reward-processing circuitry. Alcohol cue-elicited reward activation is predictive of treatment response, demonstrating that functional neuroimaging can provide mechanistic data for AUD pharmacotherapy development. This may be particularly relevant for IBUD, whose mechanism of action as an AUD treatment is currently unknown but can be hypothesized to involve the striatum, which is activated in the alcohol cue-reactivity paradigm. Therefore, the present study sought to investigate the efficacy of IBUD in attenuating alcohol cue-elicited VS activation in individuals with AUD. The current study was an experimental medication trial of IBUD compared to placebo in non-treatment-seeking individuals with AUD. To advance the development of IBUD as an AUD treatment, the present study examined the efficacy of IBUD, relative to placebo, in reducing negative mood and heavy drinking, defined as ≥5 drinks/day for men and ≥4 drinks/day for women, over the course of 2 weeks. A micro-longitudinal design allowed for daily assessments during the course of treatment. We hypothesized that ibudilast would reduce negative mood and decrease heavy drinking over the course of the study. To investigate the neural substrates underlying IBUD’s action, the present study also examined the effect of IBUD on neural alcohol cue-reactivity. We hypothesized that ibudilast would attenuate alcohol cue-elicited activation in the VS relative to placebo. Finally, this study explored the relationship between neural alcohol cue-reactivity in the VS and drinking outcomes.

This study was conducted at an outpatient research clinic in a medical center. Participants were recruited through social media and mass transit advertisements. Initial screening was conducted by telephone interview, and eligible participants were invited for an in-person assessment. Eligible individuals were between 21 and 50 years old and met criteria for a current DSM-5 mild-to-severe AUD. Participants were required to drink above moderate drinking levels, defined by the NIAAA as >14 drinks/week for men and >7 drinks/week for women, in the 30 days prior to screening. Exclusion criteria were: currently receiving or seeking treatment for AUD; a past-year DSM-5 diagnosis of substance use disorder; a lifetime diagnosis of schizophrenia, bipolar disorder, or any psychotic disorder; non-removable ferromagnetic objects in the body; claustrophobia; and serious head injury or a prolonged period of unconsciousness. Participants were also excluded if they had a medical condition thought to interfere with safe participation or reported recent use of medications contraindicated with ibudilast.
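To make the quantitative drinking criteria above concrete, the sketch below classifies heavy drinking days and screens weekly totals against the NIAAA levels quoted in the text. The variable names and toy diary data are assumptions for illustration only.

```python
# Illustrative classification of drinking outcomes using the cut-offs quoted
# in the text (NIAAA screening levels; heavy-drinking-day definitions).
# Variable names and the toy diary data are assumptions.

def is_heavy_drinking_day(drinks: float, sex: str) -> bool:
    """Heavy drinking day: >=5 drinks (men) or >=4 drinks (women)."""
    return drinks >= (5 if sex == "male" else 4)

def exceeds_moderate_weekly(total_drinks: float, sex: str) -> bool:
    """NIAAA screening level: >14 drinks/week (men) or >7 drinks/week (women)."""
    return total_drinks > (14 if sex == "male" else 7)

diary = [6, 0, 2, 5, 0, 3, 4]  # one participant's past-week daily drink counts
heavy_days = sum(is_heavy_drinking_day(d, "female") for d in diary)
print(heavy_days, exceeds_moderate_weekly(sum(diary), "female"))  # 3, True
```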

Women of childbearing age had to be practicing effective contraception and could not be pregnant or nursing. See Fig. 1 for the trial enrollment flow.

Participants completed a series of assessments for eligibility and individual differences. These measures included the Structured Clinical Interview for DSM-5, the Clinical Institute Withdrawal Assessment for Alcohol Scale – Revised, and the 30-day Timeline Followback interview for alcohol, cigarette, and cannabis use. Participants also completed assessments of their alcohol use, including the Alcohol Use Disorders Identification Test and the Alcohol Dependence Scale, which measure the severity of alcohol use problems; the Penn Alcohol Craving Scale and the Obsessive Compulsive Drinking Scale, which measure alcohol craving; and the Reasons for Heavy Drinking Questionnaire, used to assess withdrawal-related dysphoria as indicated by question #6: “I drink because when I stop, I feel bad”. Participants also completed measures of smoking severity and depressive symptomatology. At each in-person visit, participants were required to have a breath alcohol concentration of 0.00 g/dl and to test negative on a urine toxicology screen for all drugs of abuse. Blood pressure and heart rate were assessed at screening and at each visit. Participants completed three in-person study visits, on Day 1, Day 8, and Day 15. Randomization visits occurred on Mondays and Tuesdays to ensure that participants were at the target medication dose by the weekend. Side effects were elicited in an open-ended fashion and reviewed by the study physicians. Adverse events were coded using the MedDRA v22.0 coding dictionary; treatment-emergent adverse events were defined as adverse events that started after the first dose of study drug or worsened in intensity after the first dose. Participants completed daily diary assessments, reporting on their past-day alcohol use, mood (assessed with a shortened form of the Profile of Mood States), and craving (assessed with a shortened form of the Alcohol Urge Questionnaire). Participants received daily text message reminders with links to these assessments.

A set of generalized estimating equations (GEEs) with a compound symmetric covariance structure was run in SAS 9.4 to account for the repeated measures. GEEs were selected as the analytical method because parameter estimates remain consistent even when the covariance structure is mis-specified. Of note, because of missing data on all outcome and predictor variables, two participants were excluded via listwise deletion from the GEE analyses. A GEE model was first run to assess the effect of medication on negative mood. The dependent variable, negative mood, was treated as continuous, so a normal distribution with an identity link function was chosen, and a compound symmetric covariance structure accounted for the repeated assessments. Independent variables for these analyses were medication, drinking day, and the interaction of medication by drinking day. Sex, age, depressive symptomatology, and smoking status were examined as covariates; only significant covariates were retained in the final model to improve model clarity and ease of replication. A similar model was conducted to assess the effect of medication on craving, with the dependent variable being craving as measured by the AUQ.
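For readers who want to see the shape of the repeated-measures model described above, the snippet below is a minimal Python/statsmodels analogue of the negative-mood GEE (the study itself used SAS 9.4). The column names and the data file are assumptions about a long-format daily-diary dataset, not the study's actual code.

```python
# Minimal Python/statsmodels analogue of the negative-mood GEE described
# above (the study itself used SAS 9.4); column names such as subject_id,
# neg_mood, medication, and drinking_day are assumptions about a
# long-format daily-diary file.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("daily_diary.csv")  # hypothetical long-format data

mood_gee = smf.gee(
    "neg_mood ~ medication * drinking_day",  # main effects + interaction
    groups="subject_id",                     # repeated measures per participant
    data=df,
    family=sm.families.Gaussian(),           # continuous outcome, identity link
    cov_struct=sm.cov_struct.Exchangeable(), # compound-symmetric working correlation
)
print(mood_gee.fit().summary())
```

The exchangeable working correlation plays the same role as the compound symmetric structure named in the text, and covariates such as sex, age, depressive symptomatology, and smoking status could be appended to the formula in the same way.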
For both analyses, predicted means, standard errors, and 95% confidence intervals for negative mood and craving were calculated from the final models. The dependent variables for the drinking analyses were binary, with 1 indicating a heavy drinking day (or any drinking day) and 0 indicating no heavy drinking (or no drinking), respectively. A binomial distribution with a logit link function was chosen to model these binary dependent variables. Because participants were not on medication at baseline, that time point was excluded from the analysis. Independent variables included in the models were medication, time, and the interaction of medication by time, with baseline drinking included as a control. As above, sex, age, depressive symptomatology, and smoking status were examined as covariates; only significant covariates were retained in the final model to improve model clarity and ease of replication. For both analyses, predicted probabilities, standard errors, and 95% confidence intervals for heavy drinking and any drinking were calculated from the final models. A general linear model was used to evaluate the effect of medication on VS activation. The dependent variable was the VS percent signal change between ALC and BEV blocks, and medication was the independent variable. Age, sex, depressive symptomatology, and smoking status were examined as covariates; only significant covariates were retained in the final model. Finally, to evaluate whether VS activation interacted with medication in predicting drinking in the week following the scan, a between-subject factor for VS activation was added to the model, along with a medication by VS activation interaction.
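Continuing the sketch above, the snippet below shows a comparable Python/statsmodels stand-in for the binary drinking-outcome GEE (binomial distribution, logit link) and the simple linear model of cue-elicited VS activation. All column and file names are assumptions for illustration.

```python
# Sketch of the binary drinking-outcome GEE (logit link) and the simple
# linear model for cue-elicited VS activation, again as a Python/statsmodels
# stand-in for the SAS models described; all column names are assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

daily = pd.read_csv("daily_diary.csv")    # hypothetical long-format daily file
scan = pd.read_csv("cue_reactivity.csv")  # hypothetical one-row-per-participant file

# Heavy drinking day (0/1) ~ medication, time, their interaction, plus
# baseline drinking as a control.
hdd_gee = smf.gee(
    "heavy_day ~ medication * day + baseline_drinks",
    groups="subject_id",
    data=daily,
    family=sm.families.Binomial(),            # logit link by default
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(hdd_gee.fit().summary())

# VS percent signal change (ALC minus BEV blocks) regressed on medication group.
vs_glm = smf.ols("vs_psc ~ medication", data=scan).fit()
print(vs_glm.summary())
```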