
Independent samples t-tests probed significant group × time interactions within these 5 clusters

One summary measure from each test was chosen a priori as the best estimate of the function of that test. We factor analyzed the test battery to reduce the number of variables. Supplementary text and Table S1 provide extensive detail on the battery. We examined missing data prior to implementing multiple imputation (MI). From a sample of 1043, 953 received baseline neurocognitive testing. Of the CHR sample that transitioned to psychosis during the two-year follow-up, 89 received testing. Overall data completeness for the tested sample was 96.6% for 19 test variables. After MI, we conducted a factor analysis of the 19 neurocognitive variables. All analyses were done with SPSS, version 23. Groups were HCs, CHR converters (CHR+C) and non-converters (CHR-NC). T-tests, Kolmogorov-Smirnov Z and chi-square tests were used to assess demographic comparability. Due to differences in age and maternal education, we controlled for both using MANCOVA and also controlled for site as a random-effects factor with a linear mixed model. We covaried for estimated and premorbid IQ to test the role of general intellectual ability in cognitive dysfunctions. We compared medicated vs. non-medicated groups of CHR+C vs. HC and CHR+C vs. CHR-NC by conducting MANOVA with planned comparisons using residualized factor scores generated from the linear mixed models. To examine group cognitive profiles we residualized out age and maternal education from all neurocognitive indices. Area under the curve was calculated by the ROC program in SPSS. Prediction of conversion to psychosis and time to conversion was assessed by logistic and Cox regression. Covariates were selected based on similar prediction analyses conducted in NAPLS-1 and NAPLS-2 and entered into the model if they were associated with survival time and predicted conversion status in logistic regression. Survival time was time to the last SIPS interview or conversion, whichever occurred first. Candidate covariates were added to the model as a block then subjected to backward selection with a criterion p value of 0.10. Candidates that survived at p ≤ .05 within domain were entered into an omnibus model.
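The prediction step lends itself to a compact illustration. Below is a hedged R sketch (the study used SPSS; this is not the authors' code) of the logistic and Cox regressions for conversion and time to conversion, with block entry of candidate covariates and backward selection at p = .10. All variable names (chr, conv, surv_time, decl_mem, and so on) are hypothetical placeholders.

```r
# Hypothetical sketch of the conversion-prediction analyses described above.
# 'chr' is an assumed data frame; conv = conversion status (0/1),
# surv_time = time to last SIPS interview or conversion.
library(survival)

# Logistic regression for conversion status
fit_logit <- glm(conv ~ decl_mem + attn_wm + pos_sx + verbal,
                 family = binomial, data = chr)

# Cox regression for time to conversion; candidates entered as a block
fit_cox <- coxph(Surv(surv_time, conv) ~ decl_mem + attn_wm + pos_sx + verbal,
                 data = chr)

# Backward selection by Wald p-value with criterion p = .10
repeat {
  p <- summary(fit_cox)$coefficients[, "Pr(>|z|)"]
  if (max(p) <= 0.10) break
  worst <- names(which.max(p))
  fit_cox <- update(fit_cox, as.formula(paste(". ~ . -", worst)))
}
summary(fit_cox)  # survivors at p <= .05 would enter the omnibus model
```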

ESs were calculated with Cohen's d. Bonferroni-corrected significance for mean comparisons was set for individual tests at p < .00263 and for factors at p < .0125. In the largest and most detailed study of CHR prodromal cases, using a multi-site, case-control design and standardized assessments, we demonstrated that individuals at CHR were impaired in virtually all neurocognitive dimensions compared to controls, and this could not be accounted for by premorbid or current general cognitive ability, current depression, medications, or alcohol or cannabis abuse. ESs in comparison to HCs for Declarative Memory and Attention/WM were large for CHR+C participants. Compared to CHR-NC, CHR+C participants were significantly impaired in Attention/WM and Declarative Memory, the latter significantly predicting conversion to psychosis and time to event in concert with positive symptoms. Comparable impairments were observed in never-medicated and currently unmedicated CHR-NCs and CHR+Cs. These data demonstrate the sensitivity of neurocognitive function as a component risk marker for psychosis. Our findings support theoretical models hypothesizing Attention/WM impairments, and even more strongly impaired Declarative Memory, as central to the CHR stage. The results are consistent with NAPLS-1, in which Declarative Memory had the largest ES decrement, of roughly the same magnitude in CHR+C. The distinct profile of performance across domains, especially in CHR+C, suggests that at the incipient psychotic phase, specific forms of neurocognition are affected and are predictive of later psychosis. Within CHR participants, there was considerable variability in neurocognitive performance. CHR-NCs' impairments were on the order of those seen in other psychiatric disorders in young people, such as attention-deficit/hyperactivity disorder. CHR+Cs' impairments were approximately 57% larger, although smaller than those observed in first-episode schizophrenia. Analyses of individual variability and longitudinal analyses are needed to identify how profile and severity differ according to comorbid disorders, final diagnoses, and pre- versus post-conversion status. A key question was how neurocognitive deficits are associated with medication status. Psychotropic-naive and unmedicated subgroups had significant impairments comparable to the overall CHR subgroups. Treated groups, including those on antipsychotic medications, were largely comparable to those without treatment, except that they had somewhat greater Attention/WM impairment.
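For concreteness, the .00263 threshold is .05 divided across the 19 individual test variables, and .0125 is consistent with .05 divided across four factors (an inference from the threshold, not stated explicitly above). A minimal R sketch of the correction and effect-size arithmetic, not the authors' code:

```r
# Bonferroni thresholds as reported above
alpha <- 0.05
alpha / 19   # 0.00263: per-test threshold for 19 test variables
alpha / 4    # 0.0125: per-factor threshold, consistent with four factors

# Cohen's d with a pooled standard deviation, for two group score vectors
cohens_d <- function(x, y) {
  nx <- length(x); ny <- length(y)
  sp <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  (mean(x) - mean(y)) / sp
}
```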

These observations emphasize the essential nature of neurocognitive impairment in the CHR stage and de-emphasize the role of medications as confounders in our results. Our design precludes conclusions about causality, and future work should study the effect of medications on neurocognition in CHR populations in a prospective design. There were a number of other potentially important observations. The unexpectedly higher Verbal score that was retained in logistic and Cox regressions in concert with impaired Declarative Memory was not a significant predictor in univariate comparisons. This pattern of high verbal premorbid ability and impaired memory, coupled with the P1/P2 composite, appears to be a pernicious combination predicting conversion and needs replication. Importantly, the BVMT-R showed comparably large impairments as the two verbal memory tasks, highlighting that Declarative Memory deficits in CHR are not solely verbal and that Declarative Memory impairments are key neurocognitive risk markers. Neurocognitive tests used in concert with other clinical and psychobiological measures may enhance prediction of psychosis or functional outcome. For example, in analyses limited to two tests selected from literature review prior to these neuropsychological analyses, NAPLS-2 investigators found that the HVLT-R and BACS Symbol Coding added modest but significant independent predictive power above the clinical measures in a risk calculator algorithm for psychosis conversion, and this was replicated in an independent non-NAPLS sample. Similar results have been observed in other studies. In this study, we showed that other tests, including the BVMT-R, PAM, and ACPT QA Vigil, added significant independent variance beyond P1-P2 symptoms, augmenting the importance of neurocognitive markers. NAPLS-2, because of its large sample from diverse geographical areas, extensive neurocognitive coverage, remarkably complete neurocognitive dataset, and large never-medicated sample, allowed for a strong confirmation of neurocognitive hypotheses. The NAPLS-2 study built upon and improved the NAPLS-1 assessment, confirming and expanding prior results. This broad range of measures expanded the scope of what is known about CHR neurocognition. Limitations include the fact that most of these tests and factors are complex. Thus, while Declarative Memory is clearly affected, the tasks tapping this domain cannot parse the specific mechanisms underlying the deficits.

Further research with more molecular measures of cognition, such as those developed by CENTRACS, may allow specification of the cognitive processes underlying the deficits. We did not randomize or counterbalance the order of tests, so we cannot rule out order effects. However, the most impaired tasks were spread across the battery from the sixth to the last test, so there is no obvious fatigue effect. Adolescence is a developmental period between childhood and adulthood characterized by marked physiological, psychological, and behavioral changes. Adolescents experience rapid physical growth, sexual maturation, and advances in cognitive and emotional processing. These changes coincide with increases in substance use, with alcohol being the most widely used illegal substance among adolescents. National survey data indicate that 33% of 8th grade students have tried alcohol, and this percentage increases to 70% among 12th graders. Of greater concern is the increase in heavy episodic drinking, with prevalence rates increasing from 6% to 22% for 8th and 12th grades, respectively, as heavy episodic drinking during adolescence is associated with numerous negative effects on adolescent health and well-being, including risky sexual behaviors, hazardous driving, and alterations in adolescent brain development. During adolescence, the brain undergoes significant changes, and a recent longitudinal neuroimaging study suggests that heavy episodic drinking during this developmental period alters brain functioning. Squeglia and colleagues examined the effects of heavy episodic drinking on brain function during a visual working memory task, comparing brain activity at baseline and again at follow-up in adolescents who transitioned into heavy drinking versus demographically matched adolescents who remained nondrinkers. Adolescents who initiated heavy drinking exhibited increasing brain activity in frontal and parietal brain regions during the task, whereas adolescents who remained nondrinkers through follow-up showed decreasing frontal activation, consistent with studies of typical development. Thus, adolescent heavy episodic drinking may alter brain functioning involved in working memory; however, additional longitudinal studies are needed to explore the effects of alcohol on neural correlates of other vital cognitive processes, such as response inhibition.

Response inhibition refers to the ability to withhold a prepotent response in order to select a more appropriate, goal-directed response. The neural circuitry underlying response inhibition develops during adolescence, and as such, brain response during inhibition changes during adolescence. Briefly, cross-sectional research indicates that brain activation during response inhibition transitions from diffuse prefrontal and parietal activation to localized prefrontal activation. Longitudinal studies report that atypical brain responses during response inhibition, despite comparable performance, are predictive of later alcohol use, substance use and dependence symptoms, and alcohol-related consequences. Together, these findings indicate that neural substrates associated with response inhibition change over time and that abnormalities in development may contribute to later substance use. To this end, the current longitudinal fMRI study examined the effects of initiating heavy drinking during adolescence on brain activity during response inhibition. We examined blood oxygen level dependent (BOLD) response during a go/no-go response inhibition task prior to alcohol initiation, then again on the same scanner approximately 3 years later, after some adolescents had transitioned into heavy drinking. Based on our previous findings, we hypothesized that adolescents who transition into heavy drinking would show reduced BOLD response during response inhibition prior to initiating heavy drinking, followed by increased activation after the onset of heavy episodic drinking, as compared to adolescents who remained non-users. By identifying potential neurobiological antecedents and consequences of heavy episodic drinking, this study will extend previous research on the effects of alcohol on brain function and point to risk factors for heavy episodic drinking during adolescence. Table 1 provides baseline and follow-up descriptive information for Heavy Drinkers and Controls. Of note, the ability to inhibit prepotent responses improved with age, with no differences in this improvement between groups. The no-go versus go contrast at baseline revealed activations consistent with meta-analyses of response inhibition, showing significant clusters of activation in inferior, superior, and medial frontal gyri, and in parietal, temporal, cerebellar, and subcortical areas. Because Heavy Drinkers reported significantly more substance use than Controls at follow-up, a lifetime substance use composite and biological sex were included as covariates. A repeated measures ANCOVA revealed significant group × time interactions in 5 regions: the bilateral middle frontal gyri, right inferior parietal lobule, left putamen, and left cerebellar tonsil. At baseline, Heavy Drinkers showed significantly less no-go BOLD contrast than Controls in all 5 clusters. Across adolescence, Heavy Drinkers exhibited increasing response inhibition BOLD contrast, and Controls showed attenuated responses in these clusters. At follow-up, Heavy Drinkers showed significantly greater response inhibition activity than Controls in 4 brain regions: bilateral middle frontal gyri, right inferior parietal lobule, and left cerebellar tonsil. Exploratory post-hoc analyses examined whether BOLD response contrast change over time correlated with subsequent alcohol involvement in Heavy Drinkers. BOLD response contrast during no-go relative to go trials over time in the right middle frontal gyrus positively correlated with follow-up lifetime number of drinks.
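A hedged R sketch of this group × time analysis and the exploratory correlation follows; it uses a mixed-model formulation rather than the authors' repeated measures ANCOVA software, and all column names are hypothetical.

```r
# Sketch of the group x time test on cluster-averaged no-go > go BOLD contrast,
# with sex and a lifetime substance-use composite as covariates.
# 'long_df' (one row per subject x timepoint) is an assumed data frame.
library(lme4)

fit <- lmer(bold_contrast ~ group * time + sex + substance_composite +
              (1 | subject), data = long_df)
summary(fit)  # the group:time term carries the interaction of interest

# Exploratory post-hoc: change in right middle frontal gyrus contrast vs.
# follow-up lifetime drinks among Heavy Drinkers ('wide_df' assumed, one row
# per subject with a precomputed change score).
hd <- subset(wide_df, group == "Heavy Drinker")
cor.test(hd$rmfg_contrast_change, hd$lifetime_drinks)
```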
Follow-up hierarchical linear regressions revealed that BOLD response contrast at baseline did not predict follow-up alcohol consumption after controlling for baseline alcohol, biological sex, and follow-up age at our conservative, corrected threshold. The present longitudinal neuroimaging study examined the effects of initiating heavy drinking during adolescence on brain responses during response inhibition. We hypothesized, based on previous findings, that adolescents who transition into heavy drinking would show reduced BOLD response during response inhibition prior to initiating heavy drinking, followed by increased activation after the onset of heavy episodic drinking, as compared to adolescents who remained non-drinkers. Examining a longitudinal neuroimaging sample of youth both pre- and post-alcohol use initiation allowed us to address the etiology of neural pattern differences. Although group × time effect sizes were small, our findings suggest that differential neural activity patterns predate alcohol initiation and also arise as a consequence of heavy drinking.
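The hierarchical step can be read as an incremental-variance test; a minimal R illustration under the same assumed column names:

```r
# Does baseline BOLD contrast add predictive variance for follow-up drinking
# beyond baseline alcohol use, sex, and follow-up age? (Names are placeholders.)
step1 <- lm(fu_drinks ~ base_drinks + sex + fu_age, data = wide_df)
step2 <- update(step1, . ~ . + base_bold_contrast)
anova(step1, step2)  # F test on the R^2 change attributable to BOLD
```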

Recent studies have shown that GlyRs are an important target for cannabinoids in the central nervous system

These data thus suggest that AEA augments ventricular myocardial IKATP through a CB2 receptor-dependent pathway, which may underlie the antiarrhythmic and cardioprotective action of AEA; by contrast, AEA exerts no effect on another inward rectifier current, IK1, in ventricular myocytes. Nevertheless, whether AEA causes an inhibitory effect on myocardial KATP channels in excised membrane patches remains to be determined. Endocannabinoid production can be induced when the cardiovascular system is functioning under deleterious conditions such as circulatory shock or hypertension; endocannabinoids are also involved in preconditioning by nitric oxide. Activation by endogenously released AEA under pathophysiological conditions may contribute to the cardioprotection afforded by sarcolemmal KATP channels. The difference in KATP channel responses to endocannabinoids between different cell types may be partly attributed to the distinct, tissue-specific molecular compositions of KATP channels or the cellular background in which the channels are expressed. Across the studies discussed above, the time course for endocannabinoid lipids and analogues to induce the potassium channel-modulating effects is generally slow, with a maximal response achieved after several minutes of continuous drug exposure. Moreover, there is no significant washout of the endocannabinoid effect upon perfusion with drug-free solution, unless the wash solution contains lipid-free BSA. In almost all of these studies, the experimentation was performed at room temperature, except the study by Gantz and Bean, where it was conducted at 37°C instead. Cannabinoids are lipid-soluble compounds, and dimethyl sulfoxide or 100% ethanol was chosen as a solvent to prepare aliquots of endocannabinoids at millimolar concentrations in these studies, with the final concentration of solvent during experiments consistently ≤ 0.1–0.15%. Gantz and Bean showed that the maximal inhibitory effect of 2-AG on the fast-inactivating A-type K+ current IA could be measured within 1 min of drug exposure.

The more prompt response to endocannabinoids observed by Gantz and Bean may be attributed, in part, to the higher temperature at which their study was carried out. AEA can modulate the functions of ion channels other than potassium channels, such as TRP vanilloid type 1 (TRPV1) channels, 5-HT3 receptors, nicotinic acetylcholine receptors, glycine receptors, and CaV and voltage-gated Na+ channels, in a manner independent of known cannabinoid receptors. Several studies are reviewed below to exemplify that, besides potassium channels, multiple ion channel types belonging to other ion channel families can also serve as molecular targets of endocannabinoids, which collectively manifests the relevance of direct modulation of various ion channels in mediating the biological functions of endocannabinoids. The TRP channel superfamily of non-selective, ligand-gated cation channels is involved in numerous physiological functions such as thermo- and osmosensation, smell, taste, vision, hearing, and pressure or pain perception. The endocannabinoid AEA is structurally related to capsaicin, the agonist for TRPV1 channels. It has been demonstrated by Zygmunt et al. that AEA induces vasodilation in isolated arteries in a capsaicin-sensitive manner and that the AEA effect is accompanied by release of calcitonin gene-related peptide (CGRP), a vasodilator peptide. This vasodilatory action of AEA is abolished by a CGRP receptor antagonist but not by the CB1 receptor antagonist SR141716A; moreover, CB1 and CB2 receptor agonists do not reproduce the vasodilation caused by AEA. Additionally, AEA concentration-dependently elicits capsazepine-sensitive currents in cells overexpressing cloned TRPV1 channels in both whole-cell and excised patch modes. These findings thus suggest that AEA induces peripheral vasodilation by activating TRPV1 channels on perivascular sensory nerves independently of CB1 receptors, consequently causing the release of CGRP. AEA and other structurally related lipids may act as endogenous TRPV channel agonists or modulators to regulate various functions of primary sensory neurons such as nociception, vasodilation, and neurogenic inflammation. Low voltage-activated, or T-type, calcium channels, encoded by the CaV3 gene family, regulate the excitability of many cells, including neurons involved in nociceptive processing, sleep regulation and the pathogenesis of epilepsy; they also contribute to pacemaker activities.

The whole-cell currents of both cloned and native T-type calcium channels are blocked by sub-micromolar concentrations of AEA; this effect is prevented by inhibition of AEA membrane transport with AM404, suggesting that AEA acts intracellularly. AEA concentration-dependently accelerates the inactivation kinetics of T-type calcium currents, which accounts for the reduction in channel activity. The inhibitory action of AEA on these CaV channels is independent of CB1/CB2 receptors and G proteins, and the inhibition is preserved in the excised inside-out patch configuration, implying a direct effect; furthermore, AEA has little effect on membrane capacitance, reflecting that its effects are unlikely to be attributable to simple membrane-disrupting mechanisms. Accordingly, it is postulated that AEA may directly target and modulate T-type calcium channels to elicit some of its pharmacological and physiological effects. High voltage-activated, dihydropyridine-sensitive L-type calcium channels are involved in excitation-contraction coupling in skeletal, smooth, and cardiac myocytes as well as the release of neurotransmitters and hormones from neurons and endocrine cells. It has been demonstrated via biochemical assays that AEA is able to displace specific binding of L-type calcium channel antagonists to rabbit skeletal muscle membranes in a concentration-dependent manner, with an IC50 around 4–30 μM, supporting a direct interaction between AEA and L-type calcium channels. Furthermore, AEA suppresses the whole-cell currents of both native NaV and L-type calcium channels in rat ventricular myocytes in a voltage- and pertussis toxin-independent manner, indicating that the inhibitory effect of AEA does not require activation of Gi/o protein-coupled receptors like CB1 and CB2 receptors. Direct inhibition of NaV and L-type CaV channel function may account for some of the negative inotropic and antiarrhythmic effects of AEA in ventricular myocytes. GlyRs belong to the Cys-loop, ligand-gated ion channel superfamily, which comprises both cationic receptors such as nAChRs and 5-HT3Rs and anionic receptors such as γ-aminobutyric acid type A receptors (GABAARs) and GlyRs. GlyRs are distributed in brain regions involved in pain transmission and reward, and they are thought to play a role in analgesia and drug addiction. AEA, at pharmacologically relevant concentrations, directly potentiates the function of recombinant GlyRs expressed in oocytes and native GlyRs present in acutely isolated rat ventral tegmental area (VTA) neurons through an allosteric, CB1 receptor-independent mechanism.

The stimulatory effect of AEA on GlyRs is selective, as neither the GABA-activated current in VTA neurons nor the recombinant α2β3γ2 GABAAR current in oocytes is affected by AEA treatment. The homomeric α7 receptor is one of the most abundant nAChRs in the nervous system, and it is involved in pain transmission, neurodegenerative diseases, and drug abuse. The endocannabinoid AEA has been shown to inhibit nicotine-induced currents in Xenopus oocytes expressing cloned α7 nAChRs; the inhibition is concentration-dependent, with an IC50 of 229.7 nM, and noncompetitive. In addition, pharmacological approaches using specific inhibitors uncovered that the inhibitory effect of AEA on α7 nAChRs does not require CB receptor activation, G protein signaling, AEA metabolism, or AEA membrane transport, suggesting that AEA inhibits the function of neuronal α7 nAChRs expressed in Xenopus oocytes via direct interactions with the channel. AEA is structurally similar to other fatty acids such as arachidonic acid and prostaglandins; it is possible that AEA and other fatty acids capable of modulating nAChRs share some common mechanisms of action to control channel function. It is well established that potassium channels are important players in controlling the duration, frequency, and shape of action potentials, thereby controlling cell excitability. As described above, the endocannabinoid AEA is capable of exerting CB1/CB2 receptor-independent functional modulation of a variety of potassium channels, including native BK, Ito, delayed rectifier, KATP and TASK-1 channels, as well as cloned TASK-1, Kv4.3, Kv3.1, Kv1.2, and Kv1.5 channels. Moreover, native neuronal IA, pancreatic β-cell delayed rectifier and KATP, and atrial myocardial delayed rectifier potassium channels are subject to modulation by another endocannabinoid, 2-AG, also in a CB receptor-independent manner. Likewise, for TRPV1 channels, ligand-gated ion channels such as cloned and native GlyRs, cloned α7 nAChRs and native 5-HT3Rs, plus voltage-gated ion channels such as native NaV, native L-type CaV, and native and cloned T-type CaV channels, the functional modulation elicited by AEA does not require activation of CB receptors. Interestingly, in the majority of studies reviewed in this article, the CB receptor-independent modulatory effects of AEA are induced only when endocannabinoids are introduced extracellularly to the ion channel targets, which include heterologously expressed cloned Kv1.2, hKv1.5, Kv3.1, and hKv4.3 channels and native delayed rectifier Kv channels in aortic vascular smooth muscle cells and cortical astrocytes, whereas in several reports endocannabinoids only alter ion channel function when administered at the cytoplasmic side of the membrane. These observations imply the presence of distinct interaction sites or mechanisms of action, which may be attributable to differences in the types of ion channels or endocannabinoids investigated, the cell models/cellular environments to which the channels are exposed, or the experimental protocols adopted. On the other hand, although the membrane environment seems to be critical for the regulation of signal transduction pathways triggered by G protein-coupled receptors like CB1 receptors, current evidence does not support an involvement of changes in membrane fluidity or lipid bilayer properties in mediating the CB receptor-independent actions of AEA on ion channels.
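For concreteness, the concentration-dependent α7 nAChR block described above (IC50 ≈ 229.7 nM) follows a standard Hill relationship; the R sketch below reproduces that shape for illustration only, with a Hill coefficient of 1 assumed rather than taken from the original report.

```r
# Illustrative Hill-equation curve for AEA inhibition of α7 nAChR currents.
# ic50 is the reported value; nH = 1 is an assumption, not the published fit.
inhibition <- function(conc_nM, ic50 = 229.7, nH = 1) {
  conc_nM^nH / (conc_nM^nH + ic50^nH)  # fractional inhibition, 0 to 1
}
curve(inhibition(x), from = 1, to = 1e4, log = "x",
      xlab = "[AEA] (nM)", ylab = "Fraction of current inhibited")
```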

It is also worth noting that, unlike 2-AG, which is entirely localized in lipid rafts in dorsal root ganglion cells, most AEA is found in non-lipid raft fractions of the membrane. It is therefore less likely that changes in membrane fluidity serve as a primary mechanism of action responsible for AEA's CB receptor-independent effects. Lipid signals like endocannabinoids and structurally related fatty acids may modify gating of voltage-gated ion channels through a direct action on the channel via a membrane lipid interaction. A model of direct interactions between ion channel proteins and endocannabinoids is further supported by the identification of specific residues in several channel proteins crucial for the CB receptor-independent modulatory actions exerted by endocannabinoids. For example, AEA may directly interact with, and in turn be stabilized by, a ring of hydrophobic residues formed by valine 505 and isoleucine 508 in the S6 domain around the ion conduction path of the hKv1.5 channel, thereby plugging the intracellular channel vestibule as a high-potency open-channel blocker and suppressing channel function. Molecular dynamics simulations have also helped reveal novel interactions between AEA and the TRPV1 channel at the molecular level, suggesting that AEA enters and interacts with TRPV1 in a location between the S1-S4 domains of the channel via the lipid bilayer.

Brief interventions have empirical support for acutely reducing alcohol use among non-treatment-seeking heavy drinkers. For example, randomized clinical trials of brief interventions have found favorable results among heavy drinkers reached through primary care, trauma centers, and emergency departments. Brief interventions have also shown effectiveness in reducing alcohol use in non-medical settings among a young adult college population. Given this sizable evidence base, there is considerable interest in understanding the underlying mechanisms toward optimizing this approach. Neuroimaging techniques allow for the examination of the neurobiological effects underlying behavioral interventions, probing brain systems putatively involved in clinical response to treatment. To date, one study has examined the effect of a motivational interviewing (MI)-based intervention on the neural substrates of alcohol reward. In this study, neural response to alcohol cues was evaluated while individuals were exposed to change talk and counter change talk, which are thought to underlie motivation changes during psychosocial intervention. The authors report activation in reward processing areas following counter change talk, which was not present following exposure to change talk. Feldstein Ewing and colleagues have also probed the origin of change talk in order to better understand the neural underpinnings of change language. In this study, binge drinkers were presented with self-generated and experimenter-selected change and sustain talk. Self-generated change talk and sustain talk resulted in greater activation in regions associated with introspection, including the inferior frontal gyrus and insula, compared to experimenter-elicited client language. These studies employed an active ingredient of MI within the structure of the fMRI task, thus allowing for a more proximal test of treatment effects. Neuroimaging has also been used to explore the effect of psychological interventions, specifically those focused on alcohol motivation, on changes in brain activation. For example, cue-exposure extinction training, a treatment designed to prevent return to use by decreasing conditioned responses to alcohol cue stimuli through repeated exposure to cues without paired reward, has also been evaluated using neuroimaging.

Examining the variability in recognition over time within this study is still meaningful

In summary, considering the HIV literature, the middle-aging literature, and the finding that episodic memory was associated with prefrontal structures rather than medial temporal lobe structures, episodic memory in middle-aged PWH is more likely related to frontally mediated etiologies. This could indicate that memory in middle-aged PWH is associated with HIV disease. Notably, this association was present in PWH on ART without a detectable viral load, showing that it holds even in PWH who are virally suppressed. However, it is of course difficult to differentiate between the effect of HIV itself, the effect of comorbid conditions, many of which may be increased in PWH due to the downstream effects of HIV and ART, or a combination of the two. The medial temporal lobe was not associated with episodic memory, which overall may indicate that at this age range, preclinical AD is not likely a contributor to memory functioning. However, the middle-aging literature does not provide a good estimate of when, on average, to expect to start detecting differences, even small ones, in memory and medial temporal structures in those on an AD trajectory; therefore, it is possible that this group is too young for any preclinical AD effect to be detectable. This is further complicated because the middle-aging literature is demographically different from the CHARTER sample, highlighting the need for more diverse aging studies. Additionally, this study did not specifically examine differences in the associations between memory and brain structures by AD risk; thus, future research should examine memory associations by AD risk, particularly given that APOE status was associated with delayed recall. Relatedly, while these findings show that on average this group does not exhibit associations between memory and the medial temporal lobe or early signs of preclinical AD, this does not mean that no participants are on an AD trajectory. In fact, given base rates, some of this group will eventually develop AD. First, the multi-level models examining the cross-level interactions between time and medial temporal structures with dichotomous recognition as the outcome did not converge. This analysis would have examined whether baseline medial temporal lobe structures are associated with a greater likelihood of impaired recognition over time. Given that the models did not converge, this indicates the models were overparameterized and not supported by the data.
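For readers unfamiliar with this model class, a hedged R sketch of the kind of logistic multi-level model described follows; the exact specification and software are not stated above, and all names are placeholders.

```r
# Logistic multi-level model: impaired recognition (0/1) over time, with a
# cross-level interaction between years since baseline and a baseline medial
# temporal measure. With few impaired cases, such models commonly fail to
# converge or yield singular random-effects fits.
library(lme4)

fit <- glmer(recog_impaired ~ years * mtl_baseline + (1 + years | id),
             family = binomial, data = long_df)
isSingular(fit)  # a typical diagnostic before simplifying to (1 | id)
```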

This was possibly affected by the modest sample size, with a particularly small group of participants with impaired recognition at baseline. For example, of the 12 participants that were impaired at baseline, only two remained impaired. Moreover, of those that were not impaired at baseline but were impaired at some point in time, most reverted back to unimpaired at subsequent visits. Only four participants remained impaired in recognition over time, although with limited follow-up. There are no data on why these participants do not have additional follow-up, and thus it is hard to make any definitive conclusion as to whether consistently impaired recognition is a risk factor for negative outcomes. However, it would certainly be warranted to examine whether consistent recognition impairment is associated with negative outcomes in a larger group of middle-aged and older PWH. For example, this small group of participants that were consistently impaired in recognition memory could represent those that are progressively declining and are on more of an AD trajectory. Moreover, a better understanding of how those that are consistently impaired differ from those that revert to unimpaired recognition would be beneficial. There are multiple reasons that may explain why recognition impairment status was variable over time. First, HIV-associated neurocognitive impairments are known to fluctuate over time. For example, in the CHARTER study, 17% of the sample improved over time. Therefore, this could simply reflect the heterogeneous and fluctuating course of HAND over time. Second, recognition is sometimes used as an embedded performance validity measure. While all participants were administered a standalone performance validity test at the beginning of the neuropsychological evaluation to verify credible test performance, effort can fluctuate throughout testing. That said, none of the participants at baseline were below the proposed cut-off of ≤5 for HVLT-R recognition, making this explanation less likely. Lastly, this variability over time may be in part due to the psychometric properties of the HVLT-R and the BVMT-R.

Recognition scores for both the BVMT-R and the HVLT-R are skewed, with known ceiling effects, meaning that there is limited variability in this variable. Therefore, a one- or two-point difference can result in large differences in the normative score. Moreover, there are known modest interform differences on HVLT-R recognition. Additionally, while recognition on the HVLT-R and BVMT-R shows adequate test-retest stability coefficients, it is less reliable than other test measures such as total learning or delayed recall. Next, longitudinal delayed recall was examined. Most notably, there was little decline in delayed recall over time; the delayed recall T-score decreased by 0.041 per year. Additionally, there was little variability in this slope, given that the standard deviation of the slope was 0.678. None of the cross-level interactions between medial temporal lobe structures and years since baseline were significant, indicating that medial temporal lobe structures at baseline were not associated with change in delayed recall. However, given that there was little variability in delayed recall over time, this was not surprising. As discussed in the introduction, worse baseline medial temporal lobe structure, particularly of the hippocampus and entorhinal cortex, has been associated with an increased risk of future AD, MCI, and decline in cognition in older adults without HIV. This relationship is less understood in middle age. One study by Gorbach et al. found that hippocampal atrophy was associated with a decline in episodic memory in adults over the age of 65 but not in middle-aged adults between the ages of 55 and 60. As highlighted above, it is possible that the cohort from the current study is too young to expect to see associations between medial temporal lobe structures and longitudinal memory. Importantly, the current study only examined cross-sectional structural MRI; therefore, we cannot assume that smaller or thinner medial temporal lobe structures are indicative of atrophy. Additionally, this study does not have an HIV-negative comparison group and did not use normatively adjusted morphometric values, so it is unclear if participants in this cohort deviate from average, although accelerated brain atrophy has been demonstrated in PWH previously.
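The delayed recall growth model just summarized (fixed-effect slope of about -0.041 T-score points per year, slope SD of 0.678) can be sketched in R as follows; the specification is an assumption consistent with the description above, and names are placeholders.

```r
# Linear growth model for delayed recall T-scores, with a cross-level
# interaction testing whether a baseline medial temporal measure moderates
# the slope of change over years since baseline.
library(lme4)

fit <- lmer(delayed_recall_T ~ years * mtl_baseline + (1 + years | id),
            data = long_df)
summary(fit)  # fixed 'years' effect ~ mean slope; 'years:mtl_baseline' is
              # the cross-level interaction reported as non-significant
```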

Therefore, research examining changes in the medial temporal lobe, and how that change relates to episodic memory, particularly recognition memory, in persons with and without HIV over the age of 65 is needed. This research may help to better understand whether medial temporal lobe structures are associated with the risk of an AD trajectory and whether these associations differ by HIV serostatus. While there may be some individuals in this group experiencing objective decline, on average, in this group of middle-aged PWH we did not observe a decline in delayed recall T-scores over time. These T-scores are age-corrected, so the raw scores on the tests may be declining, but they are not declining at a rate greater than what would be expected for age. Additionally, these T-scores also account for practice effects, which if unaccounted for can mask decline, although the best method of practice-effect correction is still debated. Similar results showing stable cognition over time were found in a study by Saloner et al. in a larger sample of CHARTER participants aged 50 and over. That study employed growth mixture modeling, and none of the three latent classes demonstrated a decline in global T-score over time. However, other studies of PWH over the age of 50 have observed a greater than expected effect of aging on episodic memory, and a recent systematic review found accelerated neurocognitive aging in 75% of longitudinal studies in PWH. Some researchers have questioned whether accelerated aging could be due to a neurodegenerative cause such as AD, given the high prevalence of risk factors for AD in PWH such as chronic inflammation, increased cardiometabolic comorbidities, and lower brain reserve. While emerging studies have demonstrated some possible ways to disentangle HAND and aMCI, it remains unclear if PWH are at increased risk of AD or if a neurodegenerative etiology could, at least in part, account for some of the observed accelerated aging. For example, Milanini et al. showed a low frequency of amyloid positivity, measured via PET imaging, among virally suppressed PWH over the age of 60, and the rates of amyloid positivity were similar to published rates among an age-matched seronegative sample. However, a recent study among Medicare enrollees did find a higher prevalence of AD and related disorders among PWH. In summary, this aim showed that recognition was variable over time. While amnestic decline could not specifically be tested, given that the recognition models did not converge, these analyses indicated that within this group, medial temporal lobe integrity was not associated with a decline in delayed recall over time. Additionally, delayed recall only marginally declined over time, thus adding to the mixed literature examining episodic memory in middle-aged and older PWH.

Overall, this study did not detect clear signs of preclinical AD in this group, as delayed recall did not change over time and baseline measures of medial temporal lobe integrity were not associated with memory over time, as has been seen in HIV-negative older adults. However, it is not clear if these associations would be expected in a middle-aged cohort of PWH, due to a lack of literature on this topic in middle-aged adults. Therefore, it would be beneficial to re-examine this analysis in an older cohort of PWH. The last aim of this study was to examine whether the medial temporal lobe mediates a relationship between peripheral inflammation and memory. It was hypothesized that medial temporal lobe structures would mediate a relationship between peripheral inflammation and episodic memory. Five peripheral biomarkers of inflammation were examined, chosen because they have been associated with cognition in AD and HIV. In this mediation model, the association between peripheral biomarkers of inflammation and medial temporal lobe structures was also explored, and the relationship between medial temporal lobe structures and memory was also reported, although this second relationship was already explored in aim 1. First, the mediation models examining recognition indicated poor model fit. Therefore, the relationship between the five plasma biomarkers of inflammation and recognition was examined instead. Greater levels of plasma CRP were associated with lower odds of having impaired recognition. None of the other plasma biomarkers of inflammation were associated with recognition impairment. These findings are generally not in line with the HAND, middle-aging, or older adult literature. Aging and HIV studies have found that greater concentrations of these plasma biomarkers of inflammation are associated with greater risk of HAND, worse memory, and an increased risk of future development of MCI or AD. However, many of these studies find only weak associations, and they do not examine recognition memory. The current study had a very small sample of PWH with impaired recognition; thus, it is possible that the CRP finding is spurious, and it should not be over-interpreted. Therefore, these analyses should be re-examined in a larger, more generalizable sample. Next, a single-mediator model was used to examine whether medial temporal lobe structures mediate the relationship between plasma biomarkers of inflammation and delayed recall. In the entire sample, none of the plasma biomarkers of inflammation were significantly associated with any of the medial temporal lobe structures, there were no significant direct effects between the plasma biomarkers of inflammation and delayed recall, and no mediated effect was established. As stated above, the lack of association between inflammation and delayed recall is somewhat surprising, given that the association between inflammation and worse cognition has been demonstrated in the HAND and aging literatures, although the effect sizes are often small and the middle-aging literature is limited. Additionally, some of the peripheral inflammatory markers examined in this study have been associated with medial temporal lobe integrity and function in older adults, but the association between inflammation and the medial temporal lobe is much less studied in mid-life and in PWH.
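A hedged sketch of a single-mediator model of this form, written in R with lavaan (the study's actual software and estimator are not specified above; variable names are placeholders):

```r
# Single-mediator model: peripheral inflammation -> medial temporal lobe
# structure -> delayed recall, with a bootstrapped indirect effect.
library(lavaan)

model <- '
  mtl_structure  ~ a * crp                      # path a
  delayed_recall ~ b * mtl_structure + c * crp  # path b and direct effect c
  ab := a * b                                   # mediated (indirect) effect
'
fit <- sem(model, data = df, se = "bootstrap", bootstrap = 1000)
parameterEstimates(fit, boot.ci.type = "perc")
```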

These inflammatory markers were not significantly associated with change in any other cognitive domain

To clarify, in the HIV literature, “older” PWH usually refers to PWH aged 50 and over; however, in the aging literature, “older” usually refers to people aged 65 and older, and “middle-age” refers to people aged 45 to 64. To rectify this discrepancy in terminology, the aging literature terminology will be used when discussing both the HIV literature and the aging literature. HAND remains prevalent in the ART era. While the pathogenesis of HAND is not entirely clear, HAND is thought to be the result of the neurotoxic cascade initiated by HIV. The majority of neurocognitive deficits associated with HAND are in the mild range and do not significantly impact everyday functioning, and executive functioning, learning, and memory deficits are most common. Importantly, longitudinal studies have shown that HAND is usually non-progressive. AD is a neurodegenerative disease associated with progressive cognitive and functional impairment. AD is the most common cause of dementia, and it affects 10% of persons without HIV over the age of 65 and 17% of those between the ages of 75 and 84. AD is characterized by the accumulation of amyloid plaques and tau tangles in the brain, which start in the medial temporal lobe and result in initial atrophy of the medial temporal lobe and, later, more widespread atrophy. These brain changes start years to decades before clinical symptoms appear. On neuropsychological testing, AD typically presents initially with impairment in memory, which progresses to global impairment and loss of independent functioning. Mild cognitive impairment (MCI) is defined as the transitional stage between cognitively normal and major neurocognitive impairment, in which persons have observable cognitive deficits that are not yet significantly impacting everyday functioning. MCI can be further divided into amnestic (aMCI) and non-amnestic sub-types, with aMCI being more associated with AD. While participants are often dichotomized as “MCI” or “cognitively unimpaired,” cognitive decline associated with AD is insidious; therefore, even milder deficits in memory in participants classified as cognitively unimpaired are associated with underlying AD pathology such as amyloid accumulation or medial temporal lobe atrophy. Due to the overlap in cognitive presentation, middle-aged and older PWH are at risk of erroneously being classified as HAND, due to HIV diagnosis, when they may instead be on an AD trajectory.

Given that aMCI is associated with progressive cognitive and functional impairment, as opposed to HAND, which is more stable, it is imperative that the etiology of the cognitive impairment is correctly identified. While there is currently no cure for AD, a misdiagnosis of HAND when a person with HIV has aMCI limits the opportunity for early intervention, when interventions may be most beneficial. For example, early identification of AD allows more time for life planning and the acquisition of compensation strategies, which may prolong independent functioning and, by extension, sustain quality of life. Furthermore, accurate diagnosis is important to allay concerns in PWH without indication of an AD trajectory. It is hypothesized that PWH may be at increased risk of AD due to the compounding effects of HIV and aging on the brain, chronic inflammation despite viral suppression, increased prevalence of vascular and metabolic risk factors, and potentially common pathophysiological pathways. While little work has been done in this space, several recent case reports on AD in PWH have highlighted the risk of delayed diagnosis, detailed complications in determining the etiology of cognitive impairment, and underscored the clinical need for tools to differentiate HAND and aMCI. Additionally, there is some evidence from the HIV and aging literature to suggest that memory may be particularly affected in older PWH; however, most of these studies do not consider other etiologies that may be contributing to the observed findings. For example, Goodkin et al. found that there was a greater than expected effect of aging on episodic memory in PWH aged 50 and over, and Seider et al. found that verbal memory declines more rapidly with age in PWH as compared to HIV-negative comparison participants. Moreover, in a recent study using latent class analysis to examine a group of PWH aged 50 and over, we found that three classes emerged: a multidomain impaired group, a learning and memory impaired group, and a cognitively unimpaired group. Due to the medial temporal lobe involvement in aMCI, the cognitive profile is described as “amnestic,” with encoding, storage, and rapid-forgetting deficits observed as poor learning, recall, and recognition on memory tests. Conversely, HIV particularly impacts fronto-striatal systems, and the fronto-striatal involvement associated with HAND accounts for a “subcortical” cognitive presentation. Thus, memory deficits in HAND are characterized by relatively normal memory storage and retention but impaired encoding and retrieval, resulting in poor learning and delayed recall but intact recognition.

This “subcortical” presentation in HAND has been observed even as PWH age. Therefore, recognition may be more indicative of aMCI than HAND and a useful tool for differential diagnosis. However, because recognition has historically been spared in HAND and only recently have PWH been reaching the ages at which they may develop aMCI/AD, there is little research examining recognition deficits in the context of HIV. Of note, deficits in other domains are unlikely to aid in differential diagnosis without further research. For example, while aMCI is characterized by memory deficits, other deficits, such as executive dysfunction, are also quite common in aMCI and AD. Therefore, the presence of executive functioning deficits, which are common in HAND, could be indicative of HAND, aMCI, or a mixed HAND and aMCI profile. Moreover, biomarkers may aid in differential diagnosis in the future; however, elevated amyloid beta is observed in HIV, so more research is needed in order for biomarkers to be beneficial in the differential diagnosis of HAND and aMCI. Our research group at the HIV Neurobehavioral Research Program has begun to examine neuropsychological methods to identify aMCI among PWH using adapted Jak/Bondi MCI criteria. The Jak/Bondi criteria are an empirically based MCI classification approach that has been shown to have greater associations with AD biomarkers and to identify more participants who progress to dementia than traditional MCI diagnostic approaches. Our group utilized the basis of the Jak/Bondi criteria and adapted it to capitalize on the neuropsychological differences between HAND and aMCI. Thus, aMCI was defined as impairment on at least two memory tests, with the adaptation that at least one impaired test be a test of recognition. In a sample of 80 PWH from the National NeuroAIDS Tissue Consortium with neuropathologically characterized Aβ42 and neuropsychological testing within a year of death, 40 participants met the adapted criteria for aMCI. Twenty-nine of the participants with aMCI were also classified with HAND. The aMCI group was 3.5 times more likely to have Aβ42 plaques present. Conversely, when the same sample was split into HAND and no-HAND groups, the presence of Aβ42 plaques was not significantly associated with the HAND group. In sum, these findings provide preliminary data to further support that aMCI may go undetected in a large proportion of PWH with HAND, and these PWH may be misclassified or have a mixed HAND and aMCI profile. Second, these preliminary analyses also suggest that recognition deficits in older PWH are sensitive to AD pathology.
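The adapted classification rule described above is simple enough to state as code. The following R toy illustration assumes T-scores and a T < 40 impairment cutoff (a common Jak/Bondi-style threshold, assumed here rather than quoted from the study); test names are illustrative.

```r
# Adapted Jak/Bondi aMCI rule: impairment on >= 2 memory tests, at least one
# of which is a recognition measure. Test names and cutoff are illustrative.
classify_amci <- function(scores, cutoff = 40) {
  memory_tests <- c("hvlt_recall", "bvmt_recall", "hvlt_recog", "bvmt_recog")
  recog_tests  <- c("hvlt_recog", "bvmt_recog")
  impaired <- memory_tests[scores[memory_tests] < cutoff]
  length(impaired) >= 2 && any(impaired %in% recog_tests)
}

classify_amci(c(hvlt_recall = 35, bvmt_recall = 42,
                hvlt_recog = 38, bvmt_recog = 45))  # TRUE
```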

Magnetic resonance imaging has shed light on brain changes associated with aMCI and AD and is increasingly used in the clinical assessment of suspected AD. Medial temporal lobe atrophy is a core feature of aMCI/AD and has been shown to correlate with disease progression and predict progression from cognitively normal to aMCI. However, AD is also associated with more widespread cortical and subcortical atrophy and white matter abnormalities, particularly as the disease progresses. While neuroimaging has been used extensively to study aging and AD, most of these neuroimaging studies exclude PWH. Consequently, it is unclear if aging/AD research is generalizable to older PWH. HIV has historically been associated with early changes to fronto-striatal circuits, although recent neuroimaging studies also report cortical atrophy. Similarly, HAND has been associated with fronto-striatal circuits and, in more recent years, also with more cortical structures. Neuroimaging studies have examined neuroanatomical correlates of delayed recall as well as the effect of age on the brain within the context of HIV. Studies comparing PWH with HAND and HIV-negative participants with MCI or AD have shown that hippocampal volumes were able to discern HAND from MCI/AD. Additionally, within the context of HIV, decline in memory has been associated with hippocampal atrophy. However, there are notable limitations to the current literature. For example, most studies have been couched in the context of HAND, are not aimed at examining aMCI within the context of HIV, and do not consider other etiologies. Additionally, several neuroimaging studies examining the effect of aging in PWH have samples with mean ages in the late 30s or early 40s, which is likely before the initiation of AD pathology. Moreover, memory recognition, which could improve differentiation of HAND and aMCI, was not examined in these studies. Both HIV and aMCI are associated with chronic, low-grade inflammation. As such, inflammation may be one biological mechanism that puts PWH at greater risk of aMCI. Peripheral inflammatory markers can cross the blood-brain barrier, and there is mounting evidence to support the hypothesis that chronic inflammation exacerbates both Aβ42 and p-tau pathology and plays a role in the pathogenesis of AD. There is ample evidence linking increased inflammation to brain atrophy, cognition, and cognitive decline in late life, with emerging evidence that this link is present even in midlife. Chronic inflammation is also present in PWH despite viral suppression and is hypothesized to contribute to and exacerbate HAND. Due to this overlap, inflammation may be one factor that also puts PWH at greater risk of aMCI/AD. While the literature has highlighted the need to investigate this association, little research currently exists. Determining how inflammation impacts brain integrity and cognition in middle-aged PWH could have great implications for our overall understanding of the role of inflammation in AD and for the development of early intervention strategies to lower the risk of AD in PWH. I have begun to examine the relationship between inflammation and change in memory. These preliminary analyses included 57 PWH aged 50 and older with peripheral inflammatory markers and neuropsychological testing at baseline and at 1-year follow-up. Overall, I found that baseline concentrations of inflammatory biomarkers were not associated with baseline memory performance.
However, using multi-variable linear regressions, IL-6 and TNF-α were associated with decline in delayed recall, and greater baseline concentrations of CCL2 were associated with decline in recognition. Overall, these findings support the hypothesis that inflammatory markers may be related to cognitive changes associated with abnormal memory decline. As AD drug trials targeting amyloid continue to fail, there is increased focus on repositioning current drugs, such as anti-inflammatory drugs, to reduce the risk of AD. Epidemiological studies have shown that persons taking anti-inflammatory drugs for diseases such as rheumatoid arthritis had a reduced risk of developing AD. Moreover, small randomized controlled trials examining anti-inflammatory drugs such as TNF-α inhibitors, though preliminary, have yielded encouraging results. If larger studies show that anti-inflammatory drugs can lower the risk of AD, PWH may particularly benefit. Brain changes associated with future cognitive decline are evident in midlife, several years before cognitive impairment in aMCI and AD. Additionally, longitudinal research has shown that more subtle differences in episodic memory in midlife are associated with a decline in memory years later. Moreover, Jak et al. found that midlife memory performance is associated with hippocampal atrophy. As a result, there has been a shift in the aging field to characterize and identify middle-aged adults in the preclinical phase of AD, rather than primarily focusing on elderly cohorts in which symptoms and pathology are already present. Furthermore, there is a growing literature suggesting that midlife risk factors are associated with future cognitive decline, suggesting that midlife may be a critical time point when some interventions may be efficacious in augmenting cognitive trajectories. In the HIV literature, most aging research has focused on PWH in midlife.
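A minimal R sketch of the preliminary multi-variable regressions described above (the exact covariate set is not stated; all names are placeholders):

```r
# Baseline inflammatory markers predicting 1-year change in delayed recall.
# 'df' is an assumed one-row-per-participant data frame (n = 57).
df$recall_change <- df$recall_followup - df$recall_baseline

fit <- lm(recall_change ~ il6 + tnf_alpha + ccl2 + age + education, data = df)
summary(fit)
```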

Elevated levels of BI may contribute to risk for both anxiety disorders and substance use disorders

These high levels of sensitivity to uncertain threat in individuals with alcohol use disorder are also positively associated with self-reported coping motives for use. To further explore BI's potential role in risk for, or protection against, substance use, studies have examined the effects of BIS and BAS levels on substance use outcomes. These studies have focused on undergraduate populations and yielded mixed results, with some studies showing no association between BIS levels and substance use, others showing a positive association between BIS levels and substance use problems, and still others showing a positive association between BIS levels and substance use but only at high BAS levels. Given these conflicting results, Morris et al. used a cross-sectional design to examine whether BIS and BAS were indirectly associated with alcohol problems through coping and conformity motives among undergraduate students. Results indicated that those high in BIS levels were more likely to experience alcohol problems due to greater coping and conformity motives for use. Importantly, this finding was independent of levels of BAS, and high BAS levels only further strengthened these relationships. Taken together, these results highlight BI's nuanced pathways toward high or low risk for substance use and demonstrate the need to investigate additional factors potentially contributing to the relationships between BI and substance use. Ethnicity may be one such important moderator of the relationships between BI, anxiety, and substance use. Hispanic/Latinx (H/L) youth have consistently displayed increased rates of anxiety symptoms, anxiety disorders, and initial rates of substance use when compared to their non-H/L peers. The greater frequency and intensity with which H/L youth experience threats, including increased exposure to crime, community violence, chronic stress, and racial discrimination, may heighten levels of BI in H/L youth. In fact, H/L adults have displayed increased attentional biases to threat as compared to non-H/L adults. Cultural values may also further impact BI's association with anxiety in H/L youth. Schneider and Gudiño showed a positive relationship between BI and anxiety symptoms in H/L adolescents, and that this relationship was strongest for those H/L adolescents reporting high levels of Latino cultural values.

More specifically, H/L youth may also experience increased anxiety due to heightened social stigma around mental illness in the H/L community and factors related to collectivist cultural values, immigration, and acculturation that, especially in combination, put H/L youth at increased risk compared to other racial/ethnic groups. It is possible that the combination of increased exposure to stressors and traumatic experiences, together with heightened social stigma and collectivist cultural values, may dissuade H/L youth from utilizing social support as a form of coping with their anxiety. Since high levels of BI may lead to alcohol problems through coping and conformity motives, and H/L youth may experience greater exposure to substance use given their high initial rates of use, H/L youth high in BI may be uniquely at risk for substance use. Therefore, H/L ethnicity may moderate the relationship between BI and substance use. However, it is presently unclear whether the strengths of the relationships between BI, anxiety, and substance use differ based on H/L ethnicity. Therefore, the present study prospectively investigated the relationships between BIS scale scores, anxiety, and substance use, and whether H/L ethnicity moderates those relationships, in youth from the Adolescent Brain Cognitive Development (ABCD) Study at baseline, 1-year follow-up, and 2-year follow-up. Logistic regressions were conducted using the "glm" function in R to evaluate the impact of baseline BIS scores, and the interaction between baseline BIS scores and ethnicity, on past-year substance use at 1-year follow-up and past-year substance use at 2-year follow-up. Linear regressions were conducted using the "lm" function in R to evaluate the impact of baseline BIS scores, and the interaction between baseline BIS scores and ethnicity, on 1-year follow-up and 2-year follow-up CBCL DSM-5 anxiety problems T-scores. All analyses were conducted in R version 4.2.3. While the majority of youth did not report substance use at baseline, 0.50% of H/L youth and 0.42% of non-H/L youth did endorse use at baseline. At 1-year follow-up, 0.22% of the sample endorsed any substance use, and 0.74% endorsed any substance use at 2-year follow-up.

Baseline past-year use days were dichotomized into no past-year substance use versus any past-year substance use and included as a covariate. Past-year substance use at baseline was included in both models in which past-year substance use at follow-up was an outcome. Past-year substance use at 1-year follow-up was also included as a dichotomous covariate in the model predicting past-year substance use at 2-year follow-up. To control for their effects on anxiety and substance use, all models included the following covariates: mean baseline BAS scores, race, sex, age, highest parental income, and highest parental education. BI has been shown to concurrently and prospectively predict anxiety, while findings on the association between BI and substance use have been mixed. It is possible that the relationship between BI and substance use varies by social and contextual factors; H/L youth in particular may show stronger relationships between BI, anxiety, and substance use. The present study evaluated the prospective relationships between BIS scores, anxiety, and substance use in youth across the 1- and 2-year follow-ups of the ABCD Study and whether these relationships differed by H/L ethnicity. Results indicated that baseline BIS scores prospectively and positively predicted anxiety symptoms at both 1- and 2-year follow-ups. The relationship between baseline BIS levels and follow-up anxiety did not differ by ethnicity. Baseline BIS levels also prospectively predicted increased likelihood of substance use at 2-year follow-up, but only for H/L youth and not for non-H/L youth. No main effects of, or interactions between, ethnicity and BIS scores were found on substance use at the 1-year follow-up. The results showing that baseline BIS scores prospectively and positively predicted anxiety symptoms across the follow-ups are consistent with prior literature on the relationship between BI and anxiety. While prior studies have shown that H/L youth report higher levels of anxiety than non-H/L youth, the present study did not find any ethnic differences in the strength of the relationship between BI and anxiety. It is possible that ethnic differences in anxiety depend on the measure of anxiety used: H/L youth are more likely to experience and report physiological symptoms of anxiety, and the CBCL DSM-5 anxiety problems scale may not best represent H/L youth's experience of anxiety.
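As a hedged sketch of the models described above, the following R code mirrors the stated specification (baseline BIS × ethnicity interaction plus the listed covariates); the data frame `abcd` and all column names are hypothetical placeholders, not the actual ABCD release variables.

```r
# Sketch under stated assumptions; `abcd` and its columns are hypothetical.
# Logistic model: past-year substance use at 2-year follow-up.
m_use_y2 <- glm(use_y2 ~ bis_base * hispanic + bas_base + race + sex + age +
                  parent_income + parent_edu + use_base + use_y1,
                data = abcd, family = binomial)

# Linear model: CBCL DSM-5 anxiety problems T-score at 2-year follow-up.
m_anx_y2 <- lm(anx_t_y2 ~ bis_base * hispanic + bas_base + race + sex + age +
                 parent_income + parent_edu,
               data = abcd)

summary(m_use_y2)  # the bis_base:hispanic coefficient tests moderation by ethnicity
summary(m_anx_y2)
```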

Additionally, other risk factors for anxiety may play a more important role in H/L youth's experience of anxiety and better explain the ethnic differences in anxiety in youth. For example, individual differences in sensitivity to uncertain threat may be a stronger predictor of anxiety, particularly for H/L youth. Results relating BIS scores to substance use varied across the follow-up years. The lack of association between BIS scores and likelihood of substance use at 1-year follow-up may reflect the fact that substance use at 1-year follow-up was infrequent and did not greatly increase from baseline. While overall substance use increased at 2-year follow-up in the sample, BIS scores predicted increased likelihood of substance use only in H/L youth. Similar to the results of Morris et al., these results were independent of levels of BAS scores. This finding is also consistent with results from Chen and Jacobson showing that H/L youth have the highest initial rates of substance use. H/L youth's increased exposure to crime, community violence, chronic stress, and racial discrimination may also increase coping and conformity motives, which in turn may increase the likelihood of substance use. It is possible that high BI, in addition to, or in conjunction with, additional risk factors such as increased access to substances, reduced parental monitoring, and association with deviant peers, may uniquely contribute to risk for early substance use in H/L youth. Further research is needed to understand whether and how such risk may change as rates of substance use change across development. The present study had several limitations and corresponding future directions. While the longitudinal nature of the ABCD Study allowed for the investigation of prospective and not just concurrent relationships between BIS scores, anxiety, and substance use, it is possible that the age of the sample at baseline and through the follow-ups is still too early to best capture these relationships. As BI is often first assessed in infancy or early childhood, the strength of the relationships between BI, substance use, and anxiety may vary across development and the lifespan. Relatedly, assessing BI via behavioral observation in infancy or early childhood may yield different results than the self-reported BIS scale scores utilized in the present investigation. Additionally, as rates of substance use increase across adolescence and early adulthood and use trajectories vary between ethnicities, the relationships between BIS scores, ethnicity, and substance use may vary based on the time point at which substance use is measured. These relationships may also vary across H/L youth and could differ based on factors such as time living in the US, social stigma, acculturation, language, nativity, and socioeconomic status. Lastly, the ABCD Study sample is not a clinical or treatment-seeking sample, and utilizing clinical samples may affect the strength of the relationships explored in the present study. Additional prospective studies are needed to understand how BIS scores and ethnicity relate to substance use as use increases in future follow-up years of the ABCD Study.

Additional research is also needed to understand how factors such as trauma exposure, stress, cultural values, discrimination, coping motives, and conformity motives may mediate the relationship between BI and substance use in H/L youth. In conclusion, high levels of BIS prospectively predict increased rates of anxiety symptoms in both H/L and non-H/L youth. However, BIS scores uniquely predict increased likelihood of substance use for H/L youth. Future studies are needed to further understand the mechanisms underlying the relationship between BI and substance use in H/L youth, providing a scientific basis to better inform prevention and intervention programs for the H/L community. Alcohol consumption accounts for 5.9% of deaths globally each year, or roughly 3.3 million. Although alcohol use alone represents a serious public health concern, high comorbidity rates have been observed at an epidemiological level between alcohol and nicotine use, such that 6.2 million adults in the United States endorsed both an alcohol use disorder and dependence on nicotine. Moreover, an individual is three times more likely to be a smoker if he or she is dependent on alcohol, and those who are dependent on nicotine are four times more likely to be dependent on alcohol. Given these statistics, it is evident that heavy-drinking smokers comprise a distinct sub-population of substance users that warrants unique investigation. Magnetic resonance imaging studies focused specifically on the effects of alcohol use on brain morphometry have investigated the relationship between drinking variables, such as lifetime duration of alcohol use or lifetime alcohol intake, and brain structure in current alcohol users. For example, Fein et al. found lifetime duration of alcohol use was negatively associated with total cortical gray matter volume in alcohol-dependent males, but not in light drinkers. Moreover, findings from Taki et al. suggest a significant negative association between lifetime alcohol intake and gray matter volume in the bilateral middle frontal gyri among non-alcohol-dependent Japanese men. A recent study, however, found no significant relationship between lifetime alcohol consumption and gray matter volumes in a sample of 367 non-alcohol-dependent individuals. Given these contrasting findings, it is uncertain whether quantity variables, such as lifetime alcohol intake or duration of alcohol use, account for many of the gray matter volume reductions observed with continued alcohol use. Various studies have implicated several different regions of gray matter atrophy in alcohol-dependent individuals, such as the thalamus, middle frontal gyrus, insula, cerebellum, anterior cingulate cortex (ACC), and several prefrontal cortical areas. Due to these heterogeneous results, a meta-analysis was conducted, which concluded that there were significant gray matter decreases in the ACC, left dorsal striatum/insula, right dorsal striatum/insula, and posterior cingulate cortex in alcohol-dependent users relative to healthy controls.

Rates of new infections due to sexual transmission among non-injection drug users are increasing

The results show that the majority of patients with opioid use disorder developed the disorder following the onset of chronic pain. A plausible explanation for some of these cases, although not directly demonstrated by the data collected, is iatrogenic causation via opioid medication prescriptions for pain. As hypothesized, this group, as well as the OUD First and Same Time groups, had greater rates of co-occurring psychiatric and medical conditions compared to the No Pain group. Patients with mental health and multiple pain problems often present with more physical and psychological distress, resulting in greater frequency of opioid prescribing in primary care practices. Some unexpected, though not totally surprising, differences emerged between the OUD First and Pain First groups. The OUD First group generally had higher rates of other substance use disorders, commensurate with rates in the No Pain group. This was not unanticipated, as both groups were early-identified addiction patients and may have more genetic and environmental predisposition to developing substance use disorders than did the Pain First group. The Pain First group had generally higher rates of co-occurring medical problems than did the OUD First group. Part of the explanation for this phenomenon may be related to age; the Pain First group was older and thus more prone to medical illness. It is also possible that the Pain First group had a longer duration of pain, which contributed to declining health status. Several limitations should be considered when interpreting the results of this study. This was a study using medical record data. As in any research that uses data from medical records, variation in physician documentation and health insurance requirements may introduce bias in the data that are captured. The clinical data were initially recorded for clinical rather than research purposes, so their accuracy may be lower than that of data collected for research. Further, as in other records-based research, we do not have information about patient diagnoses outside the system under study and therefore cannot verify that a new OUD diagnosis was truly the first, only that it was the first OUD diagnosis recorded in the healthcare system under study.

Participants were predominantly white patients living in the Los Angeles area of the United States, potentially limiting generalizability to patients in other regions. Our findings are dependent on the extent, accuracy, and validity of the data available in the EHR dataset. For example, because OUD diagnosis information was obtained from the EHR, we were not able to distinguish whether prescription or nonprescription opioids were used, or the route of administration. Both mislabeling of people who do not actually have OUD and under-recognition of true OUD diagnoses could affect the observed prevalence of OUD in the sample. Since addiction can be under-recognized in the EHR, it is possible that a subset of patients may not have been identified as having an OUD; thus, some patients in the Pain First group may actually belong in the OUD First group. Despite these limitations, the study revealed some important findings. As would be expected, the majority of patients in this general healthcare setting were white and had private insurance or the resources to pay for their healthcare, in contrast to Black and Hispanic patients without health insurance, who are more often treated in the public treatment system in Los Angeles. Nevertheless, comorbidities are common among patients in both settings. Somewhat surprising is that the rates of co-occurring chronic pain conditions and mental disorders appear even higher than most rates reported in the literature in connection with OUD, often heroin use disorder, treated in public settings. However, medical conditions among OUD patients treated in publicly funded programs are mostly based on self-report, whereas the present study allowed the delineation of specific rates of several major comorbid physical health and other disease diagnoses among OUD patients in a general medical setting. This study demonstrated that, regardless of demographic differences, OUD is associated with similarly high morbidity among patients in the private sector as in the public sector, which puts them at high risk for mortality. The Pain First group demonstrated the highest rates of physical and mental health problems.

As discussed earlier, opioid prescriptions for pain in some of these individuals could have increased the risk for OUD and related problems. On the other hand, because screening for drug use is not mandated in primary care and some other medical settings, OUD may not be recognized and treated until very late in the addiction course, exacerbating the negative consequences of the disorder. Regardless of the potential causes, expanding training for medical professionals to improve screening, early intervention, support, and monitoring could prevent some of the excess morbidity associated with OUD. Furthermore, implementation of recent CDC guidelines addressing opioid prescribing for chronic non-cancer pain may provide additional risk mitigation in patients with chronic pain prior to their development of OUD. Comorbid OUD and chronic pain complicates treatment decision-making, predicts poor outcomes, and increases healthcare costs. Similarly, studies of healthcare claims data reveal that the most challenging and costliest OUD patients had high rates of preexisting and concurrent medical comorbidities and mental health disorders. The present study reveals the type and extent of comorbidities among OUD patients, results that support improving clinical practice by addressing the complex treatment needs of this population. Finally, studies utilizing the EHR data of patient populations with substance use disorders are important in identifying the scope of the problem and the extent of medical, mental health, and substance use comorbidities that necessitate better models of assessment and coordinated care plans. The human immunodeficiency virus epidemic is shifting away from people who inject drugs (PWID), as most new cases of HIV in the U.S. are attributed to unsafe sexual practices. In 2014, sexual contact comprised 94% of new HIV infections in the U.S. Among PWID, sexual risk behaviors are independently associated with HIV transmission, and may be a larger factor in HIV transmission than injection behavior. Sexual risk behaviors that lead to the transmission of HIV and substance use are intertwined behaviors.

Stimulant use, in particular, is associated with greater sex risk behaviors, including having unprotected sex. Prescription medications, including sedatives and painkillers, are also associated with sexual risk behaviors. Moreover, moderate drinking and having an alcohol dependence diagnosis have been associated with an increased likelihood of having multiple sex partners. Having sex under the influence of drugs and/or alcohol enhances sexual risk behaviors and is more strongly associated with new HIV infections than is unprotected receptive anal intercourse with a partner of unknown HIV status. Substance use can negatively impact judgment and decision making, leading to sexual risk behaviors such as trading sex for drugs or money, unprotected sexual intercourse, and unprotected sex with multiple partners. Alcohol users are likely to seek immediate rewards without considering the long-term consequences while under the influence. It is important to consider the trajectories of substance use and sexual risk behaviors concurrently in order to decrease the transmission of HIV. Substance use disorder treatment, including methadone maintenance programs and outpatient drug-free settings, may be an important venue for prevention of sexual transmission. While enrollment in drug treatment reduces drug-related HIV risk behaviors, such as injection drug use, many substance users in treatment continue to engage in sex risk behaviors. As substance use is linked to sexual risk behaviors that can transmit HIV, it is possible that decreases in substance use may coincide with decreases in risk behaviors. Little is known about the temporal relationship between drug and alcohol use severity and high-risk sexual behaviors among individuals in substance use treatment. The current study extends past research by examining whether reductions in alcohol and drug use severity predicted reductions in sexual risk behaviors among men in SUD treatment who were followed for a six-month period. We hypothesized that decreases in drug and alcohol use at follow-up would coincide with decreases in sex risk behaviors. Participants were enrolled in a multi-site clinical trial of the National Institute on Drug Abuse Clinical Trials Network designed to test an experimental risk-reduction intervention, Real Men Are Safe, a five-session intervention that included motivation enhancement exercises and skills training, against a standard one-session HIV education intervention that taught HIV prevention skills. The intervention was delivered by counselors in SUD treatment programs and approved by the local Institutional Review Boards. Details about this study have been published elsewhere. In the parent study, participation was restricted to men in SUD treatment who were at least 18 years of age, reported engaging in unprotected vaginal or anal intercourse during the prior six months, were willing to be randomly assigned to one of two interventions and complete study assessments, and were able to speak and understand English. HIV status was not assessed as part of this study. Exclusion criteria included gross mental status impairment, defined as severe distractibility, incoherence, or retardation as measured by the Mini Mental Status Exam or clinician assessment, or having a primary sexual partner who was intending to become pregnant over the course of the trial.

All participants enrolled from methadone maintenance needed to be stabilized in treatment for at least 30 days to ensure the greatest likelihood that they had achieved a stable dose of methadone before starting the intervention groups. Participants were examined prior to receiving the clinical intervention and six months following the intervention. All participants provided informed consent prior to participating. Participants were recruited from seven methadone maintenance and seven outpatient drug-free treatment programs in the U.S. affiliated with the CTN to participate in a research study on HIV risk-reduction interventions. These modalities were chosen because the programs' counselors were trained to deliver the intervention. The treatment programs represented different geographic regions, population densities, and HIV prevalence rates. Programs were located in U.S. states that included California, Connecticut, Kentucky, New Mexico, New York, North Carolina, Ohio, Pennsylvania, South Carolina, Washington, and West Virginia; they treated patients in urban, suburban, and rural areas. Recruitment was accomplished through posters and fliers posted in clinic waiting rooms, announcements about the study to clinic patients at group therapy meetings, directly through a participant's individual counselor, and at clinic "open houses" designed to introduce the study to clinic patients. Most participants from the drug-free outpatient clinics were recruited close to treatment entry to reduce the possibility of early dropout. Assessments were conducted at baseline, prior to randomization, and six months later. Alcohol and drug use severity were assessed with the Addiction Severity Index-Lite, a standardized clinical interview that provides problem severity profiles in seven domains of functioning by providing an overview of problems related to substance use, in addition to days of use. This instrument has been used in many studies of drug- and alcohol-abusing populations, and its reliability and validity are well established. Composite scores for each problem domain are derived ranging from zero to one, with higher scores representing greater need for treatment. For the purposes of this study, only the composite scores for the alcohol and drug domains were analyzed. These composite scores are calculated based on the number of days of recent drug and alcohol use, problems arising from this use, and the desire for seeking treatment. We also examined days of recent use of alcohol to intoxication, cannabis, heroin, cocaine, sedatives/hypnotics/tranquilizers, and other opiates. In bivariate analysis, we compared sex risk behaviors, recent substance use, and ASI drug and alcohol composite scores at baseline and follow-up to monitor changes over time. As the ASI drug and alcohol composite scores did not meet the conditions of normality, we used Mann-Whitney U tests and Spearman correlations. Next, we compared sex risk behaviors and ASI composite scores at baseline and at six-month follow-up. Wilcoxon signed-rank tests were used for continuous data and categorical variables with more than two levels, and McNemar's tests were used for dichotomous categorical data. Multinomial multivariable logistic regression analysis was used to test the hypothesis that reductions in ASI alcohol and drug use severity composite scores would predict reductions in sexual risk behaviors.
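A minimal R sketch of the tests named above, assuming a data frame `df` with hypothetical baseline (`_t0`) and six-month (`_t6`) columns; none of these names come from the actual study dataset.

```r
# Hedged sketch; `df` and all column names are hypothetical placeholders.
wilcox.test(df$asi_drug_t0, df$asi_drug_t6, paired = TRUE)   # Wilcoxon signed-rank
mcnemar.test(table(df$unprotected_t0, df$unprotected_t6))    # dichotomous outcome
cor.test(df$asi_alc_t0, df$sex_risk_t0, method = "spearman") # Spearman correlation

# Multinomial model: do changes in ASI severity predict change in sex risk?
library(nnet)
m <- multinom(sex_risk_change ~ asi_alc_change + asi_drug_change + treatment_arm,
              data = df)
summary(m)
```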

A short description of the activity in question was included to help faculty decide on the point values assigned

During the first year, the requirements included only conference and module participation. The residency assessment requirement was subsequently enacted in the following year. Table 1 lists the final baseline education expectations required of faculty members. Before implementing these education requirements, all faculty members were notified of the consequences of not fulfilling expectations, which included ineligibility for any academic incentive and an inability to participate in the voluntary ARVU system. In May 2018, stage two began, which involved the creation of an ARVU system to encompass all other academic activities. It was decided that the ARVU system would be voluntary, but that participation required fulfilling the baseline education expectations outlined in stage one. As the first step of this stage, the vice chair for education created a list of preliminary activities to be included in the ARVU system, such as teaching, lecturing, publications, grants, committee memberships, and leadership positions. These additional activities were ones in which faculty were already participating that aligned with the academic mission of the department but had not been captured within the baseline education expectations, did not earn a clinical hours reduction from the department or institution, or were not an implicit part of a faculty member's role based on his or her leadership position. The rationale was that activities earning a clinical hours reduction were already being financially rewarded, and this system was designed to recognize activities not yet distinguished. An example is fellowship activities, which were not included because fellowship directors have a reduction in clinical hours to support their leadership role. After the initial list was assembled, it was shared with a select group of 11 leaders within the department, including residency leadership, undergraduate medical education leadership, fellowship directors, the research division, and the pediatric emergency medicine division. The participants were selected due to their various leadership roles in the department, their dedication to scholarly achievement in their own careers, and the high priority they placed on these activities within their respective divisions.

These qualifications placed these faculty members in a prime position to help generate a comprehensive list of activities relevant to each division. After multiple discussions and written communications using a modified Delphi method, the group reached consensus on the activities to be included. A unique part of this project was the third step, which included a survey created and analyzed using Qualtrics software and distributed to a group of 60 faculty members across the department. These faculty members were chosen out of a total of 123 because they were identified as department members who regularly participated in the activities on the list created by the leadership group. Because these faculty members were the most active in these activities, they were in the best position to review the list and fully evaluate each activity. Furthermore, because it was decided that the ARVU system would be voluntary, they were deemed the faculty most likely to be invested in and use the new system. Finally, one of the goals of this effort was to obtain faculty buy-in, as faculty were the most important stakeholders in this endeavor; this was achieved by giving them a voice and empowering them in the final steps of the project. The survey included all agreed-upon activities and asked faculty to rate each on a scale from one to four. The 11 faculty members who contributed to the final list of activities created these descriptions. Effort was defined by the time needed to commit to or prepare for a particular activity, the ongoing effort needed to sustain the activity if it involved a commitment longer than a single session, and whether the activity required a passive presence or more active participation. For example, activities that required sustained effort included grant involvement, committee membership, and leadership positions. As expected, some subjectivity was involved in the voting for various reasons, such as the activity being one in which the responding faculty member participated, or differing opinions regarding how much preparation time might be needed for activities such as a lecture.

To help reduce this bias, the survey was sent to many faculty members with different roles and responsibilities to obtain a consensus and to dilute idiosyncratic points of view. Furthermore, the chosen faculty members' knowledge of and dedication to each activity, along with the descriptions provided, helped further reduce bias in the points system. The survey also included free-text fields where faculty could input additional activities that they felt should be added to the list. Of the 60 faculty members surveyed, 49 responded and completed the survey in its entirety. The activities, ranked from highest to lowest based on mean score with standard deviations, are presented in Table 2. The standard deviation was less than one for all activities included in the survey. The mean score of each activity was translated into the final points to be awarded in the ARVU system; activities with higher means earned more points, and activities that were similar in description and mean score were assigned the same number of final points. The free-text responses were also reviewed, and these activities were added to the list and voted on by the faculty group to create the final list with points. We introduced the final list and point system at a faculty meeting prior to implementation, and after this final feedback round, we launched the system in December 2018. The next steps for the project included creating a database where faculty could log their completed activities. We created a Google form that listed all activities in the ARVU system, from which faculty members could select the activity in which they had participated. Each activity had an associated drop-down menu that asked for additional information, such as title, date, location, description, and proof of activity, with the ability to upload documents. We then created a dashboard in the analytics platform Tableau containing all activities. Statistics for the baseline education expectations automatically loaded into the dashboard and could not be edited by faculty members. The ARVU activities logged into the Google form also fed directly into the dashboard for display. The full dashboard displayed each faculty member's baseline education expectations, whether they had met requirements, the activities they had entered into the ARVU point system, and total points earned to date. Final points were earned after academic leadership reviewed, approved, and signed off on each submitted activity.
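A small R sketch of the mean-to-points translation described above; the `ratings` data frame, its columns, and the doubling rule are all hypothetical illustrations, not the department's actual Qualtrics export or scoring rule.

```r
# Hypothetical aggregation of 1-4 effort ratings into ARVU points.
library(dplyr)
point_table <- ratings %>%                 # assumed columns: activity, score
  group_by(activity) %>%
  summarise(mean_score = mean(score), sd_score = sd(score)) %>%
  arrange(desc(mean_score)) %>%
  mutate(arvu_points = round(mean_score * 2))  # one plausible mapping; higher mean, more points
```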

Each month, the system automatically e-mailed a link to each individual's dashboard, notifying faculty of how many points they had earned to date and of any participation deficiencies. The medical school requires a teaching portfolio for faculty seeking promotion on the scholar track. This portfolio requires faculty to document their achievements in the following categories: teaching effort, mentoring and advising, administration and leadership, committees, and teaching awards. All ARVU activities were reviewed and categorized based on the elements of the teaching portfolio. These activities not only appear as itemized entries with points, but are also grouped into the appropriate portfolio category and displayed on each individual faculty member's dashboard. This allowed each faculty member to see how much scholarship they had completed within each of the teaching portfolio categories and in which areas they were lacking and deserved more attention. This provided faculty with a readily accessible repository of activities that could be transferred directly into the correct category of their teaching portfolio, facilitating tracking of the activities on which one needed to focus for promotion. A total of 123 faculty members were expected to participate in the baseline education expectations. At the end of the academic year in June 2018, 107 faculty had met requirements. Failure was defined as not attending the required number of conferences per year or not participating in the module system. Of the 16 who did not meet expectations, 94% signed up for conference modules to participate in specific activities, but none of them met the overall required conference attendance. Of the deficient faculty, five worked full time at 28 or fewer hours, 10 were full time at more than 28 hours, and one was part time. Those who did not meet education expectations were notified and had their year-end AY 2017-18 financial incentive reduced to reflect this deficiency. We compared each faculty member's conference attendance in AY 2016-17 and AY 2017-18 to determine any changes after implementing the new expectations. Overall, faculty attended 21% more conference days after expectations were implemented compared to the prior year. Preliminary data for the following year, AY 2018-19, reveal that conference attendance increased by a further 15%. The number of resident assessments completed in AY 2017-18 among all faculty was 2837, compared to preliminary AY 2018-19 assessments of 4049, an increase of roughly 43% since expectations went into effect. To date, faculty across the department have logged a total of 1240 academic activities in the database. The distribution of points across categories is highlighted in Table 3, with most points earned through teaching activities at the medical school or through other scholarly work that does not necessarily fit into the other categories of the teaching portfolio. Leadership will review each faculty member's individual records to determine whether they have met baseline education expectations.

The faculty who meet expectations will receive the set baseline incentive and have the potential to earn more financial incentive based on the number of points they have earned in the ARVU system. Once all the data are analyzed, the points will be converted into financial bonus amounts based on the number of eligible faculty and the amount of funds available. This project has had preliminary positive effects on both education and documentation of scholarly work within our department. The first stage resulted in an overall increase in conference attendance and participation even prior to implementing the ARVU system. It is possible that these positive findings were a result of the academic incentive being dependent on meeting education expectations. However, in informal discussions with multiple faculty members, it appears that a shame factor also contributed to improved attendance. Multiple faculty expressed relief that colleagues were being called out on their low attendance and participation and that faculty who had historically carried much of the teaching responsibility were now being recognized. In the same vein, resident assessments increased considerably in the second year, without any other changes being made to the system, and therefore were likely a result of the new expectations. The increase in assessments does not necessarily mean better quality, and this will need to be evaluated going forward to determine the full impact. The improved participation in educational activities as a result of financial incentives or other measures is consistent with reports from other institutions and the existing literature. There is a clear correlation between faculty documentation of scholarly output and the ARVU system, as there was no system in place previously that allowed tracking of activities. The increase in activities and documentation will need to be followed from year to year to draw conclusions about overall scholarly activity among individual faculty members and throughout the department. Unlike previous literature describing ARVU systems, our project has emphasized the ability to house activities in one place that can be transferred into a faculty member's teaching portfolio, thereby further incentivizing the use of this system beyond financial rewards. We will continue to track baseline education expectations and the ARVU system across the department, continuously seek feedback from faculty, and make changes as needed. This process will continue to be refined over time based on faculty feedback and departmental and institutional priorities. The majority of faculty who did not qualify for the academic bonus last year worked more than 28 clinical hours per week, so time constraints may have affected compliance. To further probe this finding and facilitate educational commitments, we will solicit additional feedback from this group of faculty members to explore participation barriers that may be addressed in the future. We hope to follow the scholarly output of the department over time using the ARVU system as an estimate of faculty productivity.
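One way the points-to-bonus conversion described above could work is a simple pro-rata split; the pool size, data frame, and column names below are hypothetical assumptions, not the department's actual formula.

```r
# Hypothetical pro-rata conversion of ARVU points into bonus dollars.
bonus_pool <- 100000                                   # assumed available funds
eligible <- subset(faculty, met_baseline_expectations) # only faculty meeting baseline
eligible$bonus <- bonus_pool * eligible$arvu_points / sum(eligible$arvu_points)
```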

Extension of critical signal functions for time-dependent conditions should be considered in selected basic-level facilities

Two hospitals could insert central venous catheters and gain intraosseous access, which is important in shock management. In terms of resources, only two of the four had a separate triage area for emergency patients. All four hospitals had an isolation room, an obstetric/gynecologic area, and a decontamination room. We surveyed hospitals on their reasons for non-compliance with signal functions, asking them to choose from among five possible causal factors. The first was training issues, taking the form of a lack of education. The second factor was the lack of availability of appropriate supplies, equipment, and/or drugs. The third pertained to management issues, such as staff being unfamiliar with the functions, and cases where other equivalent procedures could have handled the conditions. The fourth factor was policy issues, referring to cases where the government or the facility itself does not allow for compliance with the signal functions. The fifth factor was designated as "no indication," meaning that there was no patient group who needed the function. Supplemental Table 4 describes the reasons respondents provided on the survey for each unavailable signal function. Lack of appropriate supplies, equipment, or drugs was the most common reason, as might be expected, and shortage of human resources was another causal factor. One intermediate hospital did not agree with the use of emergency signal functions for sentinel conditions and answered "no indication" as its reason for non-compliance. It is widely recognized that trauma and non-communicable diseases impose a huge burden in LMICs, where capability for emergency care is believed to be suboptimal. Many studies have tried to assess the state of emergency care in the health facilities of LMICs. Due to accessibility issues, most studies examined teaching hospitals located in urban areas. Assessment tools were not standardized and were usually developed by the researchers themselves.

Domains for assessment were usually related to the availability of resources, and functional aspects were surveyed with qualitative measures, if at all. To our knowledge, this study is the first to survey urban and rural Myanmar hospitals using ECAT, a newly developed objective tool for assessing emergency care in health facilities. Our study demonstrated that the performance of emergency signal functions in Myanmar hospitals is inadequate, especially in trauma care. Trauma care in LMICs has been regarded as a role for large hospitals, and direct referral to upper-level facilities is a common practice. Burke et al. found that lack of readily accessible equipment for trauma care and shortage of skilled staff were the main reasons for poor-quality trauma care in lower-level health facilities in LMICs. Another study pointed out the limited training opportunities for trauma management in LMICs. We found similar obstacles to trauma care in Myanmar hospitals, including the unavailability of items necessary for signal functions. Unlike other LMICs, Myanmar faces a singular geographic and demographic situation. Road conditions are poor: almost 20 million people live in areas not connected by basic roads, and the roads that do exist are unpaved and narrow, contributing to the overall lack of accessibility. The cause of this problem might be found in continuing armed conflict. Since Myanmar's independence in 1948, an ongoing civil war has devastated the population and infrastructure of rural areas, leading to the deterioration of the country's health status. In areas dominated by violence, residential zones are located away from road access, and the level of medical care lags behind. Financial support is also lacking. For example, a referral and transport from Matupi Hospital to an adjacent upper-level facility takes as long as 16 hours during rainy seasons due to road damage. In this situation, timely management of patients in critical condition is virtually impossible, and demands for higher levels of emergency care in basic-level facilities can be raised. Moreover, the results of our study show that some intermediate-level hospitals could not provide resuscitation for critical patients due to the lack of advanced airway management, mechanical ventilators, and defibrillation. Imbalances in the quality of emergency care in both basic- and intermediate-level facilities should be addressed carefully.

However, in Myanmar's special situation, where highway infrastructure is lacking and transport times are long, the ability to administer emergency medical care at a large hospital should be established based on skilled labor and resources. Ouma et al. emphasized that all countries should reach the international benchmark of more than 80% of their populations living within a two-hour travel time to the nearest hospital. Although this cannot be realized in the near future, measures to alleviate accessibility problems can be applied. Thorough gap analyses to address existing challenges in remote regions will be helpful for planning. In this regard, ECAT should be validated to include a time factor, such as the referral time to the nearest upper-level facility. We identified the following urgent issues in need of remediation: 1) improvement of trauma-related signal functions in basic-level facilities; 2) improvement of trauma and critical care-related signal functions in intermediate-level facilities; and 3) implementation of a comprehensive nationwide survey to uncover emergency care deficiencies in rural areas, with emphasis on the time required for referral to higher-level facilities. Our suggestions to address the issues identified in our study can be summarized as relating to the reinforcement of infrastructure and human resources within each level of facility. In addition, prehospital care and care during inter-facility transportation should receive special attention considering the unique context of Myanmar, with its dispersed residences and extremely long transport times. There has been an effort to establish formal EM in Myanmar. In 2014, the Emergency Medicine Postgraduate Diploma course provided by Australia graduated Myanmar medical officers. These emergency providers will be an imperative asset in setting up a modern emergency medical care delivery system in Myanmar, although most of them will practice in advanced-level facilities. Measures to build the capacity to respond to medical emergencies in rural areas should be pursued in Myanmar. There have already been efforts to improve first-aid skills among local healthcare workers who have a high degree of understanding of the local context, and to employ them as community emergency responders.

These local healthcare workers are well informed about the population, hygiene, disease distribution, and the geographical and cultural characteristics of the area; thus, they are able to provide essential first aid and find appropriate health facilities for referrals. This practice has been expanded into the concept of out-of-hospital emergency care (OHEC), which refers to a wide range of emergency treatments, from recognizing an emergent care situation, to initial emergency treatment outside the hospital, to transport to the hospital. The establishment of OHEC has played a role particularly in LMICs, reducing mortality rates by 80%, especially in trauma cases. Since 2000, several organizations have implemented the trauma training course (TTC) program with non-physician clinicians in Eastern Myanmar. The program comprises various skills for carrying out the initial treatment of trauma, taught through simple simulations and feedback. The findings indicated that survival rates improved significantly among major trauma patients following the implementation of this program. We recognize that some skills covered in the TTC, such as surgical airway management, would be relatively dangerous for health workers to perform in the field, and believe that development and implementation of a training program focused on the operation of emergency signal functions would be more practical for the rural context. Those trained in such a program could act as prehospital emergency care providers and also help basic-level facilities fill the functional gaps identified in this study. In addition to the above suggestions, a national or provincial strategic plan for reinforcing emergency care in rural areas of Myanmar should be established and implemented. Following a thorough investigational survey, essential resources for each level of health facility should be supplemented. Public education to recognize emergency conditions is another area to be strengthened. In many LMICs, including Myanmar, folk remedies are still commonly attempted before people seek medical attention, especially in the field of obstetrics and gynecology. Recognizing the need for emergency care is crucial because it is the first step leading the patient into the emergency medical care system. Community education should play an important role in preventing delays in the detection of emergency situations. Traditional medicine providers have been the first to participate in this training thus far, and it has been reported to be effective. Point-of-care ultrasound has emerged as an essential diagnostic tool in emergency medicine. Several studies have demonstrated that a structured curriculum is both feasible and effective in training emergency physicians to obtain and accurately interpret images, with test characteristics approaching or even exceeding those of dedicated radiology-performed scans. However, less is known about the penetrance of POCUS into daily EP practice.

The emergency department poses unique challenges to implementation of diagnostic POCUS not present in other specialties with broad adoption of POCUS, such as cardiology, critical care, and obstetrics: 1) the time spent with an individual patient is limited compared to other specialties; 2) ED settings vary dramatically between academic, community, rural, and urban practices, and each environment has its own unique challenges with respect to availability of POCUS and training of clinicians in ultrasound; and 3) the breadth of POCUS applications in the ED is considerably greater than in other specialties. Guidelines from the American College of Emergency Physicians endorse 12 core applications. The degree of experience necessary to obtain competency in image acquisition and interpretation, while not clear, appears to be highly variable between these applications. As a result, few EPs maintain competency in all 12 applications without further postgraduate fellowship training. This leads to a general reluctance to perform and rely on some POCUS exams, as EPs question the need to maintain competency in certain applications. Indeed, a survey of EPs in California found that most EPs do not use POCUS, and that EPs in academic environments use POCUS more regularly than their community counterparts. The challenges posed above apply both to established EPs and to residents in training who are establishing practice patterns. Despite near-universal incorporation of ultrasound into resident training, a survey of recent residency graduates found limited use in daily clinical practice. This suggests that dedicated ultrasound training in most EM residency programs in North America progresses residents to the intermediate level, where they are able to effectively acquire and interpret images, but not to the level of the expert who is able to seamlessly incorporate the procedural skill into practice. We hypothesized that a number of perceived barriers may be leading to a gap in deliberate, on-shift practice, which is preventing trainees from advancing to expert levels. The goal of this study was to assess and address relevant barriers to POCUS performed on shift by residents at a single, three-year EM residency program. As such, the study had two phases. We first performed a voluntary residency-wide survey to assess perceived attitudes and barriers to on-shift use of POCUS. Next, we performed an intervention to address the primary barrier, namely the perceived lack of a proper charting and reporting policy. We conducted the study at an ED with an annual volume of 65,000 patients, which hosts a three-year EM residency program. The residency trains a total of 36 residents, with 12 residents per year. The study site uses the HealthLink/EPIC electronic medical record, and all point-of-care ultrasounds are wirelessly uploaded to a middleware product. Quality assurance of all scans submitted for review is performed by ultrasound fellowship-trained EPs who rotate on a weekly basis. At the time of the study, ultrasound training consisted of a four-hour introductory ultrasound course at the start of residency training, a four-week mandatory ultrasound rotation during the first year, and quarterly didactics with simulation and hands-on training during regularly scheduled mandatory conference. In addition, ultrasound fellowship-trained faculty offered three-hour sessions, biweekly, which consisted of didactics, image review, and bedside scanning.
These sessions were mandatory for the first-year resident on the dedicated POCUS rotation, as well as for the two second- and third-year residents on a dedicated month of community ED practice. The study was performed as part of an ongoing quality improvement program and did not require institutional review board review at the study institution. At the beginning of the study, a departmental best-practice, systematic ultrasound documentation workflow was disseminated to faculty attending physicians. This workflow included saving ultrasound examinations performed or supervised by a faculty member credentialed in the relevant application.

Acute heart failure is a gradual or rapid decompensation in heart failure requiring urgent management

For each threshold number of visits in the preceding six months, the unadjusted risk of a return visit was more than double among frequent visitors compared to non-frequent visitors. The remainder of the analysis uses two or more previous visits as the threshold defining a frequent visitor, unless otherwise specified. This retrospective analysis of almost seven million patient visits found that recent previous ED visits were the strongest predictor of an ED return visit. This finding held true across multiple cutoffs defining frequent use, and under both univariate analysis and a multivariate model including patient, visit, hospital, and county characteristics. Along with recent frequent use, public insurance and three diagnoses were associated with an increased risk of a return visit. This suggests that our understanding of short-term revisits could be informed by considering frequency of ED use. A parallel thread in the literature has investigated frequent users and interventions designed to decrease ED use. Previous studies have evaluated predictors of ED revisit using patient-level data such as age, sex, race, insurance status, and diagnosis at the initial ED visit, as well as hospital-level data. Surprisingly, the relationship between frequent ED use and risk of revisit after discharge is poorly characterized. Further, there is no consensus on what defines "frequent," with definitions ranging from 2-12 visits per year. We had the striking finding that even one previous visit increased risk of return by a clinically significant margin. This finding held true even when accounting for patient, visit, hospital, and community characteristics. Our definition focused on visits within the previous six months because other work has shown that episodes of frequent ED use are usually self-limited, which suggests that the recent past is more relevant to current health and risk of a short-term return visit.
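The multiple-cutoff check described above lends itself to a simple sensitivity loop; this R sketch is a hedged illustration only, with `visits` and every column name being hypothetical stand-ins for the study's dataset and covariate set.

```r
# Sketch: dichotomize prior six-month visits at each cutoff and re-fit the
# return-visit model; `visits` and all columns are hypothetical.
for (k in 1:6) {
  visits$frequent <- as.integer(visits$prior_6mo_visits >= k)
  m <- glm(return_3day ~ frequent + age + sex + insurance + hospital_volume,
           data = visits, family = binomial)
  cat("cutoff >=", k, "  OR:", round(exp(coef(m)["frequent"]), 2), "\n")
}
```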

A second, related finding is that the threshold used to define frequent visitors is arbitrary with respect to risk of return visit. In the hope of informing the wide range in the literature on the number of visits or length of time used to define frequent users, we considered our definition of frequent user in relation to risk of return visit. We had the surprising finding that any number of previous visits used to define frequent vs. non-frequent ED users predicted an increased risk of revisit. Given that the reason to label certain patients as frequent visitors is often to identify them for interventions, future work may consider an outcome-based definition of frequent users and define the term "frequent" with a qualifier, e.g., with respect to propensity to revisit after a visit, risk of becoming a persistent frequent user, or risk of death. As with the existing literature, we transformed the number of previous visits from a continuous variable to a binary one. This has the disadvantage of losing some information, but is standard in the literature regarding frequent ED use and can easily be applied in the midst of clinical practice. Our sensitivity analysis demonstrated that any threshold was significantly associated with return visits, suggesting that knowing whether a patient had four vs. three previous visits would provide marginally more information than simply knowing the patient had more than two previous ED visits. As with the definition of frequent user, the time window defining a return visit is somewhat arbitrary. While the risk of return visit is highest on the first day following the ED visit, the risk gradually decreases and, as found previously by Rising et al., there is no clear timeline that defines a return visit. This finding may suggest that something other than inadequate care at the index visit is the driving factor for most short-term revisits, and that both frequent use and revisits may simply be proxies for certain patients with increased healthcare-seeking behavior. Further complicating this issue is that patients may be instructed to return to the ED for a re-evaluation.

Thus, an ED in a setting with limited outpatient resources might appear to give poor care as measured by revisits when in fact it serves to provide follow-up care that patients otherwise would not obtain. Despite the variation in the literature, and thus our broad range of models, we consistently found that the strongest predictor of a revisit is a high number of previous visits. This finding held true in our sensitivity analysis using different thresholds for the number of previous visits and for days after the index visit. The observation that previous visits predict future visits may seem obvious or mechanical, but it does not necessarily follow that a patient with one or two visits in the prior six months would be at double the risk of a revisit within three days. Further, that this relationship was stronger than any other patient, hospital, or community characteristic is an important finding that has been overlooked in the literature regarding revisits. In fact, it appears that the literature on frequent visitors and the literature regarding revisits have to this point largely functioned in parallel and have not yet begun to inform each other. Whether frequent users are merely frequently ill people, and whether sicker patients are at increased risk of short-term revisits, deserves future research. Likewise, future work should investigate the extent to which patients are frequent users because they received poor care or face limitations in their ability to obtain outpatient resources, the extent to which revisits are avoidable, and the degree to which frequent use persists over time. Understanding the extent to which patients can follow up with primary care, obtain specialist referrals, and access further evaluation such as advanced imaging, a cardiac stress test, or even a wound check is essential to understanding why patients return to the ED. The data for this study were obtained from a single multi-state physician partnership and do not necessarily generalize to other providers or provider groups, or to other populations. However, the sample size was large and spans many cities and rural areas across several states, and includes a broad set of hospital owner types, a large range of hospital sizes, and both teaching and non-teaching hospitals. This source of data may lead to a biased sample with respect to patient population, hospital characteristics, and provider characteristics. In particular, the income distribution is narrower than the distribution for the entire U.S., so the patient population could have a lower proportion of low- and high-income patients than is typical for the U.S.

We addressed these potential sources of bias by controlling for patient demographics, patient insurance, and local income; hospital characteristics, including volume and a performance metric; and clinician degree. Second, because not all hospitals within a region were observed, measures of frequent visitors and repeat visits may underestimate the actual numbers of both, as patients may have gone to another ED either before or after the observed index visit. This limitation is typical of this research, and in this dataset patients were linked across hospitals, although the linkage was limited to the hospitals served by this company. Thus, it is unknown whether patients had an unobserved revisit at another ED, or whether what was considered an index visit actually represented a revisit after an initial visit at another ED. Next, we were unable to distinguish between planned and unplanned return visits. Thus, a patient who is instructed to return for a check over the weekend to ensure their illness is improving, for example, would appear as a revisit, but this should not imply that their initial treatment was inadequate or inappropriate in any way. Research using administrative datasets, such as HCUP, likewise suffers from this limitation. Finally, as with related research, this study does not identify the extent to which high rates of frequent visits and revisits are driven by patient factors, ED care, or non-ED healthcare resources. This analysis was limited in its ability to examine patient psychosocial attributes or local resources, which are likely to contribute to ED visits and revisits, although we did consider proxies for access to care: patient insurance and community-level factors such as income and the number of hospitals in the county.

Heart failure (HF) covers a large spectrum of disease, ranging from mild exacerbations with gradual increases in edema to cardiogenic shock. HF affects close to six million people in the United States and increases in prevalence with age.6-11 Currently, the emergency department initiates the evaluation and treatment of over 80% of patients with acute heart failure (AHF) in the U.S. As the population ages, increasing numbers of patients with HF will present to the ED for evaluation and management. However, making the correct diagnosis can be challenging due to the broad differential diagnosis associated with presenting symptoms and variations in patient presentations. Over one million patients are admitted for HF in the U.S. and Europe annually. In the U.S. population, people at 40 years of age face a roughly 20% lifetime risk of developing HF. HF is more common in males until the age of 65, at which time males and females are equally affected. Patients with HF average at least two hospital admissions per year.

Among patients who are admitted with AHF, over 80% have a prior history of HF, referred to as decompensated heart failure. De novo HF is marked by no previous history of HF combined with symptom onset after an acute event. Mortality in patients with HF can be high, with up to half of all patients dying within five years of diagnosis. Other studies have found that post-hospitalization mortality rates at 30 days, one year, and five years are 10.4%, 22%, and 42.3%, respectively. AHF expenditures approach $39 billion per year, a figure expected to nearly double by 2030.

Normal cardiac physiology depends on appropriately functioning ventricular contraction, ventricular wall structural integrity, and valvular competence. At normal functional status, a person's stroke volume (SV) is approximately one milliliter per kilogram for every heartbeat. SV depends on preload, afterload, and contractility. In patients with HF, left ventricular (LV) dysfunction can be due to impaired LV contraction and ejection, impaired relaxation and filling, or a combination of both. An alternate way of defining this is by the effect on ejection fraction (EF). HF with preserved EF refers to patients with an EF > 50%, while HF with reduced EF refers to patients with an EF < 40%; borderline preserved EF is defined by HF with an EF of 41-50%. The most common form is HF with reduced EF, which is primarily related to a decrease in functional myocardium. Additional causes include excessive pressure overload from hypertension, valvular incompetence, and cardiotoxic medications. HF with preserved EF occurs due to impaired ventricular relaxation and filling and accounts for 30-45% of all HF cases. This form of HF results in increased end-systolic and end-diastolic volumes and pressures and is most commonly associated with chronic hypertension, coronary artery disease, diabetes mellitus, cardiomyopathy, and valvular disease. Both systolic and diastolic HF can present with similar symptoms due to elevated left-sided intracardiac pressures and pulmonary congestion.

Right ventricular failure most commonly results from LV failure. As the right side of the heart fails, increased pressure in the vena caval system elevates pressure in the venous system of the gastrointestinal tract, liver, and extremities, resulting in edema, jugular venous distension, hepatomegaly, bloating, abdominal pain, and nausea. High-output HF is associated with normal or greater-than-normal cardiac output and decreased systemic vascular resistance. The associated decrease in afterload reduces arterial blood pressure and activates neurohormones, which increase salt and water retention. Diseases that may result in high-output HF include anemia, large arteriovenous fistulas or multiple small fistulas, severe hepatic or renal disease, hyperthyroidism, beriberi, and septic shock.

In AHF, peripheral vascular flow and end-organ perfusion decrease, causing the body to compensate through neurohormonal activation, ventricular remodeling, and release of natriuretic peptides. These mechanisms are chronically activated in HF but worsen during acute exacerbations, resulting in hemodynamic abnormalities that lead to further deterioration.
Continued progression can result in a critical reduction in end-organ blood flow, leading to severe morbidity and mortality. Patients with HF are classified into one of four classes, primarily determined by daily function, using the New York Heart Association, American College of Cardiology/American Heart Association, or European Society of Cardiology guidelines.
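To make the hemodynamic definitions above concrete, the relations among stroke volume, the ventricular volumes, and ejection fraction can be written out; the numbers in the worked example below are illustrative and not drawn from any study cited here.

$$SV = EDV - ESV, \qquad EF = \frac{SV}{EDV} = \frac{EDV - ESV}{EDV}$$

For example, with an end-diastolic volume of $EDV = 120$ mL and an end-systolic volume of $ESV = 50$ mL, $SV = 70$ mL and $EF = 70/120 \approx 58\%$, in the preserved range (EF > 50%). The same $SV = 70$ mL in a dilated ventricle with $EDV = 200$ mL gives $EF = 35\%$, in the reduced range (EF < 40%). The approximately 1 mL/kg rule of thumb quoted above likewise gives $SV \approx 70$ mL for a 70-kg adult, consistent with this example.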

We conducted the study at a refugee clinic and at resettlement and post-resettlement agencies.

Past 30-day HSD use at follow-up was significantly lower for intervention patients. While the control group reported no change in HSD use over time, the intervention group reported a significant unadjusted mean reduction of 4.4 days from baseline to follow-up. Among the 47 participants who provided urine samples, those in the intervention group were less likely than controls to test positive for their HSD. A logistic regression analysis for testing HSD positive that controlled for self-reported baseline HSD use confirmed that intervention group participants were less likely than those in the control group to test HSD positive at follow-up. In the intent-to-treat linear regression model with multiple imputation of missing values, intervention patients reduced their HSD use an average of 4.5 more days in the past month than did controls, controlling for baseline HSD use, high school graduation, number of children under 18 living with them, and having been sexually assaulted before age 18. The complete-sample regression with the same covariates for the 51 patients with follow-up data produced similar results, with intervention patients reducing their HSD use an average of 5.2 more days than controls. Finally, among the 32 patients in the complete sample who reduced their HSD use by a day or more, the 28 patients who reported risky alcohol use reduced that use by an average of 0.3 days, and the 17 patients who disclosed smoking reduced their tobacco use by an average of 2.5 days. Neither change was significant.

In this study of mostly Latino primary care patients of an FQHC, the QUIT brief intervention group reported a 40% decline in mean HSD use, corresponding to an adjusted 4.5-day reduction in reported past-month HSD use by 3-month follow-up compared to controls; there was no compensatory increase in use of alcohol or tobacco. This degree of drug use reduction is clinically meaningful according to norms for reductions in marijuana use in clinical trials. The trial has clinical significance in that its findings could apply to the 12% of our study clinic patients who screen positive for risky drug use, and it represents significant potential public health impact for the 20 million risky drug users in the U.S. if replicated in other clinic populations (2012; U.S. Department of Health and Human Services Office of the Surgeon General, 2016).
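The sketch below illustrates the shape of the intent-to-treat analysis described above: a linear model of follow-up use on treatment arm and baseline use, with the missing follow-up outcome multiply imputed via chained equations. The data are simulated, the variable names are hypothetical, and statsmodels' MICE is used as a stand-in; this is not the study's code or covariate set.

```python
# Minimal sketch: intent-to-treat linear regression with multiple imputation
# of the missing follow-up outcome (simulated data, illustrative effect size).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICE, MICEData

rng = np.random.default_rng(1)
n = 65
df = pd.DataFrame({
    "intervention": rng.integers(0, 2, n),
    "baseline_hsd_days": rng.integers(0, 31, n),
}).astype(float)

# Follow-up HSD days, with ~20% missing to mimic loss to follow-up.
df["followup_hsd_days"] = (df["baseline_hsd_days"]
                           - 4.5 * df["intervention"]
                           + rng.normal(0, 3, n)).clip(0, 30)
df.loc[rng.random(n) < 0.2, "followup_hsd_days"] = np.nan

# Chained-equations imputation, then pool OLS fits across imputed datasets.
mice_data = MICEData(df)
mice = MICE("followup_hsd_days ~ intervention + baseline_hsd_days",
            sm.OLS, mice_data)
result = mice.fit(n_burnin=5, n_imputations=20)
print(result.summary())
```

The pooled coefficient on `intervention` plays the role of the adjusted between-group difference in days of use reported in the study.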

The findings are important given the limited number of randomized trials of screening and brief intervention for risky drug use in primary care, and notable in that they affirm the positive results of the original QUIT trial. Distinctive characteristics of the QUIT intervention that may contribute to its greater success than other brief intervention protocols designed to address risky drug use in primary care include: use of primary care clinicians to deliver brief advice messages about drug use; regular weekly “learning community” meetings among health coaches and the study team; incorporation into telephone coaching sessions of quality-of-life issues that patients spontaneously raised as barriers to drug use reduction; embedding of drug use consent and patient assessment questions within a larger behavioral health paradigm to conceal the study's drug focus and minimize potential contamination of the control group; and patient self-administered assessment of drug use on tablet computers. The original QUIT study showed a significant reduction in 30-day risky drug use, with a 3.5-day reduction in the completer analysis for intervention compared to control patients. The positive outcomes across these different clinics, bolstered by the positive outcomes from this pilot replication, suggest that QUIT may prove effective and implementable in a variety of settings and across a variety of patient demographics. Limitations of the study include the generalizability of the sample to other Latino populations; the potential for social desirability bias to influence the primary outcome of self-reported drug use reduction, which we tried to minimize through patients' self-administration of survey items on a tablet computer; loss to follow-up; and a small sample size, which limits subgroup analysis.

Over three million refugees have been resettled in the United States since Congress passed the Refugee Act of 1980.1 In 2015, there were nearly 70,000 new refugee arrivals, representing 69 different countries.1 Refugees undergo predeparture health screening prior to arrival in the U.S., and are typically seen by a physician for an evaluation shortly after arrival.

Refugees are resettled in areas with designated resettlement agencies that assist them with time-limited cash assistance, enrollment in temporary health coverage, and employment options. Refugees are initially granted six to eight months of dedicated Refugee Medical Assistance, which is roughly equivalent to the services provided by a state's Medicaid program. Following this period, refugees are subject to the standard eligibility requirements of Medicaid.3

It is important to highlight the differences between a refugee, an asylum seeker, and a migrant, as this study focuses specifically on refugees. A refugee is an individual who has been forced to leave his or her home country due to fear of persecution based on race, religion, nationality, membership in a social group, or political opinion. Refugees undergo robust background checks and screening prior to receiving designated refugee status. They are relocated only after undergoing this screening process, and have legal protection under the Refugee Act of 1980 given their status as refugees. An asylum seeker, on the other hand, is an individual who has fled his or her home country for similar reasons but has not received legal recognition prior to arrival in the U.S. and may be granted legal recognition only if the asylum claim is reviewed and granted. As a result, asylum seekers do not have access to services such as Refugee Medical Assistance, time-limited cash assistance, or similar employment opportunities. Migrant is a general term and refers to an individual who has left his or her home country for a variety of reasons.

Prior studies have shown differences in utilization of the emergency department by refugees in comparison to native-born individuals. In Australia, refugees from non-English-speaking countries are more likely to use ambulance services, have longer lengths of stay in the ED, and are less likely to be admitted to the hospital. A study conducted in the U.S. evaluated refugees one year post-resettlement and demonstrated that language, communication, and acculturation barriers continue to negatively affect their ability to obtain care. These data suggest that there may be unidentified opportunities for improving the acute care process for refugee populations; however, little is known about how refugees interface with acute care facilities. Therefore, the goal of this study was to use in-depth qualitative interviews to understand barriers to accessing acute care among newly arrived refugees, and to identify potential improvements from refugees and community resettlement agencies.

The refugee clinic was located at a tertiary care hospital in a city in the Northeast U.S. The clinic has been in operation for approximately five years and has cared for approximately 200 refugee patients yearly. At the time of the study, the clinic received referrals from one of the three resettlement agencies in the city. Refugee patients were seen within 30 days of arrival. Most refugees were seen for screening evaluations and transitioned to clinics near their homes after two to three clinic visits. Refugee patients were eligible for this study if they were over 18 years of age, had capacity to consent, and had no hearing difficulties. We excluded refugees if they were deaf, unable to answer questions from an interpreter, or had acute medical or psychiatric illnesses.

In the city in which the study was performed, there are three main resettlement agencies and approximately three well-known post-resettlement agencies.
Resettlement agencies are responsible for receiving new refugee arrivals and assisting individuals with support for three to six months after arrival.

Resettlement employees assist refugees with establishing housing, employment, transportation, primary care, and language services. After three to six months, refugees are able to seek additional assistance at post-resettlement agencies. Post-resettlement agencies provide additional support in the form of support groups, language services, cultural activities, and case management. Employees were eligible for this study if they worked at a resettlement or post-resettlement agency, were over 18 years of age, and had no hearing difficulties.

This was an in-depth interview study using semi-structured, open-ended interviews. Separate interview guides for refugees and for resettlement agency employees were developed by all members of the study team. Study team members included the following: an emergency physician and investigator with expertise in qualitative methodology; an internal medicine physician with many years of experience working at the refugee clinic; a third-year emergency medicine resident with three years of experience working bimonthly at the refugee clinic; a second-year EM resident with no experience at the refugee clinic; an MD/PhD student with three years of experience working at the refugee clinic and content expert on refugee studies; and an undergraduate student with two years of experience working at the refugee clinic. The study team composition allowed for a range of expertise, with individuals who had experience working with refugees and those who did not. Questions were vetted among all members of the study team and revised to ensure that content reflected the goals of the study. Prior to interviewing resettlement and post-resettlement employees, a resettlement/post-resettlement employee interview guide was developed using the same process.

Refugee interviews were conducted in person at a refugee clinic, and refugees were recruited during the study period when an interviewer was present during clinic hours. Refugees were asked to participate if a room and an interpreter were available. If the aforementioned conditions were met, all refugees awaiting clinic appointments or available after their appointment were asked to participate. All of the refugees who were asked agreed to consent and participated. Interviews with refugees were conducted by two members of the study team using the Refugee Interview Guide and lasted approximately 30 minutes. A phone interpreter was used for verbal consent prior to participation and for the interview. Demographic information was collected about each participant.

After interviews were completed for refugee patients, a second phase of semi-structured, open-ended interviews was conducted in person at local resettlement and post-resettlement agencies in the region. We obtained a list of employees involved in case management, health coordination, and program development for refugees/immigrants from resettlement healthcare teams. These employees were contacted via email with information regarding the study and a consent form. Of 13 employees contacted, 12 participated. Employee interviews were conducted at their respective agencies, and verbal consent was obtained prior to participation. Interviews with resettlement employees were conducted by two members of the study team using the Resettlement/Post-resettlement Employee Interview Guide and lasted approximately 20 minutes. This study was approved by the institutional review board at the University of Pennsylvania.

A total of 16 interviews were completed with refugees.
Participants had a mean age of 34, and 50% had completed high school. Countries of origin were Syria, Bhutan, the Democratic Republic of the Congo, Burma, Sudan, Iraq, Iran, and the Central African Republic. Most refugees seen at this refugee clinic undergo medical screening within one to two months of arrival. A few of the patients remained at the clinic for long-term follow-up. All refugees required an interpreter, and all interpretation was done with phone interpreters. A total of 12 interviews were completed with resettlement and post-resettlement agency employees, representing two resettlement agencies and two post-resettlement agencies.

We identified several barriers to accessing acute care facilities among newly arrived refugees. The process by which refugees seek care, and the barriers at each step, can be visualized in Figure 1.

Our principal findings identify barriers throughout the process of accessing acute care for newly arrived refugees. Overall, refugees face uncertainty when accessing acute care services because of prior experiences in their home countries and limited understanding of the complex U.S. healthcare system. This unfamiliarity with the U.S. healthcare system drives refugees to rely heavily on resettlement employees as an initial point of triage or, if they are very sick, to call 911. At the resettlement agency, employees express concern about identifying the appropriate level of care to which to send a refugee client.