
Persistence and resurgence of vector populations continues to be an important issue for malaria control and elimination

Regardless of the product, the supply of recombinant proteins is challenging during emergency situations due to the simultaneous requirements for rapid manufacturing and extremely high numbers of doses. The realities we must address include: the projected demand exceeds the entire manufacturing capacity of today’s pharmaceutical industry; there is a shortage of delivery devices and the means to fill them; there is insufficient lyophilization capacity to produce dry powder for distribution; and distribution, including transportation and vaccination itself, will be problematic on such a large scale without radical changes in the public health systems of most countries. Vaccines developed by a given country will almost certainly be distributed within that country and to its allies/neighbors first and, thereafter, to countries willing to pay for priority. One solution to the product access challenge is to decentralize the production of countermeasures, and in fact one of the advantages of plant-based manufacturing is that it decouples developing countries from their reliance on the pharmaceutical infrastructure. Hence, local production facilities could be set up based on greenhouses linked to portable clean rooms housing disposable DSP equipment. In this scenario, the availability of multiple technology platforms, including plant-based production, can only be beneficial.

Several approaches can be used to manage potential IP conflicts in public health emergencies that require the rapid production of urgently needed products. Licensing of key IP to ensure freedom to operate is preferred because such agreements are cooperative rather than competitive. Likewise, cooperative agreements to jointly develop products with mutually beneficial exit points offer another avenue for productive exploitation. These arrangements allow collaborating institutions to work toward a greater good. Licensing has been practiced in past emergencies when PMP products were developed and produced using technologies owned by multiple parties. In the authors’ experience, the ZMapp cocktail was subject to IP ownership by multiple parties covering the compositions, the gene expression system, manufacturing process technology/know-how, and product end-use.

Stakeholders included the Public Health Agency of Canada’s National Microbiology Laboratory, the United States Army Medical Research Institute of Infectious Diseases, Mapp Biopharmaceutical, Icon Genetics, and Kentucky Bio-processing, among others. Kentucky Bio-processing is also involved in a more recent collaboration to develop a SARS-CoV-2 vaccine candidate, aiming to produce 1–3 million doses of the antigen, with other stakeholders invited to take on the tasks of large-scale antigen conjugation to the viral delivery vector, product fill, and clinical development.25 Collaboration and pooling of resources and know-how among big pharma/biopharma companies raises concerns over antitrust violations, which could lead to price fixing and other unfair business practices. With assistance from the United States Department of Justice, this hurdle has been temporarily overcome by permitting several biopharma companies to share know-how around manufacturing facilities and other information that could accelerate the manufacturing of COVID-19 mAb products.26 Genentech, Amgen, AstraZeneca, Eli Lilly, GlaxoSmithKline, and AbCellera Biologics will share information about manufacturing facilities, capacity, raw materials, and supplies in order to accelerate the production of mAbs even before the products gain regulatory approval. This is driven by the realization that none of these companies can satisfy more than a small fraction of projected demand by acting alone. Under the terms imposed by the DOJ, the companies are not allowed to exchange information about the manufacturing cost of goods or the sales prices of their drugs, and the duration of the collaboration is limited to the current pandemic. Yet another approach is a government-led strategy in which government bodies define a time-critical national security need that can only be addressed by sequestering critical technology controlled by the private sector. In the United States, for example, the Defense Production Act was first implemented in 1950 but has been reauthorized more than 50 times since then. Similar national security directives exist in Canada and the EU. In the United States, the Defense Production Act gives the executive branch substantial powers, allowing the president, largely through executive order, to direct private companies to prioritize orders from the federal government.

The president is also empowered to “allocate materials, services, and facilities” for national defense purposes. The Defense Production Act has been implemented during the COVID-19 crisis to accelerate the manufacturing and provision of medical devices and personal protective equipment, as well as drug intermediates. Therefore, a two-tiered mechanism exists to create FTO and secure critical supplies: the first and more preferable involving cooperative licensing/cross-licensing agreements and manufacturing alliances, and the second involving legislative directives.

Many companies have modified their production processes to manufacture urgently required products in response to COVID-19, including distillers and perfume makers switching to sanitizing gels, textiles companies making medical gowns and face masks, and electronics companies making respirators.27 Although this involves some challenges, such as production safety and quality requirements, it is far easier than the production of APIs, where the strict regulations discussed earlier in this article must be followed. The development of a mammalian cell line achieving titers in the 5 g L−1 range often takes 10–12 months, or at least 5–6 months during a pandemic. These titers can often be achieved for mAbs due to the similar properties of different mAb products and the standardized DSP unit operations, but the titers of other biologics are often lower due to product toxicity or the need for bespoke purification strategies. Even if developmental obstacles are overcome, pharmaceutical companies may not be able to switch rapidly to new products because existing capacity is devoted to the manufacture of other important bio-pharmaceuticals. The capacity of mammalian cell culture facilities currently exceeds market demand by ~30%. Furthermore, contract manufacturing organizations, which can respond most quickly to a demand for new products due to their flexible business model, control only ~19% of that capacity. From our experience, this CMO capacity is often booked in advance for several months if not years, and little is available for short-term campaigns. Furthermore, even if capacity is available, the staff and consumables must be available too. Finally, there is a substantial imbalance in the global distribution of mammalian cell culture capacity, favoring North America and Europe. This concentration is risky from a global response perspective because these regions were the most severely affected during the early and middle stages of the COVID-19 pandemic, and it is, therefore, possible that this capacity would become unusable following the outbreak of a more destructive virus.

Patents covering several technologies related to transient expression in plants will end during or shortly after 2020, facilitating the broader commercial adoption of the technology. This could accelerate the development of new PMP products in a pandemic situation. However, PMP production capacity is currently limited. There are fewer than five large-scale PMP facilities in operation, and we estimate that these facilities could manufacture ~2,200 kg of product per year, assuming a combined annual biomass output of ~1,100 tons and recombinant protein yields and DSP losses similar to those of mammalian cells. Therefore, plant-based production certainly does not currently meet the anticipated demand for pandemic countermeasures. We have estimated a global demand of 500–5,200 tons per year for mAbs, depending on the dose, but only ~259 tons per year can be produced using the current global capacity of mammalian cell bioreactors, and plant-based systems currently represent less than 1% of that capacity. Furthermore, the number of plant molecular farming companies decreased from 37 to 23 between 2005 and 2020, including many large industry players that would be most able to fund further technology development. Nevertheless, the current plant molecular farming landscape has three advantages in terms of a global first-line response compared to mammalian cells. First, almost two thirds of global production capacity is held by CMOs or hybrid companies, which can make their facilities available for production campaigns on short notice, as shown by their rapid response to COVID-19 allowing most to produce initial product batches by March 2020. In contrast, only ~20% of fermentation facilities are operated by CMOs. Second, despite the small number of plant molecular farming facilities, they are distributed around the globe, with sites in the United States, Canada, the United Kingdom, Germany, Japan, Korea, and South Africa, and more planned or under construction in Brazil and China. Finally, transient expression in plants is much faster than any other eukaryotic system with a comparable production scale, moving from gene to product within 20 days and allowing the production of up to 7,000 kg biomass per batch with product accumulation of up to 2 g kg−1. Even if the time required for protein production in mammalian cells can be reduced to 6 months as recently proposed, Medicago has shown that transient expression in plants can achieve the same goals in less than 3 months. Therefore, the production of vaccines, therapeutics, and diagnostics in plants has the potential to function as a first line of defense against pandemics. Given the limited number and size of plant molecular farming facilities, we believe that the substantial investments currently being allocated to the building of bio-pharmaceutical production capacity should be shared with PMP production sites, allowing this technology to be developed as another strategy to improve our response to future pandemics.

In the past decade, the massive scale-up of insecticide-treated bed nets (ITNs) and indoor residual spraying (IRS), together with the use of artemisinin-based combination treatments, has led to major changes in malaria epidemiology and vector biology. Overall malaria prevalence and incidence have been greatly reduced worldwide.
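As a back-of-the-envelope check on the plant molecular farming capacity figures quoted above, the arithmetic can be reproduced in a few lines of Python; the inputs are the approximate values cited in the text, and the calculation is purely illustrative.

```python
# Rough reproduction of the capacity arithmetic quoted above; all values are approximate.

biomass_per_year_kg = 1_100 * 1_000    # ~1,100 tons of combined annual plant biomass
yield_kg_per_kg = 2 / 1_000            # ~2 g of recombinant protein per kg of biomass

pmp_capacity_kg = biomass_per_year_kg * yield_kg_per_kg
print(f"Estimated PMP output: ~{pmp_capacity_kg:,.0f} kg/year")        # ~2,200 kg/year

mammalian_capacity_tons = 259          # quoted global mAb output from mammalian cell bioreactors
demand_tons = (500, 5_200)             # quoted global mAb demand range

share = (pmp_capacity_kg / 1_000) / mammalian_capacity_tons
print(f"PMP output as a share of mammalian-cell output: {share:.1%}")  # <1%
print(f"Shortfall vs. demand: {demand_tons[0] - mammalian_capacity_tons}"
      f"-{demand_tons[1] - mammalian_capacity_tons} tons/year")
```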
But the reductions in malaria have not been achieved uniformly; some sites have experienced continued reductions in both clinical malaria and overall parasite prevalence, while other sites showed stability or resurgence in malaria despite high coverage of ITNs and IRS.

More importantly, extensive use of ITNs and IRS has created intense selection pressure for insecticide resistance in malaria vectors as well as for potential outdoor transmission, which appears to be limiting the success of ITNs and IRS. For example, in Africa, where malaria is most prevalent and pyrethroid-impregnated ITNs have been used for more than a decade, there is ample evidence of the emergence and spread of pyrethroid resistance in Anopheles gambiae s.s., the major African malaria vector, as well as in An. arabiensis and An. funestus s.l. Both the prevalence of An. gambiae s.s. resistance to pyrethroids and DDT and the frequency of knock-down resistance reached alarming levels throughout Africa from 2010 to 2012. Unfortunately, pyrethroids are the only class of insecticides that the World Health Organization recommends for the treatment of ITNs. Furthermore, a number of recent studies have documented a shift in the biting behavior of An. gambiae s.s. and An. funestus, from biting exclusively indoors at night to biting both indoors and outdoors during early evening and morning hours when people are active but not protected by IRS or ITNs, or to biting indoors but resting outdoors. Apart from these intraspecific changes in biting behavior, shifts in vector species composition, i.e., from the previously predominant indoor-biting An. gambiae s.s. to the currently predominant species An. arabiensis, which prefers to bite and rest outdoors in some parts of Africa, can also increase outdoor transmission. Because IRS and ITNs have little impact on outdoor-resting, outdoor-biting, and early-biting vectors, outdoor transmission represents one of the most important challenges in malaria control. New interventions are urgently needed to augment current public health measures and reduce outdoor transmission. Larval control has historically been very successful and is widely used for mosquito control in many parts of the developed world, but it is not commonly used in Africa. Field evaluations in Africa have found that larviciding is effective in killing anopheline larvae and reducing adult malaria vector abundance at various sites. Microbial larvicides are effective in controlling malaria vectors, and they can be used on a large scale in combination with ongoing ITN and IRS programs. However, conventional larvicide formulations are associated with high material and operational costs due to the need for frequent habitat re-treatment, i.e., weekly re-treatment, as well as logistical issues in the field. Recently, an improved slow-release larvicide formulation was field-tested for controlling Anopheles mosquitoes, yielding an effective duration of approximately 4 weeks.
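To see why the re-treatment interval dominates larviciding costs, it helps to count application rounds per transmission season. The sketch below assumes a hypothetical 24-week season and a placeholder per-round cost; only the one-week and roughly four-week re-treatment intervals come from the text above.

```python
import math

# Hypothetical assumptions: a 24-week transmission season and a unit cost per treatment round.
season_weeks = 24
cost_per_round = 1.0

for label, interval_weeks in [("conventional formulation (weekly)", 1),
                              ("slow-release formulation (~4 weeks)", 4)]:
    rounds = math.ceil(season_weeks / interval_weeks)
    print(f"{label}: {rounds} rounds per season, relative cost {rounds * cost_per_round:.0f}")

# Under these assumptions, the slow-release formulation cuts the number of habitat
# treatment rounds (and hence operational cost) roughly four-fold.
```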

The EU follows both decentralized processes as well as centralized procedures covering all Member States

Some tags may be approved in certain circumstances, but their immunogenicity may depend on the context of the fusion protein. The substantial toolkit available for rapid plant biomass processing and the adaptation of even large-scale plant-based production processes to new protein products ensure that plants can be used to respond to pandemic diseases with at least an equivalent development time and, in most cases, a much shorter one than conventional cell-based platforms. Although genetic vaccines for SARS-CoV-2 have been produced quickly, they have never been manufactured at the scale needed to address a pandemic, and their stability during transport and deployment to developing world regions remains to be shown.

Regulatory oversight is a major and time-consuming component of any drug development program, and regulatory agencies have needed to revise internal and external procedures in order to adapt normal schedules for the rapid decision-making necessary during emergency situations. Just as important as rapid methods to express, prototype, optimize, produce, and scale new products is the streamlining of regulatory procedures to maximize the technical advantages offered by the speed and flexibility of plants and other high-performance manufacturing systems. Guidelines issued by regulatory agencies for the development of new products, or the repurposing of existing products for new indications, include criteria for product manufacturing and characterization, containment and mitigation of environmental risks, stage-wise safety determination, clinical demonstration of safety and efficacy, and various mechanisms for product licensure or approval to deploy the products and achieve the desired public health benefit. Regardless of which manufacturing platform is employed, the complexity of product development requires that continuous scrutiny is applied from preclinical research to drug approval and post-market surveillance, thus ensuring that the public does not incur an undue safety risk and that products ultimately reaching the market consistently conform to their label claims.

These goals are common to regulatory agencies worldwide, and greater convergence exists in regions that have adopted the harmonization of standards as defined by the International Council for Harmonization,2 in key product areas including quality, safety, and efficacy. Both the United States and the EU have stringent pharmaceutical product quality and clinical development requirements, as well as regulatory mechanisms to ensure product quality and public safety. Differences and similarities between regional systems have been discussed elsewhere and are only summarized here. Stated simply, the United States, EU, and other jurisdictions generally follow a two-stage regulatory process, comprising clinical research authorization and monitoring, followed by results review and marketing approval. The first stage involves the initiation of clinical research via submission of an Investigational New Drug (IND) application in the United States or its analogous Clinical Trial Application (CTA) in Europe. At the preclinical–clinical translational interface of product development, a sponsor must formally inform a regulatory agency of its intention to develop a new product and the methods and endpoints it will use to assess clinical safety and preliminary pharmacologic activity. Because the EU is a collective of independent Member States, the CTA can be submitted to a country-specific regulatory agency that will oversee development of the new product. The regulatory systems of the EU and the United States both allow pre-submission consultation on the proposed development programs via discussions with regulatory agencies or expert national bodies. These are known as pre-IND (PIND) meetings in the United States and Investigational Medicinal Product Dossier (IMPD)3 discussions in the EU. These meetings serve to guide the structure of the clinical programs and can substantially reduce the risk of regulatory delays as the programs begin. PIND meetings are common albeit not required, whereas IMPD discussions are often necessary prior to CTA submission.

At intermediate stages of clinical development, pauses for regulatory review must be added between clinical study phases. Such End-of-Phase review times may range from one to several months depending on the technology and disease indication. In advanced stages of product development, after pivotal, placebo-controlled randomized Phase III studies are complete, drug approval requests typically require extensive time for review and decision-making on the part of the regulatory agencies. In the United States, the Food and Drug Administration (FDA) controls the centralized marketing approval/authorization/licensing of a new product, a process that requires in-depth review and acceptance of a New Drug Application (NDA) for chemical entities, or a Biologics License Application for biologics, the latter including PMP proteins. The Committee for Medicinal Products for Human Use, part of the European Medicines Agency (EMA), has responsibilities similar to those of the FDA and plays a key role in the provision of scientific advice, the evaluation of medicines at the national level for conformance with harmonized positions across the EU, and the centralized approval of new products for market entry in all Member States. The statute-conformance review procedures practiced by the regulatory agencies require considerable time because the laws were established to focus on patient safety, product quality, verification of efficacy, and truth in labeling. The median times required by the FDA, EMA, and Health Canada for full review of NDA applications were reported to be 322, 366, and 352 days, respectively. Collectively, typical interactions with regulatory agencies will add more than 1 year to a drug development program. Although these regulatory timelines are the status quo during normal times, they are clearly incongruous with the need for rapid review, approval, and deployment of new products in emergency use scenarios, such as emerging pandemics.

Plant-made intermediates, including reagents for diagnostics, antigens for vaccines, and bioactive proteins for prophylactic and therapeutic medical interventions, as well as the final products containing them, are subject to the same regulatory oversight and marketing approval pathways as other pharmaceutical products. However, the manufacturing environment as well as the peculiarities of the plant-made active pharmaceutical ingredient (API) can affect the nature and extent of requirements for compliance with various statutes, which in turn will influence the speed of development and approval. In general, the more contained the manufacturing process and the higher the quality and safety of the API, the easier it has been to move products along the development pipeline. Guidance documents on quality requirements for plant-made biomedical products exist and have provided a framework for development and marketing approval. Upstream processes that use whole plants grown indoors under controlled conditions, including plant cell culture methods, followed by controlled and contained downstream purification, have fared best under regulatory scrutiny. This is especially true for processes that use non-food plants such as Nicotiana species as expression hosts. The backlash over the ProdiGene incident of 2002 in the United States has refocused subsequent development efforts on contained environments. In the United States, field-based production is possible and even practiced, but such processes require additional permits and scrutiny by the United States Department of Agriculture (USDA). In May 2020, to encourage innovation and reduce the regulatory burden on the industry, the USDA’s Animal and Plant Health Inspection Service revised its regulations covering the interstate movement or release of genetically modified organisms into the environment in an effort to regulate such practices with greater precision [SECURE Rule revision of 7 Code of Federal Regulations 340]. In contrast, the production of PMPs using GMOs or transient expression in the field comes under heavy regulatory scrutiny in the EU, and several statutes have been developed to minimize environmental, food, and public risk. Many of these regulations focus on the use of food species as hosts. The major perceived risks of open-field cultivation are the contamination of the food/feed chain and gene transfer between GM and non-GM plants. This is true today even though containment and mitigation technologies have evolved substantially since those statutes were first conceived, with the advent and implementation of transient and selective expression methods; new plant breeding technologies; use of non-food species; and physical, spatial, and temporal confinement. The United States and the EU differ in their philosophy and practice for the regulation of PMP products. In the United States, regulatory scrutiny is at the product level, with less focus on how the product is manufactured. In the EU, much more focus is placed on assessing how well a manufacturing process conforms to existing statutes. Therefore, in the United States, PMP products and reagents are regulated under pre-existing sections of the United States CFR, principally under various parts of Title 21, which also apply to conventionally sourced products.
These include current good manufacturing practice (cGMP) covered by 21 CFR Parts 210 and 211, good laboratory practice toxicology, and a collection of good clinical practice requirements specified by the ICH and accepted by the FDA. In the United States, upstream plant cultivation in containment can be practiced using qualified methods to ensure consistency of vector, raw materials, and cultivation procedures and/or, depending on the product, under good agricultural and collection practices. For PMP products, cGMP requirements do not come into play until the biomass is disrupted in a fluid vehicle to create a process stream. All process operations from that point forward, from crude hydrolysate to bulk drug substance and final drug product, are guided by 21 CFR 210/211.

In Europe, biopharmaceuticals, regardless of manufacturing platform, are regulated by the EMA and, in the United Kingdom, by the Medicines and Healthcare products Regulatory Agency. Pharmaceuticals from GM plants must adhere to the same regulations as all other biotechnology-derived drugs. These guidelines are largely specified by the European Commission in Directive 2001/83/EC and Regulation No 726/2004. However, upstream production in plants must also comply with additional statutes. Cultivation of GM plants in the field constitutes an environmental release and has been regulated by the EC under Directive 2001/18/EC and, if the crop can be used as food/feed, Regulation (EC) No 1829/2003. The production of PMPs using whole plants in greenhouses or cell cultures in bioreactors is regulated by the “Contained Use” Directive 2009/41/EC, which is far less stringent than an environmental release and does not necessitate a fully fledged environmental risk assessment. Essentially, the manufacturing site is licensed for contained use and production proceeds in a similar manner to a conventional facility using microbial or mammalian cells as the production platform. With respect to GMP compliance, the major differentiator between the regulation of PMP products and the same or similar products manufactured using other platforms is the upstream production process. This is because many of the DSP techniques are product-dependent and, therefore, similar regardless of the platform, including most of the DSP equipment, with which regulatory agencies are already familiar. Of course, the APIs themselves must be fully characterized and shown to meet the designated criteria in their specification, but this applies to all products regardless of source.

During a health emergency, such as the COVID-19 pandemic, regulatory agencies worldwide have re-assessed guidelines and restructured their requirements to enable the accelerated review of clinical study proposals, to facilitate clinical studies of safety and efficacy, and to expedite the manufacturing and deployment of re-purposed approved drugs as well as novel products. These revised regulatory procedures could be implemented again in future emergency situations. It is also possible that some of the streamlined procedures that can expedite product development and regulatory review and approval will remain in place even in the absence of a health emergency, permanently eliminating certain redundancies and bureaucratic requirements. Changes in the United States and European regulatory processes are highlighted here, with a cautionary note that these modified procedures are subject to constant review and revision to reflect an evolving public health situation. In the spring of 2020, the FDA established a special emergency program for candidate diagnostics, vaccines, and therapies for SARS-CoV-2 and COVID-19. The Coronavirus Treatment Acceleration Program (CTAP)5 aims to utilize every available method to move new treatments to patients in need as quickly as possible, while simultaneously assessing the safety and efficacy of new modes of intervention. As of September 2020, CTAP was overseeing more than 300 active clinical trials for new treatments and was reviewing nearly 600 preclinical-stage programs for new medical interventions. Responding to pressure for procedural streamlining and rapid response, the FDA refocused staff priorities, modified its guidelines to fit emergency situations, and achieved a remarkable set of benchmarks.
In comparison to the review and response timelines described in the previous section, the FDA’s emergency response structure within CTAP is exemplary and, as noted, these changes have successfully enabled the rapid evaluation of hundreds of new diagnostics and candidate vaccine and therapeutic products.

There are no recommendations made regarding substance use-related visits given limited evidence

The rise in substance use-related ED visits was driven by sedatives, stimulants, and hallucinogens, with alcohol and other substance use-related visits being relatively stable. There was a parallel increase in mental health-related visits, with these visits making up 2.34% of total ED visits in 2013 and 3.88% in 2018, representing a 66% relative increase. Among substance use-related visits, the 25-44 age group made up 44.58% of visits, as compared to 35.49% of the non-substance-related group. There was also a male predominance among substance use-related visits: males accounted for 63.38% of visits in the substance group vs 41.74% in the reference group. While the West geographic area accounted for only 21.34% of all ED visits, it made up 29.67% of substance use-related visits. In addition, substance use-related visits were much more likely to happen during the night shift, with 27.07% of all substance use-related visits taking place then compared to 14.81% in the reference group. Mental health issues were more prevalent in the substance use group compared to the reference group, present in 14.48% vs 2.99%, respectively. With regard to the primary outcomes, patients associated with substance use-related visits were more likely to undergo any diagnostic study and toxicology screening; however, they were less likely to have imaging studies. There were no significant differences in the use of medications or procedures between the substance use and reference groups, with the differences in means being 0.08 and 0.04, respectively. Substance use-related visits were associated with higher odds of admission or transfer to another facility and higher odds of receiving a mental health consult [aOR 5.70; 95% CI: 4.47-7.28; P < 0.0001]. With regard to stratified analyses, those patients with mental health disorders were more likely to have imaging studies, and this reached statistical significance for interaction.

For substance use-related visits without the concurrent presence of a mental health disorder, the aOR of undergoing any imaging study was 0.65, and for substance use-related visits with a concurrent mental health disorder, the aOR of undergoing any imaging study was 1.44. All substance use-related ED visits were more likely to undergo toxicology screening, but those without concurrent mental health disorders were even more likely to receive screening, with an aOR of 11.47. The presence of a mental health disorder did not have an impact on the relationship between undergoing any diagnostic study in the ED and substance use.

Consistent with previously published work, our study shows that sedative-, stimulant-, and hallucinogen-related ED visits continue to increase rapidly compared to alcohol and other substances of abuse. Substance use-related ED visits are more likely to result in diagnostic investigations overall, admission or transfer to another facility, and mental health consultations. Conversely, they are less likely to result in imaging studies. While the higher rate of admission/transfer and mental health consultations for substance use-related ED visits has been reported previously, to our knowledge the use of diagnostic services has not yet been assessed at the national level. Among the common substances of abuse, the rapid increase in stimulant-related ED visits in recent years is remarkable; in 2018, the percentage of stimulant-related visits matched that of sedative-related visits, representing approximately 0.7% of total ED visits. This is consistent with other study findings that have reported a rise in the prevalence of stimulant use across all age groups from 2010 to 2014, with adults between 20-64 years the most affected. Our study also showed that the rise in stimulant-related visits was more pronounced in the 18-44 age group compared to the >45 years age group. The most frequently cited motivation for stimulant use among adults was performance enhancement, which supports the need to improve public education for young adults on the addictive potential of stimulants and to restrict prescriptions to appropriate clinical indications only. Regarding the use of diagnostic services in the ED for substance use-related visits, research has been relatively sparse. Our study showed that substance use-related visits are more likely to receive diagnostic services overall and toxicology screening.
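For readers who want to retrace the headline figures, the 66% relative increase quoted above and the way the stratified adjusted odds ratios (aORs) for imaging are read can be sketched as follows; the numbers are those reported in the text and the code is purely illustrative.

```python
# Relative increase in mental health-related ED visits (share of total visits).
share_2013, share_2018 = 0.0234, 0.0388
relative_increase = (share_2018 - share_2013) / share_2013
print(f"Relative increase, 2013 to 2018: {relative_increase:.0%}")   # ~66%

# Stratified adjusted odds ratios (aORs) for undergoing any imaging study, as reported above.
# Values above 1 indicate higher odds than the non-substance-use reference group.
aor_imaging = {
    "substance use, no mental health disorder": 0.65,
    "substance use with mental health disorder": 1.44,
}
for stratum, aor in aor_imaging.items():
    direction = "higher" if aor > 1 else "lower"
    print(f"{stratum}: aOR {aor} -> {direction} odds of imaging than the reference group")
```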

Some studies have called into question the routine practice of ordering urine drug screens for substance-related visits and laboratory studies in general for mental health-related visits, as they have rarely led to changes in management. The American Psychiatric Association (APA) and the American College of Emergency Physicians (ACEP) both support targeted diagnostic investigations for patients presenting with acute psychiatric symptoms, instead of routine testing. However, drug testing is often required as part of the initial assessment to enter treatment facilities, regardless of medical indication or emergency healthcare team preferences. Although most of the studies on this topic focused on mental health-related ED visits, the often-overlapping presentations of substance- and mental health-related visits argue for standardization of practices regarding diagnostic services. In terms of the use of imaging studies specifically, both ACEP and the APA support individual assessment of risk factors to guide brain imaging in the ED for mental health-related visits, due to the low yield of routine imaging. In contrast to our finding of substance use-related visits being associated with less use of imaging studies, previous work has shown a rising trend in the use of CT along with the rise of opioid-related visits. However, that study did not assess the use of CT in relation to a non-substance use reference group and did not include other imaging modalities. The lower rate of utilization of imaging studies could be explained by the possibility that imaging was not needed for management or disposition after completion of laboratory screening in substance use-related visits. In addition, since substance use-related visits occurred disproportionately after hours, imaging might not have been readily available in smaller centers. Visit hours were adjusted for as a potential confounder, so the latter explanation is considered less likely. Notably, the presence of a mental health disorder made it more likely for patients with a substance use diagnosis to undergo imaging studies. It is well documented that patients with serious mental health disorders have higher mortality rates than those without, attributable to both injuries and chronic diseases. It is, therefore, possible that additional imaging studies were needed because of increased medical complexity.

Furthermore, the presence of SUDs was associated with significantly increased rates of mental health consultations in the ED, which in turn have been shown to be associated with increased ED length of stay. These findings support the fact that healthcare is more costly for patients with mental health disorders or SUDs, highlighting the need to address physical and mental health in an integrated fashion. In fact, multiple studies have shown the effectiveness of case coordination and combined medical and behavioral health clinics in helping to decrease substance use- or mental health-related ED visits. Our study results should be interpreted in the context of several limitations. First, only associations and no causal relationships could be made due to the cross-sectional nature of the study. Second, it is possible that some substance use-related ED visits represented repeated visits over time, meaning the statistical methods used in the analysis could yield biased results away from the null. As the NHAMCS is an event-level database, it was not possible to ascertain this because data linkage could not be performed. Third, the study results relied heavily on ED reporting and ICD codes, which could be subject to inaccuracies and bias the results toward the null, although steps were taken to mitigate this through staff training. Fourth, due to limitations in sample size, detailed analyses of the specific types of diagnostic services or imaging modalities, with the exception of toxicology screening, were not done. Further studies incorporating data from previous years would be needed to obtain more granular data. Fifth, due to concerns about multiplicity, resource utilization patterns with respect to the subgroups of substances analyzed can only be used for hypothesis-generating purposes. Furthermore, improved screening strategies for substance use in the ED could have contributed to the increase in visits, following the emergence of evidence demonstrating improved outcomes associated with ED-initiated interventions, biasing the results away from the null. Finally, this study did not include information on ED-initiated substance use treatment or outpatient referral patterns over time, making it difficult to comment on specific strategies to help improve care for patients with SUD in the ED. In summary, many of the limitations arose from the design of the survey itself and were difficult to mitigate at the data analysis stage.

Marijuana and tobacco co-use is common among young adults. On average, young adults perceive marijuana as less harmful to health, less addictive, and more socially acceptable than tobacco, and are less ready to quit marijuana than cigarettes. While a few studies have found that marijuana users were less likely to quit smoking than non-users, others have found no significant differences in smoking outcomes between marijuana co-users and non-marijuana users. Previous research focused on general adult populations, collected data in person, and was conducted before the advent of widespread changes in marijuana legalization and social norms. It is unclear whether and to what extent marijuana use interferes with smoking cessation and related outcomes among young adults in an era of rapidly shifting laws and attitudes regarding marijuana. It is particularly important to study young adults in this context, because they are less likely to seek smoking cessation treatment and are more likely to use marijuana than are older adults.
Moreover, due to the stigma around marijuana use and its illegal status in many states, collecting data online may be a useful strategy to improve accuracy of self-reported marijuana use and to further examine its relationship with smoking cessation. Lastly, marijuana use has become increasingly accepted in society and increasingly common among cigarette smokers.

Given the widespread availability and acceptability of marijuana among young adults, current tobacco smokers may experience more difficulty quitting than those surveyed in previous decades. As such, this study uses data from a randomized controlled trial of the Tobacco Status Project (TSP), a smoking cessation intervention for young adults delivered on Facebook, to examine differences in smoking outcomes between marijuana users and non-marijuana users. Participants were young adult smokers who reported smoking 100+ cigarettes in their lifetime, currently smoking 1+ cigarettes per day on 3+ days per week, and using Facebook 4+ days per week, and who were English literate. Recruitment consisted of a paid Facebook ad campaign from October 2014 to July 2015. Clicking on an ad redirected participants to a confidential eligibility survey. Eligible, consented participants were randomly assigned to one of two conditions: 1) the Tobacco Status Project intervention, or 2) referral to the National Cancer Institute’s Smokefree.gov website. Participants in both conditions were included in all analyses except treatment engagement and perceptions. TSP included assignment to a private Facebook group tailored to participants’ readiness to quit smoking, daily Facebook contact with study staff, weekly live counseling sessions, and six additional Cognitive Behavioral Therapy counseling sessions for those ready to quit. Study staff posted once a day for 90 days and participants were asked to comment on the posts. Post content varied by readiness to quit smoking and included strategies informed by the Transtheoretical Model and the U.S. Clinical Practice Guidelines for smoking cessation. Participants were emailed follow-up surveys at 3, 6, and 12 months after the study began. This research was approved by the University of California, San Francisco Institutional Review Board. Nicotine dependence was assessed using the 6-item Fagerström Test of Cigarette Dependence, scored on a scale of 0 to 10, from low to heavy dependence. Daily smoking at baseline was measured with the item, “On average, how many days in a week do you smoke cigarettes?”. Responses were recoded into daily smoking or non-daily smoking. The Smoking History Questionnaire assessed early smoking as well as the usual number of cigarettes smoked per day. The Stages of Change Questionnaire was used to categorize participants into one of three stages of change based on their readiness to quit smoking at baseline. Alcohol is another substance commonly used by young adults, and use of alcohol can co-occur with tobacco and/or marijuana. Hence, we measured alcohol use for possible inclusion as a covariate in the models, using the item, “Have you consumed alcohol in the past 30 days?”. Current marijuana use was measured at each time point using the Staging Health Risk Assessment, based on the Transtheoretical Model stages of change and the Healthy People 2020 goals for the United States.
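As an illustration of how the baseline smoking measures described above might be recoded for analysis, the sketch below turns the days-per-week item into the daily/non-daily indicator and carries the FTCD score through; the field names, the 7-days-per-week threshold for "daily", and the example records are assumptions for illustration, not values taken from the trial dataset.

```python
from dataclasses import dataclass

@dataclass
class BaselineSurvey:
    days_smoked_per_week: int   # "On average, how many days in a week do you smoke cigarettes?"
    ftcd_score: int             # Fagerstrom Test of Cigarette Dependence total, 0 (low) to 10 (heavy)

def recode(survey: BaselineSurvey) -> dict:
    return {
        # Recode into daily vs. non-daily smoking (assumes "daily" means 7 days per week).
        "daily_smoker": survey.days_smoked_per_week >= 7,
        "ftcd_score": survey.ftcd_score,
    }

# Hypothetical example records.
print(recode(BaselineSurvey(days_smoked_per_week=7, ftcd_score=5)))  # daily smoker
print(recode(BaselineSurvey(days_smoked_per_week=3, ftcd_score=2)))  # non-daily smoker
```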

School enrollment characteristics were not related to the presence of marijuana co-marketing

That task is a measure of executive function requiring inhibition of a prepotent interfering behavioral response. This effect could not be attributed to differences in HIV-related clinical factors. There were no differences between HIV+ women and men with and without depression in tests of psychomotor speed/attention and motor skills. The domain that was most vulnerable among HIV+ depressed women was a measure of executive function that relies on select areas of the cognitive control network (CCN), in particular the rostral anterior cingulate cortex (ACC) and the dorsolateral PFC, which are invoked during inhibitory tasks such as Stroop interference42,43. Neurobiological features of depression contributing to cognition include glucose metabolism in the PFC44 and functional alterations of the ACC during cognitive task performance. An event-related functional magnetic resonance imaging study involving an in-scanner version of the Stroop revealed hyperactivity in the rostral ACC and left dorsolateral PFC in patients with unipolar depression versus healthy participants, and those alterations in brain function correlated with Stroop interference. This pattern of regional hyperactivity can be induced by lowering serotonin levels with tryptophan depletion, and can be reversed with the antidepressant escitalopram. Although causality cannot be determined in the present study, other work suggests that decreased levels of serotonin alter ACC and PFC function to influence performance on inhibitory tasks. These functional brain alterations partially overlap with the HIV-associated alterations in brain circuitry. Multiple neurobiological features of HIV infection, including chronic neuroinflammation, reduction of trophic factors, and alterations in dopamine and other neurotransmitters, can contribute to depression in HIV.

Mechanistically, neuroinflammation and impaired neurogenesis are key features of depression and HIV and are contributors to NCI. Similarly, alterations in hypothalamic-pituitary-adrenal (HPA) axis function can contribute to NCI in depression and HIV. In our previous publication using this same sample, we demonstrated that although HIV+ women show cognitive vulnerabilities in several domains versus HIV+ men, they show no vulnerability on the Stroop. The current data show that it is only in the context of depression that they show greater vulnerability on Stroop color-word [interference], a task reliant on the CCN, compared to depressed HIV+ men as well as depressed HIV- men and women. Biological explanations for this selective vulnerability may include females’ greater sensitivity to the negative effects of inflammation-induced depressed mood. Inducing inflammation via endotoxin exposure leads to increased depressed mood and neural activity in the ACC in healthy females but not males. Converging evidence from preclinical models also demonstrates that the adult female brain has more microglia with an activated phenotype versus the male brain. Microglia play a critical role in maintaining homeostasis in the presence of a number of factors including infection or injury. Sexual dimorphisms in genetic variations in the dopaminergic system may also contribute to a female-specific vulnerability in cognitive control. The catechol-O-methyltransferase gene and the dopamine receptor D2 gene interact with sex on cognitive control behavioral measures. Transcriptional signatures in brain regions of the CCN in MDD also differ by sex. Lastly, sex differences in HPA axis and/or immune alterations may contribute to these findings. For example, cortisol levels negatively relate to executive function in HIV- women but not men. The tighter coupling of depression and HIV in women compared to men suggests a tighter coupling of the neural manifestations of HIV and depression in women than in men, which might explain the greater cognitive effect of these comorbidities in women. There are also non-biological explanations for the decreased executive function among HIV+ depressed women versus all other groups. Depressed HIV+ men could have had greater access to mental health services than depressed HIV+ women, and this treatment may have minimized the cognitive sequelae of depression in men.

That explanation does not, however, account for the specificity of the findings to Stroop color-word [interference] but not other tests. Second, depression among HIV-positive women may have the greatest adverse effects on cognitively demanding tasks regardless of domain. Of the tasks administered, Stroop color-word [interference] was the most difficult. Third, we used the same CES-D cutoff for men and women, though some argue in favor of a lower cutoff for men than women. Whether a different pattern of findings would emerge with sex-specific cutoffs is unknown. Lastly, performance on Stroop color-word [interference] and possibly other outcomes may have been influenced by unusual patterns within the HIV- depressed men, who showed lower performance than HIV+ depressed men on several tests. Even if these patterns did not lead to the emergence of any other three-way HIV-serostatus X Sex X Depression interactions, they may have led to the lack of two-way HIV-serostatus X Depression interactions. HIV- depressed men were more likely than HIV+ depressed men to be heavy alcohol users, smoke, and use cannabis and cocaine/crack, but those factors did not account for the three-way interaction on Stroop color-word [interference]. HIV+ depressed men may also have had better engagement in care due to their HIV status versus HIV- depressed men. We also found that elevated depression, regardless of HIV status or biological sex, was negatively associated with psychomotor speed/attention, executive function, and motor skills. These findings are consistent with studies in HIV- individuals demonstrating that the primary NCI domains among depressed individuals are psychomotor speed/attention and executive function; sex differences were not examined. In HIV, similar patterns are seen among mixed samples of HIV+ and HIV- individuals. Overall, MACS men compared to WIHS women were more likely to report ever being depressed. Furthermore, HIV serostatus was associated with higher depression rates in women, while in men depression rates did not differ by HIV serostatus. This finding seems unexpected because the depression rate is twice as high in women as in men. Similarly, in the few studies of sex differences in depression among PWH, HIV+ women have higher depression rates and more severe depressive symptoms versus HIV+ men.
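To make the interaction terminology concrete, a three-way HIV-serostatus X sex X depression interaction on a binary cognitive outcome can be specified as in the sketch below; the synthetic data and variable names are hypothetical, and this is not the analysis model used in the cohorts described here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data purely for illustration; not MACS/WIHS variables.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "hiv": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "depressed": rng.integers(0, 2, n),
})
# Hypothetical outcome: impairment is most likely when all three factors co-occur.
logit_p = -1.5 + 0.3 * df.hiv + 0.2 * df.depressed + 1.0 * df.hiv * df.female * df.depressed
df["impaired"] = rng.binomial(1, (1 / (1 + np.exp(-logit_p))).to_numpy())

# The hiv:female:depressed term tests whether the depression-impairment association
# differs jointly by HIV serostatus and sex (a three-way interaction).
model = smf.logit("impaired ~ hiv * female * depressed", data=df).fit(disp=0)
print(model.params.filter(like=":"))
```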

In most studies, the sample sizes were smaller than in the present study, so this study might provide more reliable estimates. However, men in the present study had more opportunities to develop depression because they were followed for a longer period of time than women. When restricting our analysis to crack/cocaine non-users, men still had higher levels of depression versus women despite having fewer visits than women. A likely explanation for the higher frequency of depression in MACS men is that the MACS includes primarily sexual minority men, whereas the WIHS includes primarily heterosexual women. In both sexes, the prevalence of depression is higher among sexual minorities than among heterosexuals. The high prevalence of depression in sexual minorities is associated with stress exposure resulting from stigma and lack of social support. In the MACS, men are predominantly Black and all are gay or bisexual. Notably, even though depression was more frequent among HIV+ men, the increased frequency among HIV+ men did not increase NCI in any domain versus either depressed HIV+ women or HIV- men. Moreover, accounting for HIV RNA, which was higher in depressed HIV+ men than in non-depressed HIV+ men, did not account for the pattern of NCI correlates. This study has a number of limitations, including the limited cognitive battery, unmeasured confounders, and the use of a self-report measure of depression. The preferred diagnostic interview to assess depression was unavailable in both cohorts. Additionally, we did not assess other diagnostic comorbidities commonly co-occurring with depression, including anxiety and substance use disorders. Finally, while there were differences in the data collection time frame in the two cohorts, it is unlikely that these differences led to a bias towards or against visits completed while a participant was depressed, as depressive symptom trajectories are relatively stable in individuals. Despite these limitations, few studies have sufficient statistical power to examine whether the depression-NCI associations differ by HIV serostatus and sex. To our knowledge, this is the largest study examining sex and depressive symptoms as contributors to NCI in PWH. The importance of this topic is evident in the high frequency of depression and in the finding that overall depression is associated with impairment in psychomotor speed, executive function, and motor function. Focusing on sex differences is important because for women, the association between depression and executive function was particularly strong, increasing the odds of impairment 5-fold. This pattern was the case even though depression rates were higher in men regardless of HIV serostatus. Findings indicate that depression is an important prevention and treatment target and that improved access to psychiatric and psychological services may help minimize the influence of this comorbidity on NCI.

More high school students smoked little cigars and cigarillos (LCCs) than cigarettes in 33 US states in 2015. Concern is growing about co-use of tobacco and marijuana among youth, particularly among African-American youth. In a 2015 survey, for example, one in four Florida high school students reported ever using cigars or cigar wraps to smoke marijuana. One colloquial term for this is a “blunt.” Adolescent cigar smokers were almost ten times more likely than adults to report that their usual brand offers a flavored variety. Since the US ban on flavored cigarettes, the number of unique LCC flavors has more than doubled.
Anticipating further regulation, the industry increasingly markets flavored LCCs with sensory and other descriptors that are not recognizable tastes. For example, after New York City prohibited the sale of flavored cigars, blueberry and strawberry cigarillos were marketed as blue and pink, but contained the same flavor ingredients as prohibited products. Among the proliferation of such “concept” flavors, anecdotal evidence suggests that references to marijuana are evident. Cigar marketing includes the colloquial term, “blunt”, in brand names and product labels. Other marketing techniques imply that some brands of cigarillos make it easier for users to replace the contents with marijuana. For example, the image of a zipper on the packaging for Splitarillos and claims about “EZ roll” suggest that products are easily manipulated for making blunts.

We use the term “marijuana co-marketing” to refer to such tobacco industry marketing that may promote dual use of tobacco and marijuana and concurrent use. In addition to flavoring, low prices for LCCs also likely increase their appeal to youth. In California, 74% of licensed tobacco retailers sold cigarillos for less than $1 in 2013. Before Boston regulated cigar pack size and price in 2012, the median price for a popular brand of grape-flavored cigars was $1.19. In 2012, 78% of US tobacco retailers sold single cigarillos, which suggests that the problem of cheap, combustible tobacco is widespread. Additionally, the magnitude of the problem is worse in some neighborhoods than others. Popular brands of flavored cigarillos cost significantly less in Washington DC block groups with a higher proportion of African Americans and in California census tracts with lower median household income. For the first time, this study examines neighborhood variation in the maximum pack size of cigarillos priced at $1 or less and assesses the prevalence of marijuana co-marketing in the retail environment for tobacco. School neighborhoods are the focus of this research because 78% of USA teens attend school within walking distance of a tobacco retailer. In addition, emerging research suggests that adolescents’ exposure to retail marketing is associated with greater curiosity about smoking cigars and higher odds of ever smoking blunts. The Table summarizes descriptive statistics for store type and for schools, as well as mixed models with these covariates. Nearly half of the LCC retailers near schools were convenience stores with or without gasoline/petrol. Overall, 61.5% of LCC retailers near schools contained at least one type of marijuana co-marketing: 53.2% sold blunt wraps, 27.2% sold cigarillos marketed as blunts, and 26.0% sold blunt wraps, blunts, or other LCCs with a marijuana-related “concept” flavor. After adjusting for store type, marijuana co-marketing was more prevalent in school neighborhoods with lower median household income and with a higher proportion of school-age youth. Nearly all LCC retailers sold cigarillos for $1 or less. The largest pack size at that price contained 2 cigarillos on average. The largest packs priced at $1 or less were singles in 10.9% of stores, 2-packs in 46.8%, 3-packs in 19.2%, 4-packs in 5.5%, and 5 or 6 cigarillos in 5.5%. After adjusting for store type, a significantly larger pack size of cigarillos was priced at $1 or less in school neighborhoods with lower median household income and near schools with a lower proportion of Hispanic students. In California, 79% of licensed tobacco retailers near public schools sold LCCs, and approximately 6 in 10 of these LCC retailers sold cigar products labeled as blunts or blunt wraps or sold cigar products with a marijuana-related flavor descriptor.
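The store-level measure described above, the largest cigarillo pack size priced at $1 or less, can be operationalized as in the short sketch below; the store records and price points are hypothetical and serve only to illustrate the computation.

```python
# Hypothetical observations: for each store, the (pack_size, price_usd) offers recorded.
stores = {
    "store_A": [(1, 0.99), (2, 0.99), (3, 1.49)],
    "store_B": [(1, 0.79), (5, 0.99)],
    "store_C": [(2, 1.19)],
}

def max_pack_at_or_under_one_dollar(offers):
    """Largest pack size sold for $1.00 or less; 0 if no offer qualifies."""
    qualifying = [size for size, price in offers if price <= 1.00]
    return max(qualifying, default=0)

for store_id, offers in stores.items():
    print(store_id, max_pack_at_or_under_one_dollar(offers))
# store_A 2, store_B 5, store_C 0
```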

No protocol or prespecified analysis plan was registered for this study

PWID who experience homelessness are subject to additional structural and environmental barriers—such as poverty and exposure to violence—that amplify the IDU-related harms they face. Further, among PWID, experiencing homelessness is associated with an additional elevated risk of acquiring HIV and hepatitis C. While preventing injection-naïve individuals from transitioning into IDU has long been a public health goal, better characterizing the role of homelessness in transitions into IDU could directly inform strategies to respond to some of the upstream drivers of IDU-related morbidity and mortality. Recent research has highlighted the key role of experienced PWID in assisting injection-naïve individuals initiating IDU. Across study samples, between 75% and 95% of PWID reported that their IDU initiation was assisted by established PWID. While research has demonstrated that experiencing an episode of homelessness in the past six months increases the risk that injection-naïve individuals initiate IDU, there is a lack of research concerning the relationship between recent homelessness and the provision of IDU initiation assistance among PWID. In fact, prior studies of IDU initiation assistance have operationalized recent homelessness or housing status as a covariate to be controlled for in subsequent analyses, rather than as a critical factor in and of itself. In the present study, we therefore assessed the association between recent homelessness and providing IDU initiation assistance among PWID from two cities in North America.

Preventing Injecting by Modifying Existing Responses (PRIMER) is a multi-cohort, multi-country, mixed-methods study with a primary aim of identifying socio-structural factors that influence the likelihood that PWID help injection-naïve individuals inject for the first time. For this study, data were drawn from four PRIMER-affiliated longitudinal cohort studies in Tijuana, Mexico, and Vancouver, Canada. In Tijuana, PRIMER was conducted within the Proyecto El Cuete (ECIV) cohort study.

For ECIV, at baseline, all participants were at least 18 years old, had reported IDU in the prior month, spoke at least Spanish or English, were residing in Tijuana with no plans to relocate, and were not participating in any other intervention studies . In Vancouver, data were collected within three ongoing cohort studies: the At-Risk Youth Study ; the AIDS Care Cohort to Evaluate exposure to Survival Services study; and the Vancouver Injection Drug Users Study . For ARYS, recruited participants were between the ages of 14 and 26, reported illicit drug use in the past month, and reported recently being homeless or accessing services intended for homeless youth at baseline . For ACCESS, recruited participants were at least 18 years old, HIV seropositive, and reported illicit drug use at baseline . For VIDUS, recruited participants were at least 18 years old, HIV seronegative, and reported IDU on at least one occasion in the past month at enrolment. At recruitment and semiannually thereafter, all participants of these PRIMER-affiliated cohort studies completed interviewer-administered questionnaires that capture participant-reported information on socio-demographic characteristics and drug use behaviors. Starting in late 2014, corresponding cohort questionnaires were amended under PRIMER to add survey items concerning participants’ experiences with providing injection initiation assistance to others. The first interview completed by a participant involving the PRIMER items on injection initiation assistance is referred to as that participant’s PRIMER baseline interview . The present study includes data collected on ECIV and ARYS/ ACCESS/VIDUS participants from 2014 to 2017. The PRIMER study was approved by the Institutional Review Board of the University of California San Diego . It is also important to highlight that the dynamics of homelessness and IDU are different across these two sites. While there are challenges in estimating the number of people who experience homelessness, estimates indicate that, at minimum, several thousand individuals experience homelessness each year in both Tijuana and Vancouver .

In Vancouver, homelessness and IDU are concentrated and highly visible in the Downtown Eastside neighborhood. This centralization reduces barriers to recruiting and providing resources to PWID. In Tijuana, by contrast, homelessness is more dispersed, and encampments that do arise are frequently subject to law enforcement interaction. As such, our study reflects on the relationship between homelessness and IDU initiation assistance provision across two heterogeneous settings, expanding the potential generalizability of our findings. Our study was restricted to members of the ECIV and ARYS/ACCESS/VIDUS cohorts who: 1) completed a PRIMER baseline interview within the study window; 2) reported a history of IDU at baseline; and 3) completed at least one follow-up visit six months after baseline. Eligible participants contributed a minimum of 1 and a maximum of 5 follow-up visits. If a participant had missing baseline data for any time-varying measure, then baseline was redefined to be that participant’s first subsequent visit with complete data. All subsequent PRIMER follow-up visits within the study period with complete data for a participant were included. If a participant had missing data for a follow-up visit, then that follow-up visit and all subsequent follow-up visits for that participant were excluded from the analysis. The outcome of interest was recent provision of IDU initiation assistance. To operationalize this measure, participants were asked if they had helped an injection-naïve individual inject for the first time in the past six months. This question is intended to capture participants’ recent experiences with direct assistance and/or indirect assistance. The exposure of interest was recent homelessness, defined via self-report as experiencing an episode of homelessness in the past six months. Due to differences in the cohort questionnaires by setting, the self-reported exposure was measured differently for participants from Tijuana and Vancouver. In Tijuana, participants were given a set of locations and asked to mark all the places they had lived or slept in the past six months.

Participants who reported having lived or slept in their workplace, in a vehicle, in an abandoned building, in a shelter, on the streets, or in a shooting gallery in the past six months were deemed to have recently experienced homelessness. In Vancouver, participants were asked a single yes/no question: “Have you been homeless in the last six months?” with those responding “yes” deemed to have recently experienced homelessness. Both the exposure and outcome were repeatedly assessed at each visit over follow-up. We identified a set of covariates a priori that might confound the relationship between our exposure and outcome of interest based on prior literature. The set consists of both baseline-fixed and time-varying covariates. Baseline-fixed covariates included: age, gender, and cohort. Time-varying covariates included: whether participants reported being stopped by law enforcement in the past six months; whether participants reported being incarcerated in the past six months; whether participants reported IDU in the past six months; and, for Vancouver participants only, neighborhood of residence. Excluding the baseline-fixed covariates, values of all other variables were allowed to vary over time to reduce misclassification bias. Prior to any analyses, time-varying covariate values at a given follow-up visit t were recoded to their corresponding value at visit t-1 occurring six months earlier. This lagging was done to ensure that covariate measurement always preceded both exposure and outcome measurement at the same visit. Due both to differences in underlying study design and to how the exposure was defined between the Tijuana and Vancouver cohorts, all analyses described herein were undertaken separately by setting. Given our interest in estimating the effect of recent homelessness on recent provision of injection initiation assistance, it is important to note that traditional regression-based approaches to control for measured confounding may yield a biased effect estimate when the set of covariates includes time-varying variables that are caused by prior exposure and also influence subsequent exposure and outcome values. This consideration is relevant to our study, as we have measured several time-varying covariates that may satisfy these criteria; for example, recent IDU may be both a consequence of prior homelessness and a confounder of subsequent homelessness and providing injection initiation assistance. Alternatively, an unbiased estimate of the effect of interest can be obtained from an inverse-probability-of-treatment-weighted marginal structural model, which accounts for baseline-fixed and time-varying confounding via weights. Estimation of this marginal structural model requires two steps: first, we calculate stabilized inverse-probability-of-treatment weights for each person-visit occurring after baseline to account for confounding; and, second, we fit a generalized estimating equations logistic regression model to the weighted sample to estimate the parameters of a marginal structural model. IPTWs are calculated in order to evenly distribute potential confounders across the different treatment groups – the application of the weights to the study sample generates an artificially balanced pseudo-sample in which recent homelessness status is independent of all measured confounders. The IPTW approach is particularly appropriate given that it can effectively account for confounding caused by time-varying measures in longitudinal analyses.
See the Supplemental Materials for full details on the calculation of IPTWs. Next, we fit a GEE logistic model to the inverse-probability-of-treatment-weighted sample with our repeated measures outcome regressed on terms for exposure , time , and all baseline-fixed covariates . A first-order autoregressive working correlation matrix was specified to account for repeated measures within participants – meaning that the model assumed that a participant’s outcome response at follow-up visit t was correlated with their outcome response at visit t-1 .
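As a rough illustration of this two-step procedure (a sketch under assumed data structures, not the authors’ code), the snippet below uses hypothetical column names (pid, visit, homeless, assisted_initiation, and lagged covariates) and Python’s statsmodels; the published analysis was likely implemented differently, and the actual weight models are detailed in the Supplemental Materials.

```python
# Minimal sketch of the IPTW-weighted marginal structural model described above.
# All column names are hypothetical; the real weight models follow the Supplemental Materials.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("primer_visits.csv").sort_values(["pid", "visit"])  # one row per person-visit

# Lag time-varying covariates so covariate measurement (visit t-1) precedes exposure/outcome at visit t.
lagged = ["police_stop", "incarcerated", "recent_idu"]
for col in lagged:
    df[col + "_lag"] = df.groupby("pid")[col].shift(1)
df = df.dropna(subset=[c + "_lag" for c in lagged])

# Step 1: stabilized inverse-probability-of-treatment weights for recent homelessness.
num = smf.logit("homeless ~ age + C(gender) + C(cohort)", data=df).fit(disp=0)
den = smf.logit("homeless ~ age + C(gender) + C(cohort) + police_stop_lag"
                " + incarcerated_lag + recent_idu_lag", data=df).fit(disp=0)
p_num = np.where(df["homeless"] == 1, num.predict(df), 1 - num.predict(df))
p_den = np.where(df["homeless"] == 1, den.predict(df), 1 - den.predict(df))
df["ratio"] = p_num / p_den
df["sw"] = df.groupby("pid")["ratio"].cumprod()  # cumulative product over a participant's visits

# Step 2: weighted GEE logistic model (AR(1) working correlation, sandwich SEs clustered by pid).
msm = sm.GEE.from_formula(
    "assisted_initiation ~ homeless + visit + age + C(gender) + C(cohort)",
    groups="pid", data=df, weights=np.asarray(df["sw"]),
    family=sm.families.Binomial(), cov_struct=sm.cov_struct.Autoregressive())
res = msm.fit()
print(np.exp(res.params["homeless"]))  # adjusted odds ratio for recent homelessness
```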

Assuming the absence of model misspecification, unmeasured confounding, and informative censoring, the inverse-probability-of-treatment-weighted GEE coefficient estimates estimate the corresponding causal parameters of a marginal structural model . In other words, under these assumptions, the exponentiated exposure coefficient estimate from our weighted model – which is an adjusted odds ratio – may be interpreted as the relative effect of recent homelessness on a participant’s odds of providing injection initiation assistance over the same six-month period . Corresponding 95% confidence intervals were calculated for effect estimates using robust sandwich-type standard errors with clustering by participant. We performed two sensitivity analyses to assess the influence of measured confounding on our estimates of the association between recently experiencing homelessness and recently providing IDU initiation assistance: first, to assess the influence of measured time-varying confounding, we ran the GEE logistic model as described above without the IPTWs; second, to assess the influence of measured time varying and baseline-fixed confounding, we ran the GEE logistic model as described above without the IPTWs and without adjusting for baseline-fixed covariates. We identified 703 eligible participants in Tijuana and 1551 eligible participants in Vancouver . At baseline, 12.5% of participants in Tijuana and 23.3% of participants in Vancouver reported experiencing homelessness in the past six months. Individuals in Vancouver who reported recent homelessness at baseline were younger on average than those who had not . In both Tijuana and Vancouver, individuals who reported recent homelessness at baseline reported higher prevalence of being stopped by police in the past six months. In Vancouver, 21.0% of those who recently experienced homelessness at baseline reported recent incarceration versus just 3.0% of those who did not report recent homelessness. In Tijuana, past six-month IDU was more prevalent among those reporting recent homelessness at baseline than those who did not . The same was true in Vancouver, where 80.9% of recently homelessness participants reported IDU in the past six months compared to 58.9% of participants who did not report recent homelessness. A higher proportion of those reporting recent homelessness at baseline also reported recently providing IDU initiation assistance in both Tijuana and Vancouver . The median number of follow-up visits was 5 in Tijuana and 4 in Vancouver . At a given follow-up visit, between 11.6% and 16.5% of participants in Tijuana and between 9.4% and 18.9% of participants in Vancouver reported experiencing homelessness in the past six months. Between 3.3% and 5.4% of participants in Tijuana and 2.5% and 4.1% of participants in Vancouver reported recently assisting an IDU initiation at each follow-up visit. In Tijuana and Vancouver, respectively, 79 and 150 participants reported assisting at least one IDU initiation across the study period, with 19 and 28 of these participants reporting recent injection initiation assistance provision at multiple follow-up visits. In Tijuana, at a given follow-up visit, between 12.5% and 30.4% of participants who reported recently assisting a first-time injection also reported recent homelessness during the same six-month period. 
In Vancouver, through the first 4 follow-up visits between 18.4% and 36.5% of participants reporting recently assisting IDU initiation also reported recent homelessness, though this fell to 5.6% at the 5th follow-up.

All measures were pilot tested with adolescents of the same age and demographics of our sample

There is also concern that using both marijuana and tobacco at the same time can reinforce the rewarding effects of both substances. Using a sample of 9th and 12th grade students recruited from California schools, this study addresses important gaps in the literature by first reporting adolescents’ rates and patterns of use of and access to marijuana, blunts, and cigarettes. Second, this study examines and compares adolescents’ perceived prevalence, social acceptability, and risks and benefits of marijuana, blunts, and cigarettes. Lastly, this paper assesses to what extent these factors are associated with actual use of marijuana. Such information is important in order to inform the creation of better education and warning messages, especially as marijuana and blunt use increases in popularity and moves from an illicit drug to a legal drug for recreational use. This study utilized a convenience sample, in which we recruited participants from 10 large high schools throughout California. These schools were diverse with respect to geographic location, race/ethnicity, and socioeconomic status, and were willing to participate in the study. Researchers introduced the study and invited all 9th and 12th graders to participate, during which time they provided students with consent forms for parents and students 18 and over, assent forms for students under age 18, and project information to take home and discuss with their parents/guardians. Approximately 4,000 students learned about the study, of whom 1,299 returned signed consent/assent forms; 405 of the consented students were disqualified from the study because of incorrect contact information, being in the wrong grade, or non-response to subsequent contact. Overall, 786 of the eligible consented students completed the survey. There were some small but non-meaningful racial/ethnic differences between those who did and did not complete the survey; however, there were no differences by mother’s education.

The sample size was designed to allow sufficient power to detect the contrasts of interest. The sample included 484 females and 281 males; mean age = 16.1. Participants were ethnically diverse, with 207 White, 171 Asian/Pacific Islander, 232 Hispanic, and 168 other. Demographics of the students who participated in the study reflected the demographic make-up of their respective schools. The survey included 125 questions addressing a number of research questions and took participants between 30 and 60 minutes to complete. Participants were allowed as much time as they wanted to complete the survey, although they were encouraged to complete the survey at one time to increase confidentiality of their responses. Only those measures related to the current study are reported here. Comprehensive results regarding the cigarette use data can be found in Roditis et al. Many measures were derived from past surveys on adolescents’ attitudes towards substances, including those that have tested the validity of the assessments. Participants indicated items that were not clear, and then we revised the survey and pilot tested it again until all measures were clear. Most items were continuously scored; the few that were dichotomized are noted below. Differences in perceptions of risks and benefits and social norms across products were assessed using a generalized linear model with the generalized estimating equation method and an exchangeable correlation matrix to adjust the variance estimates for non-independence within school, as implemented in Proc Genmod of SAS v9.4. Post hoc testing utilized Tukey-Kramer tests. The relationship among marijuana use, perceptions of social norms, risks and benefits, and viewing of ads on social media was assessed using logistic regression. The outcome variable, marijuana use, was coded into two categories: never used and ever used. Predictor variables included perceived prevalence variables; perceived risk and benefit variables, factor-analyzed into the categories of health and social risks, benefits, and risk of addiction; and awareness of social media attitudes and beliefs related to marijuana. Age, sex, and race/ethnicity were also included in the model; however, interactions with sex and race/ethnicity were not significant and therefore were removed in the final model.
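For illustration only (hypothetical variable names, not the authors’ SAS code), the logistic model described above could be sketched as follows:

```python
# Ever marijuana use regressed on perceived peer prevalence, the factor-analyzed
# risk/benefit scores, social-media awareness, and demographics (hypothetical columns).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("adolescent_survey.csv")
fit = smf.logit(
    "ever_marijuana ~ perceived_peer_use + health_social_risk + benefits"
    " + addiction_risk + social_media_awareness + age + C(sex) + C(race_ethnicity)",
    data=survey).fit()
print(np.exp(fit.params))  # odds ratios for each predictor
```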

Missing data, which were negligible and varied item to item, were left missing. SPSS version 23 was used for the descriptive analyses. There were significant differences in participants’ reports of mother, father, sibling, and friend use of these products. Participants reported lower rates of marijuana and blunt use and higher rates of cigarette use among adult figures in their lives. Conversely, participants reported much higher rates of marijuana than cigarette use among friends. They perceived significant differences in rates of use among peers, reporting that 50.92% of their peers had ever used marijuana, 42.63% had ever used blunts, and 34.43% had ever used cigarettes. Participants viewed marijuana and blunts as more socially acceptable than cigarettes. Participants rated cigarettes as being overall more harmful to their health, more harmful to their friends’ health, more harmful to the environment, and more addictive than marijuana or blunts. Post-hoc analyses showed that participants perceived marijuana as more harmful to the environment than blunts, and perceived blunts as more likely to lead to addiction than marijuana. Participants viewed marijuana and blunts as similarly risky when it comes to their and their friends’ health. Generally, participants rated marijuana and blunt use as less likely to result in short-term health risks than cigarettes, with post-hoc analyses showing that they viewed marijuana and blunts as similarly risky. Participants also rated marijuana and blunt use as less likely than cigarettes to result in the short-term social risks of friends getting upset and bad breath. Participants reported no difference in the likelihood of getting in trouble from using marijuana, blunts, or cigarettes. Adolescents rated marijuana and blunts as more likely to confer the social benefits of looking cool and fitting in than cigarettes, though they rated all products as equally likely to make them look mature. Participants rated marijuana and blunts as less likely to make them feel jittery or nervous, more likely to reduce stress, and more likely to make them feel high or buzzed than cigarettes. They rated all three products as equally likely to help with concentration. Marijuana and blunts were rated as less addictive and easier to quit than cigarettes.

A similar number of participants reported seeing messages on social media about the risks and benefits related to marijuana use. Additionally, 34.4% reported seeing messages about risks related to blunt use and 28.6% reported seeing messages on benefits related to blunt use. A smaller number of participants reported actively posting online about these products, with 13% posting about the risks related to marijuana use, 10.9% posting about the benefits, 10.1% posting about the risks of blunt use, and 4.6% posting about the benefits of blunts. Use rates in this study were highest for marijuana, followed by blunts and cigarettes. Most adolescents who use these products get them from friends, use them in friends’ houses, and use them when they feel stressed. Adolescents perceived lower marijuana and blunt use but higher cigarette use among parents. Conversely, adolescents perceived higher use of marijuana and blunts and lower use of cigarettes among their siblings and peers. These differences in perceived use may reflect current trends in adolescent marijuana and cigarette use nationwide, in which rates of cigarette use are much lower than marijuana use, with cigarette use continuing to decline, marijuana use remaining higher, and rates of marijuana use being higher among adolescents and young adults compared to adults. While approximately a quarter of participants report having used marijuana, they thought that more than half of their friends have used marijuana. Importantly, participants who reported that their friends used marijuana had 27% greater odds of using marijuana themselves. Previous studies also show relationships between friend drug use and adolescent drug use, and friend use is a powerful influence on adolescents’ social norms and acceptability of particular behaviors. The fact that participants report friend use rates of marijuana as double that of self-reported use may be reflective of changing social norms in which marijuana use is seen as an acceptable and common behavior, which, in turn, may be influencing decisions to use. Marijuana and blunts were generally perceived as more socially acceptable, less risky, and more beneficial than cigarettes. Although blunts contain nicotine and marijuana does not, adolescents did not perceive differences in the likelihood of becoming addicted to or being able to quit marijuana or blunts, although adolescents rated marijuana as more addictive than blunts. This is of particular importance, as it is possible that using both tobacco and marijuana together may actually increase the addictive potential of these products. While perceptions of benefits and addiction were not related to use in this study, perceptions of greater health and social risks were associated with lower odds of using marijuana. Other studies have also found risk perceptions to be related to use. The fact that perceptions of benefits were not related to use is surprising, as other studies have found perceptions of benefits to predict use.

It is possible that perceived social norms are more important drivers of adolescents’ decisions to use marijuana than perceived risks and benefits, despite the fact that these constructs are linked. While perceptions of benefits of marijuana were not related to use, seeing messages about the good things or benefits of marijuana use was associated with 6% greater odds of use. In contrast, despite adolescents seeing ads for both risks and benefits of marijuana, messages regarding risks were not related to use. It is possible that individuals who use marijuana are actively seeking and more aware of messages related to benefits of marijuana use. There are limitations to this study. The data are self-reported. Further, given the cross-sectional nature of our data, we cannot suggest a causal relationship between factors associated with marijuana use and marijuana use itself. Additionally, some of the factors associated with marijuana use have a confidence interval approaching 1.0. Finally, these data were collected throughout Northern and Southern California and thus are not nationally representative. Despite these limitations, this is one of the few studies to assess perceptions of social norms, risks, and benefits for marijuana, blunts, and cigarettes. Additionally, this study assessed how these factors as well as awareness of social media are related to marijuana use. Results from this study offer a number of important public health implications, particularly as states move towards legalization of marijuana for recreational use. As this occurs, states need to take adolescents’ perceptions of risks, benefits, social norms, and peer influences into account. Though there is mixed evidence on how legalization impacts adolescent marijuana use, advocates for marijuana legalization argue that legalization itself does not increase use among youth. However, there is no evidence that legalization alone does anything to decrease use or access among adolescents. The results from this study have a number of implications for prevention strategies. Perceived rates of marijuana use among friends are higher than participant self-reported use rates and reported national averages of adolescent use. This finding is similar to findings in the alcohol use literature, which finds that youth and young adults tend to overestimate rates of binge drinking. Importantly, dispelling this misperception has been an effective component of a number of social norms campaigns focused on reducing binge drinking on college campuses. This suggests that using a similar social norms marketing approach, in which youth learn that rates of marijuana use among peers are much lower than they think, may be a useful strategy to prevent use. In this study, both perceived friend use and having seen positive messages about marijuana were associated with greater odds that an adolescent used marijuana. These findings also suggest the need for marketing, education, and intervention strategies that specifically tackle social acceptability and peer use. This study also shows that adolescents perceive marijuana and blunts to be significantly less harmful than cigarettes, despite the fact that all of these products are combustible smoking products. Additionally, despite the fact that blunts contain nicotine, adolescents did not perceive these to be more addictive than marijuana.
These findings suggest that there is also a need for educational and marketing campaigns that realistically address what the risks of marijuana and blunt use are for both youth and adults, including risks of addiction. National, state, and local public health agencies should consider lessons learned from regulatory and informational strategies that have been used in tobacco control, and should implement such strategies before legalization occurs .

The HTC Vive Pro Eye VR headset was used to enable VR capabilities and collect eye-related data

The percentage of dogs weighing <20 kg in this study is higher in comparison to that previously reported. All 37 dogs included in this study had PDH. Dogs with ADH or iatrogenic HC have excess glucocorticoids, and possibly other adrenal-origin steroids synthesized in a neoplastic adrenal cortex. Dogs with ADH or iatrogenic HC have low-to-undetectable concentrations of pituitary and hypothalamic hormones. Dogs with PDH also have excess glucocorticoids, but the HC is secondary to excess pituitary ACTH. Concentrations of pituitary ACTH and its prohormones that are above the reference range at diagnosis remain above it or increase further with adrenal-directed medical treatments. Creatine kinase activity results were above the reference range at diagnosis in 75% of the dogs in which it was measured. In both humans and dogs, there are no data available about creatine kinase activity above the reference range in association with HC. However, in humans, higher concentrations of CK are described in people with myotonic dystrophy, and a mild increase was also reported in a case report of a Chow Chow with congenital myotonia. Myotonia, delayed muscle relaxation after voluntary contraction or percussion, occurs in humans, goats, horses, mice, and dogs. ACTH and proopiomelanocortin mutations occur in human dystrophic myotonia. In dogs, myotonic signs occur in association with various muscle diseases, as a congenital condition, and in association with HC. Fewer than 20 dogs with HC and concurrent myotonia or SMS have been reported. In the present study, the case summaries submitted by 14 colleagues from 10 institutions located in widely separated geographic areas yielded only 37 dogs with concomitant HC and SMS, underscoring the uncommon nature of this combination of conditions. The most common clinical musculoskeletal sign in dogs with HC is non-painful weakness, recognized by most owners as difficulty rising, abdominal distension, and reduced exercise tolerance. As many as 85% of dogs with HC have been considered weak by the owner and veterinarian, and it is assumed that most of the remaining 15% have a sub-clinical weakness.

Steroid-induced Type II muscle atrophy is common in dogs with iatrogenic and naturally occurring HC, and since muscle atrophy is a likely component of muscle weakness, it is unlikely that muscle rigidity is a direct consequence of cortisol action. Several possible mechanisms to explain SMS in dogs with HC have been proposed: intracellular potassium concentrations under the reference range, abnormal calcium metabolism, higher glucocorticoid-induced protein catabolism, and alterations in the synthesis of myofibrillar proteins. However, the pathogenesis remains unclear. Observing signs of progressive muscle stiffness is subjective for both owners and veterinarians. One report suggested that SMS appeared well after observing other clinical signs of HC. In another report, HC and SMS were diagnosed at about the same time. The time of SMS diagnosis versus time of HC diagnosis in the 37 dogs of this study varied. Twenty-three of 37 dogs were diagnosed with HC 1 month to 3 years before being diagnosed with SMS, 3 were diagnosed with HC and SMS at about the same time, and 11 dogs were diagnosed as having SMS 1 month to 1 year before being diagnosed with HC. Similar to the earlier reports, the limbs involved varied. The majority of dogs were affected in the pelvic limbs first, while 24% had all 4 limbs affected when diagnosed. All dogs diagnosed as having HC and SMS that underwent EMG examinations had myotonic discharges. In these dogs no other abnormalities were identified on muscle biopsies. Despite successful medical management of HC in 28 of 36 treated dogs, only 5 dogs exhibited “mild” SMS improvement, which was followed in each dog by persistent SMS. The SMS persisted or worsened from the time of diagnosis in 31 dogs, 19 of which were treated with sodium channel blockers, for example, mexiletine, and muscle-relaxing drugs, for example, methocarbamol, dantrolene, cyclobenzaprine, benzodiazepines, calcium antagonists, and L-carnitine. Such drugs have been efficacious in managing humans with myotonia.

One of 36 treated dogs included in this study received botulinum toxin, which was associated with mild muscle relaxation. Botulinum toxin has been beneficial in humans with myotonia. Two of 36 dogs treated with physiotherapy exhibited mild muscle relaxation, but there have been no reports of responses to physiotherapy in people with myotonia. The use of cannabinoids and acupuncture resulted in a mild to moderate improvement in 1 dog. Acupuncture has been of some value in humans. Use of cannabinoids in people with myotonia has been associated with some positive results. Too few dogs were treated with any single modality to draw conclusions about efficacy. In addition to worsening limb stiffness, 2 of 36 dogs developed progressive difficulty eating and drinking because of masticatory muscle involvement. Inability to eat or drink because of masticatory muscle involvement has not been previously described. One dog did have masticatory muscle abnormalities on EMG in a previous report, but clinical manifestations were not discussed. Masticatory muscle involvement has been described in both human myotonic dystrophy and myotonia congenita. Similar to the 2 dogs in this study, masticatory muscle involvement described in humans was preceded by leg muscle involvement. The goal of treating dogs for HC is resolution of clinical signs, achieved by lowering circulating cortisol concentrations. Whether dogs are treated with mitotane or trilostane, owner opinion is recognized as key when determining whether a dog’s signs have completely or partially resolved versus showing no response. There is no consensus on laboratory testing to aid in monitoring trilostane treatment for dogs with HC, and the ACTHst is recognized as ideal for monitoring mitotane treatment. Owner observations were a key component in managing the 35 dogs that survived >1 year. In addition, all dogs in this study treated with mitotane were monitored with ACTHst results, while dogs treated with trilostane were monitored with ACTHst results or prepill serum cortisol concentrations. The durations of survival after diagnosis of SMS in 3 dogs with HC in previous studies were 2383, 1902, and 1182 days, respectively. The median survival time from initial diagnosis of SMS in the dogs included in this study was 963 days.

The median survival time for dogs with HC when treated with trilostane or mitotane has been reported to be 549 to 998 days. Despite most dogs having persistent or worsening SMS, owners chose continued care. However, owners of 50% of the dogs in this study ultimately chose euthanasia because of persistent or worsening SMS, highlighting the impact that this condition can have on the dog’s and owner’s quality of life. There are several limitations of this study. None of the dogs in this study were treated with hypophysectomy or with drugs targeting the pituitary gland or hypothalamus. Limitations were also associated with the retrospective design of this project and the inclusion of cases from multiple hospitals. Multi-institutional case management was necessary because of the rarity of SMS with HC in a canine population. However, this factor introduced differences in data collection, follow-up, and case treatment based on clinician discretion and varying institutional protocols. Craving for substances is considered essential for understanding the pathogenesis and maintenance of addiction, as highlighted by the incentive salience model and by the inclusion of craving as a criterion for substance use disorder in the Diagnostic and Statistical Manual of Mental Disorders and the International Classification of Diseases. Nicotine craving specifically has been shown to predict lapse to cigarette smoking following cessation and is frequently identified by individuals as an important barrier to quitting and maintaining abstinence. Thus, craving represents a clinically important phenotype of nicotine addiction with great potential for intervention. Accurate assessment of craving is essential for the identification, management, and treatment of nicotine and tobacco product use and the use of other substances. In human laboratory studies, craving for nicotine and other abused substances is commonly measured using the cue-exposure paradigm. The translational value of the cue-exposure paradigm to the naturalistic environment is predicated on the observation that relapse to drug use is often precipitated by exposure to drug-related cues that provoke craving. However, naturalistic cues can be very complex and involve a number of contextual factors that are difficult to replicate in laboratory-based cue-exposure paradigms, limiting their ability to invoke a true craving state. New technologies such as virtual reality afford the opportunity to increase the ecological validity of cue-exposure paradigms through the implementation of interactive and immersive presentations of cues within the typical context of use, greatly enhancing our ability to invoke craving in the laboratory. Studies using VR cue-exposure have found great support for its effectiveness in inducing subjective, and in some cases objective, craving for tobacco, as well as alcohol, cannabis, and methamphetamine. Furthermore, despite decades of research, the field of addiction has yet to establish reliable, objective measures of craving.

A number of objective correlates of craving have been investigated, including psychophysiological and neurological measures with varying success. Attentional bias, or the ability of drug cues to capture the attention of the user, can be conceptualized as a behavioral marker of incentive salience and represents an objectively measurable and clinically important phenomenon for the study of addiction. Attentional bias toward smoking cues has been previously demonstrated among regular tobacco smokers, and importantly, it has been related to the risk of subsequent relapse following smoking cessation. Multiple theoretical models suggest that cue-induced subjective craving and attentional bias reflect closely linked underlying processes. Not surprisingly, measures of attentional bias have been shown to correlate with subjective craving. However, the method of assessment appears to be key—direct measures of attention such as the assessment of eye movement, exhibit larger craving correlations and greater reliability than indirect measures such as reaction time. Assessment within naturalistic settings has also independently improved the reliability and validity of attentional bias measurement; yet, the naturalistic constraints of these methods prohibit advanced clinical application of these paradigms. New technological advances in VR implementation allow for the assessment of eye movement in a noninvasive and cost-effective manner and demonstrate early success in distinguishing smokers and nonsmokers on the basis of eye fixations to smoking cues in a virtual world. Spontaneous eye blink rate represents another, much less studied, potential objective correlate of cue-induced craving. EBR has been closely linked with striatal dopaminergic function and has been advanced as a reliable, more cost-effective, and minimally invasive alternative to positron emission tomography to assess dopaminergic functioning. Dopamine release in the basal ganglia inhibits the spinal trigeminal complex, leading to increased EBRs, as demonstrated in both rat and human trials. In line with this theory, preclinical research has shown that direct dopaminergic agonists and antagonists increase and decrease EBRs, respectively. Furthermore, a PET study in monkeys found a strong positive correlation between EBRs and dopamine or D2 -like receptor availability in the striatum. Given the observed modulation of striatal dopamine during cue-elicited substance craving, it may be possible to detect NTP cue-induced dopamine changes through EBR measurement. Nonetheless, no studies to date have investigated this hypothesis. Lastly, pupillometry represents an additional potential objective craving correlate. Pupil dilation is an indirect measure of norepinephrine release from the locus coeruleus and is associated with reward processing, including sensitivity to rewards, and engagement of cognitive resources. Pupillary responses also seem to index changes in the allocation of attention and have been advanced as an ideal measure for related constructs that may not pass the threshold for overt behavior or conscious appraisal. To our knowledge, only one study has investigated pupillometry as a measure of response to substance cue-exposure. 
Kvamme et al found that pupillary bias toward alcohol versus neutral cues, but not subjective craving reports, predicted relapse to alcohol use in a sample of detoxified patients with alcohol dependence, suggesting that cue-induced changes in pupillometry may ultimately serve as a useful biomarker for addiction research and clinical care. This study was intended to outline the methods underlying the development of a novel VR-NTP cue-exposure paradigm with embedded eye-characteristic assessments. Preliminary analyses on a pilot sample of participants are also provided as a proof of concept for the potential utility of this paradigm for the induction of subjective craving in the laboratory, assessment of potential biomarkers of craving , and prediction of NTP use behaviors. The NTP Cue VR paradigm uses a virtual reality environment built using Unity. HTC’s SRanipal SDK was used in conjunction with Tobii’s Tobii XR SDK to provide access to various data from the eye tracker. Specifically, Tobii XR SDK handled object selections, determining what participants were looking at, with its Gaze-to-Object Mapping algorithm, while the rest of the data were retrieved from the SRanipal SDK.

Adverse events by symptom category did not significantly differ between medication groups

Randomization was stratified by sex and participant report of experiences with withdrawal-related dysphoria. This a priori stratification variable was intended to capture the “dark side of addiction”, whereby individuals reporting withdrawal-related dysphoria were estimated to experience greater dysfunction of the immune system. MediciNova, Inc. supplied ibudilast and placebo for the trial but did not provide any financial support for the study. The UCLA Research Pharmacy prepared and dispensed all study medication in blister packs. Research staff, participants, and providers remained blind to medication condition during the trial. Participants were titrated on ibudilast as follows: 20 mg BID during days 1–2 and 50 mg BID during days 3–14. The target medication dose was selected based on safety considerations as well as preclinical and clinical data. Medication compliance was monitored through pill counts and self-report via DDA. Side effects were closely monitored and reviewed by study physicians. During the in-person screening visit, participants completed a set of assessments for individual differences and eligibility screening. Assessments included collection of demographic information, substance use characteristics and history, and psychological functioning and diagnoses. Surveys used to characterize the sample included the Beck Depression Inventory to assess levels of depression symptomatology, the Snaith-Hamilton Pleasure Scale to measure anhedonia, the Alcohol Use Disorders Identification Test to capture alcohol problem severity, the Penn Alcohol Craving Scale to measure tonic craving levels, and the Reasons for Heavy Drinking Questionnaire to capture one’s motivations for heavy drinking. In addition, the RHDQ determined the presence of withdrawal-related dysphoria for randomization stratification as follows: raw scores ranging from 0–10 on RHDQ question #6, “I drink because when I stop, I feel bad”, were dichotomized into yes/no based on a cut-off of 6 or more points. Interviews used to determine eligibility criteria and baseline quantity and frequency of alcohol use were administered by clinical graduate students or trained research staff and included the Timeline Followback measuring alcohol, cigarette, and cannabis use over the previous 30 days, the Clinical Institute Withdrawal Assessment for Alcohol Scale – Revised assessing clinically significant alcohol withdrawal, and the Structured Clinical Interview for DSM-5 to determine current AUD diagnosis and severity and to screen for exclusionary psychiatric diagnoses.

Each morning throughout the two-week trial, participants were asked to retrospectively report on their previous-day experiences by completing an electronic DDA survey. Study staff provided instructions on DDA completion, and participants practiced filling out the survey at their randomization visit. Daily text messages or emails containing links to DDAs were sent to participants at 8 am each morning during their 14-day medication period. Additional telephone or text reminders were sent by study staff as needed. At the start of each daily survey, participants were asked, “Did you drink any alcohol yesterday?” If participants endorsed alcohol use the previous day, they reported on drink type and quantity, and then completed two sets of items: 1) ratings of mood, craving, and urge before drinking, and 2) ratings of mood, craving, urge, stimulation, and sedation while drinking. For example, participants were asked: “Before you drank, how strong was your urge to drink alcohol yesterday?” and “While drinking, how strong was your urge to drink alcohol yesterday?” The current analyses focus primarily on drinking days, given our interest in medication-related changes in subjective response to alcohol. Mood states were assessed via the short form of the Profile of Mood States survey. The POMS-SF is a standard, validated psychological rating scale that measures dimensions of transient mood states by asking subjects to indicate how well each item describes their mood on a 5-point Likert scale. To keep the survey brief and thus reduce the burden on participants, only select items from the POMS-SF were chosen for DDAs. Reports of stimulation and sedation were assessed via the validated Brief Biphasic Alcohol Effects Scale. The B-BAES is a six-item measure of the acute stimulant and sedative effects of alcohol on an 11-point scale. Urge to drink was captured via the item “How strong was your urge to drink alcohol yesterday?”, in line with previously published reports. Phasic craving was assessed using the first and last items from the validated Alcohol Urge Questionnaire on a 7-point Likert scale. Participants reported the quantity of standard alcoholic drinks consumed according to established guidelines and provided details about non-standard drinks. Drink entries were reviewed and verified by study staff.

DDA Item Scoring—All descriptive and statistical analyses were completed in SAS Version 9.4 on the sample of participants who completed at least one DDA. Select items from the POMS-SF tension and depression subscales , were summed to form a negative mood state score and select items from the vigor subscale were summed to form a positive mood state score for each timepoint, consistent with previous reports . The two AUQ items were summed to form a craving score . Stimulation and sedation subscales from B-BAES were calculated using standard methods . Multilevel Models—Models were fit in SAS using the MIXED procedure and a multilevel framework, unstructured covariance matrix, residual maximum likelihood estimation, and random intercepts with observations nested within subjects to account for clustering and to preserve suitable Type-1 error rates . Kenward-Rogers degrees of freedom were chosen to reduce bias and obtain more accurate p-value estimates. Main and simple effects were probed by recentering dichotomous variables and using the simple slopes approach. Daily alcohol use quantity, mood states, craving, and urge data from non-drinking days were treated as missing. Comparable three-level models were fit for variables having both before and during drinking observations , such that these observations were nested within day and days were nested within subjects. All models were tested with the following level-2 covariates: sex, AUD severity , and baseline drinks per drinking day . In addition, daily number of drinks consumed during the trial was included as a predictor with random effect in all subjective response models to account and control for potential day-level drink quantity effects on subjective response. To examine both between- and within-subject effects and interactions, covariates were centered at the grand mean and focal within-subject variables were centered within cluster . Specifically, to assess the effect of medication on the acute stimulant and sedative effects of alcohol, one model for each B-BAES subscale was estimated in which stimulation or sedation served as the outcome and medication condition served as the focal predictor.
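A rough Python analogue of these daily-diary multilevel models (a sketch with hypothetical variable names, not the study’s SAS MIXED code, and without SAS-specific options such as Kenward-Roger degrees of freedom or an unstructured covariance matrix) might look like:

```python
# Observations (drinking days) nested within participants; random intercept and a
# random slope for the within-person-centered day-level drink count.
import pandas as pd
import statsmodels.formula.api as smf

dda = pd.read_csv("daily_diary.csv")  # one row per participant drinking day (hypothetical file)

# Grand-mean-center level-2 covariates; center the focal day-level predictor within person.
dda["bl_dpdd_c"] = dda["baseline_dpdd"] - dda["baseline_dpdd"].mean()
dda["drinks_cwc"] = dda["drinks"] - dda.groupby("pid")["drinks"].transform("mean")

model = smf.mixedlm(
    "stimulation ~ ibudilast + drinks_cwc + C(sex) + aud_severity + bl_dpdd_c",
    data=dda, groups=dda["pid"], re_formula="~drinks_cwc")
print(model.fit(reml=True).summary())
```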

To assess the effect of medication on alcohol-induced changes in mood and craving, three-level models were run for each of the positive mood, negative mood, craving, and urge scores, as predicted by medication condition, time, and a medication × time interaction. Two sets of exploratory analyses were conducted. First, to explore how medication effects might impact drinking outcomes, we tested whether ibudilast moderated the effect of stimulation/sedation on same-day drinking during the trial, given support for these variables as strong predictors of alcohol use. As such, a within-subject cross-level interaction of medication × stimulation or sedation was added with random slopes, and same-day number of drinks served as the outcome. In a similar fashion, we also tested whether ibudilast moderated the effect of stimulation/sedation on next-day drinking using cross-lagged logistic models; this analysis served to test whether subjective response predicted future drinking behaviors. Second, given the trial’s a priori interest in a withdrawal-related dysphoria characteristic, we tested whether dysphoria would moderate ibudilast’s effects on alcohol-induced changes in mood and craving. A three-way interaction was added to models estimating the outcomes (positive mood, negative mood, urge, and craving). Stimulation and sedation variables were limited to a single time point and were thus excluded from analyses testing before-to-during-drinking changes. The final sample of randomized participants who completed at least one DDA consisted of 50 non-treatment-seeking individuals with current AUD. Overall, 66% of the sample reported their sex as male, 68% reported an annual household income < $60,000, and the average age was 32.7 years. Regarding race, participants most frequently identified as White, followed by 14% Black or African American, and 12% mixed race. In addition, 24% of the sample identified as Hispanic/Latinx. Participants had an average of 5.6 DPDD in the month prior to their baseline visit. Medication adherence was high, as both medication groups exceeded 97% adherence rates. In this secondary analysis, we tested bio-behavioral mechanisms of ibudilast, a neuroimmune modulator, through naturalistic daily reporting of subjective response to alcohol collected during a two-week RCT enrolling 50 non-treatment-seeking participants with AUD. Electronic DDAs were administered each morning to participants to capture their previous-day drinking behaviors and subjective alcohol response measures. First, we were interested in understanding whether ibudilast altered average levels of stimulation and sedation during drinking episodes. Results showed that ibudilast treatment did not significantly change average levels of stimulation or sedation during the trial compared with placebo.

These findings are consistent with an initial safety trial in which ibudilast did not significantly affect any subjective response variables during an experimentally controlled alcohol infusion in the laboratory. Relatedly, a trial combining laboratory and EMA methods showed that topiramate reduced drinking-related craving but not the stimulant or sedative effects of alcohol. However, the animal literature shows that apremilast, another PDE inhibitor, did alter a wide range of ethanol-induced effects in mice, such as reducing acute functional tolerance and increasing the sedative, intoxicating, and aversive properties of ethanol. Perhaps unlike certain pharmacotherapies for AUD such as naltrexone, neuroimmune modulators like ibudilast may not reduce drinking by robustly suppressing alcohol’s stimulant properties or amplifying its sedative effects. Rather, ibudilast may more directly alter other central mechanisms like alcohol craving, or may exert a wider range of effects on multiple mechanisms that cumulatively impact drinking outcomes. Second, we tested a related exploratory aim examining the moderating effect of ibudilast on the relationship between alcohol-related stimulation and sedation and the same-day number of drinks consumed. Participants on ibudilast reported a significant, positive relationship between their stimulation and sedation ratings and same-day drinking levels, neither of which was observed in the placebo condition. This suggests that participants randomized to ibudilast consumed more alcohol on days when they retrospectively reported feeling more stimulated during a drinking episode than on days when they felt less stimulated. Yet for those on placebo, we did not detect a significant relationship between one’s feelings of stimulation or sedation and alcohol use. These findings are consistent with EMA data showing that naltrexone potentiated participants’ subjective “high” across rising levels of estimated BrAC. Similarly, topiramate was shown to strengthen the association between subjective response and drinking. These results are also in line with a secondary analysis of our lab’s initial efficacy trial, whereby ibudilast potentiated the association between mood states and one’s craving for alcohol following a stress exposure paradigm compared with placebo. Mechanistically, PDE4 inhibitors attenuate alcohol-induced neuroimmune activation and dysregulation of GABAergic signaling. These important processes are connected to behavioral responses to ethanol. Thus, micro-longitudinal reports collected during the current trial helped to elucidate dynamic, day-to-day associations between within-person subjective effects and drinking, such that ibudilast seemed to moderate these relationships for a given individual, rather than altering average subjective response levels across participants. For our second primary aim, we assessed whether ibudilast, compared with placebo, attenuated daily alcohol-induced changes in positive mood, negative mood, urge, and craving. Among the full sample, we found that ibudilast significantly dampened the within-person alcohol-induced increases in craving seen under the placebo condition, but not other subjective response indicators. This suggests that one of the mechanisms by which ibudilast exerts its effects on drinking outcomes, such as reductions in heavy drinking, may be by diminishing one’s desire to continue drinking during an episode.
Considering its immunomodulatory actions, ibudilast may reduce the acute and chronic proinflammatory effects of alcohol, either indirectly through suppression of peripheral inflammation or directly by altering cAMP signaling pathways and suppressing cytokine expression in the brain. In turn, acute alcohol-induced increases in craving are blunted. Supporting these findings is research on methamphetamine use disorder. An RCT for inpatients with MUD showed that ibudilast significantly blunted the rewarding effects of methamphetamine during an infusion in the laboratory and similarly diminished drug-induced increases in proinflammatory marker levels during infusion. In addition, previous results from our group implicate ibudilast in the reduction of tonic craving and neural alcohol-cue reactivity, as evidenced by attenuation of cue-elicited activation in the ventral striatum compared with placebo. It is thus plausible that reductions in alcohol craving and reward, across these contexts, represent a primary mechanism of action of ibudilast for AUD.

Controls were studied on a nonresidential basis and completed the measures 22 days apart on average

The exclusion criteria were: central nervous system, cardiovascular, pulmonary, hepatic, or systemic disease; HIV seropositive status; pregnancy; lack of English fluency; MRI ineligibility; current use of psychotropic medications; and current Axis I disorder, including substance abuse or dependence for any substance other than nicotine. A diagnosis of MA dependence and a positive urine test for MA at intake were required for MA-group participants, who completed the study as inpatients at the UCLA General Clinical Research Center and were prohibited from using any drugs for 4–7 days before testing. MA users completed the behavioral and imaging measures 2 days apart on average. All participants were required to provide a urine sample on each test day that was negative for amphetamine, cocaine, MA, benzodiazepines, opioids, and cannabis. Compensation was provided in the form of cash, gift certificates, and vouchers. Delay discounting was assessed with the Monetary-Choice Questionnaire, which presents participants with 27 hypothetical choices between a smaller, immediate monetary amount and a larger, delayed alternative. Most of the participants completed the task using a paper-and-pencil format, but some completed the task on a computer; the questions were presented in the same sequence, regardless of task format. A logistic regression was performed on the data from each participant, separately, using his/her responses to all 27 choices as the dependent variable, and the natural log of the equivalence k value associated with each test question as the independent variable. This k-equivalence value was the number that would equalize the immediate option with the delayed alternative, assuming the hyperbolic discounting function V = A/(1 + kD), where V represents the perceived value of amount A made available D days in the future.

The parameter estimates from the logistic regression were used to calculate the k-equivalence value at which the function intersected 0.5. This derived k value characterized the individual’s discount rate. Because the MCQ only probes discounting between a minimum k-equivalence of 0.0002 and a maximum of 0.25, these values were designated as the minimum and maximum k values, respectively, that could be assigned. Dopamine D2/D3 receptor availability was assessed using a Siemens EXACT HR+ positron emission tomography scanner in 3D mode with [18F]fallypride as the radioligand. Following a 7-min transmission scan acquired using a rotating 68Ge/68Ga rod source to measure and correct for attenuation, PET dynamic data acquisition was initiated with a bolus injection of [18F]fallypride. Emission data were acquired in two 80-min blocks, separated by a 10–20-min break. Raw PET data were corrected for decay, attenuation, and scatter, and then reconstructed using ordered-subsets expectation-maximization, using ECAT v7.2 software. Reconstructed data were combined into 16 images, and the images were motion-corrected using FSL McFLIRT and co-registered to the individual’s structural MRI scan image using a six-parameter, rigid-body transformation computed with the ART software package. Structural images were magnetization-prepared, rapid-acquisition gradient-echo scans acquired during a separate session using a Siemens Sonata 1.5T MRI scanner. All images were registered to MNI152 space using FSL FLIRT. Volumes of interest were derived from the Harvard-Oxford atlases transformed into individual native space, or defined using FSL FIRST. VOIs of the functional striatal subdivisions were created as described previously. Time-activity data within VOIs were imported into the PMOD 3.2 kinetic modeling analysis program, and time-activity curves were fit using the simplified reference tissue model 2. The cerebellum was used as the reference region. The rate constant for transfer of the tracer from the reference region to plasma was computed as the volume-weighted average of estimates from receptor-rich regions, calculated using the simplified reference tissue model, as suggested by Ichise et al.
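Returning to the discount-rate scoring described at the start of the preceding paragraph, a worked sketch (not the authors’ script) of deriving k from the 27 MCQ responses could look like the following; the choice coding and argument names are assumptions.

```python
# choices: 1 = larger delayed option chosen, 0 = smaller immediate option chosen (27 items).
# k_equiv: the k value that equates the two options for each item under V = A / (1 + k*D).
import numpy as np
import statsmodels.api as sm

def mcq_discount_rate(choices, k_equiv):
    x = sm.add_constant(np.log(np.asarray(k_equiv, dtype=float)))
    fit = sm.Logit(np.asarray(choices, dtype=float), x).fit(disp=0)
    b0, b1 = fit.params
    k = np.exp(-b0 / b1)  # ln(k) at which the fitted choice probability crosses 0.5
    # Perfectly consistent responders produce separation in the logistic fit; such
    # cases are assigned the boundary values of the range probed by the MCQ.
    return float(np.clip(k, 0.0002, 0.25))
```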

Time-activity curves were re-fit using SRTM2, with the computed k2′ value applied to all brain regions. Regional binding potential referred to non-displaceable binding, calculated as BPND = (R1 · k2′/k2a) − 1, where R1 = K1/K1′ is the ratio of tracer-delivery parameters for the tissue of interest and the reference tissue, and k2a is the effective rate parameter for transfer of tracer from the tissue of interest to the plasma. Volume-weighted bilateral averages of all VOIs were used for analyses. Continuous variables were assessed for homogeneity of variance across groups using Levene's tests. Demographic variables were examined for group differences using two-tailed independent-samples t-tests, Mann-Whitney U-tests, or Fisher's exact tests, as appropriate. Group differences in discount rate and BPND were tested using separate independent-samples t-tests, and potential confounding variables were assessed as covariates. As expected, the distribution of discount rates was positively skewed. Because a natural log transform yielded a more normal distribution, ln(k) was used for analyses. The threshold for statistical significance was set at α = 0.05 for all analyses. One-tailed p-values are reported for analyses where a specific directional effect was predicted. Exploratory analyses were also carried out to investigate whether discount rate is negatively correlated with BPND in extrastriatal regions. These analyses were restricted to regions that exhibit appreciable [18F]fallypride BPND. In line with previous reports, MA users displayed lower striatal D2/D3 receptor availability and higher discount rates than controls, on average. As hypothesized, discount rate was significantly negatively correlated with striatal D2/D3 receptor availability in the combined sample and among MA users alone. Although the slopes of the striatal correlations were not significantly different between controls and MA users, the relationship did not reach statistical significance among controls alone. Exploratory analyses revealed negative relationships between discount rate and D2/D3 receptor availability in every extrastriatal region examined among MA users, but none retained significance following correction for multiple comparisons. While substantial evidence implicates dopamine as a key determinant of intertemporal choice, this study is the first to link temporal discounting directly with a measure of dopamine signaling capacity.
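To make the two computational steps above concrete, the sketch below evaluates the SRTM2 binding potential from its rate parameters and mirrors the Levene's-test-then-t-test sequence on log-transformed discount rates. The function names and the scipy-based workflow are assumptions for illustration, not the study's analysis code.

```python
import numpy as np
from scipy import stats

def binding_potential(r1, k2_prime, k2a):
    # SRTM2 non-displaceable binding potential: BPND = R1 * k2'/k2a - 1
    return r1 * k2_prime / k2a - 1.0

def compare_discount_rates(k_ma, k_control, alpha=0.05):
    """Compare ln(k) between groups, picking the t-test variant via Levene's test."""
    ln_ma, ln_ctrl = np.log(k_ma), np.log(k_control)
    equal_var = stats.levene(ln_ma, ln_ctrl).pvalue > alpha
    return stats.ttest_ind(ln_ma, ln_ctrl, equal_var=equal_var)
```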

The findings indicate that deficient D2/D3 receptor availability may contribute to steep temporal discounting among individuals with substance use disorders, attention-deficit hyperactivity disorder, or obesity, and carriers of the A1 allele of the ANKK1 Taq1A polymorphism. This reasoning is supported by reports that rats treated chronically with MA or cocaine display evidence of greater discounting of delayed rewards than saline-treated rats, as both of these stimulants induce persistent reductions in striatal D2/D3 receptor availability in rats and monkeys following chronic exposure. The results are also compatible with the literature concerning the neuroanatomical substrates of intertemporal choice. There was evidence of correlations involving several brain regions that have been implicated by functional neuroimaging and lesion studies as playing a role in selecting between immediate and delayed rewards: e.g., the midbrain, dorsal striatum, globus pallidus, thalamus, amygdala, hippocampus, ACC, and insula. The prefrontal cortex is critically important for the ability to resist temptation for instant gratification in order to achieve long-term goals, and striatal D2/D3 receptor availability modulates PFC activity when goal-directed choices are made. Moreover, D2/D3 receptor availability in the putamen has been shown to be negatively correlated with glucose metabolism in the orbitofrontal cortex, which is implicated in delaying gratification, especially among MA users. Choosing a smaller, more immediately available reward over a larger, more delayed alternative can be considered an impulsive choice. However, while striatal D2/D3 receptor availability has been shown to be negatively correlated with trait impulsivity among MA users, there was no evidence that BIS-11 total scores were correlated with discount rates in this sample of participants. Still, as expected, total BIS-11 scores were robustly higher among MA users than controls on average in this sample, and were negatively correlated with striatal D2/D3 receptor availability when controlling for age in the combined sample. This result implies that even though both trait impulsivity and temporal discounting are related to striatal D2/D3 receptor availability, they represent at least partially separable constructs. One limitation of this study is that [18F]fallypride has comparably high affinity for both D2 and D3 dopamine receptors, particularly as levels of D3 receptors may be higher than once estimated in multiple brain regions, including the striatum. Nevertheless, several lines of research suggest that individuals with substance use disorders, including MA users, have higher densities of D3 receptors in striatal and extrastriatal brain regions than those who do not frequently abuse drugs. Thus, it seems probable that the lower [18F]fallypride BPND among MA users primarily reflects lower D2 receptor availability in this group compared to controls. An additional limitation is the possibility of competition with endogenous dopamine influencing [18F]fallypride BPND. That IQ was not assessed is also a limitation, because IQ has been found to be significantly correlated with delay discounting, and a group difference in the former could therefore overshadow the true group difference in the latter.

There are also some caveats that should be considered when interpreting the results of this study. First, the MCQ has limited ability to provide precise estimates of discount rates for individuals who discount very steeply. That is, the choice items only probe preference up to a maximum k-equivalence value of 0.25, and this value was assigned as a conservative estimate of discount rate to individuals whose calculated k value was predicted to exceed it. Second, BPND values were highly correlated across all VOIs examined in both groups of participants, which limits the ability to draw conclusions regarding the relative importance of D2/D3 receptor availability in specific brain regions to discount rate. Finally, as there is some evidence that abstinence from drugs can increase temporal discounting among addicted individuals, it is possible that abstinence from MA may have amplified the difference in discount rate between MA users and controls. The results of this study may help to explain why low striatal D2/D3 receptor availability is associated with poor treatment response among individuals with MA dependence and cocaine dependence. This view seems reasonable given that steep temporal discounting has also been linked with poor treatment response among cocaine-dependent individuals and predicts relapse among smokers. The present results lend empirical support to a theoretical model in which Trifilieff and Martinez propose that “low D2 receptor levels and dopamine transmission in the ventral striatum lead to impulsive behavior, including the choice for smaller, immediate rewards over larger, but delayed or more effortful, rewards, which may represent an underlying behavioral pattern in addiction.” Consistent with this model, we found evidence of a negative correlation between discount rate and D2/D3 receptor availability in the limbic striatal subdivision. The correlation in the limbic striatum did not reach statistical significance, possibly due to the high D3/D2 receptor ratio in this region and partial volume effects. An important question for future research is whether interventions that increase D2/D3 receptor availability can reduce temporal discounting, at least among those with low D2/D3 receptor availability. Pharmacological interventions could prove useful to this end. For example, varenicline increases striatal D2/D3 receptor availability in rats, and in a study of human smokers, males treated with varenicline showed lower temporal discounting than placebo-treated controls. This finding is compelling considering that dorsal striatal D2/D3 receptor availability is lower in male smokers compared to nonsmoker controls. There is also evidence that rimonabant increases striatal D2/D3 receptor availability and can decrease discounting of delayed rewards in rats. Nonpharmacological approaches may be useful as well, as there is preliminary evidence that intensive exercise can increase striatal D2/D3 receptor availability in MA-dependent individuals and patients with early-stage Parkinson's disease. Similarly, in a mouse model of Parkinson's disease, higher striatal D2/D3 receptor availability and D2 receptor expression were noted among animals exposed to high-intensity exercise relative to non-exercising controls. Establishing a causal link between D2/D3 receptor availability and temporal discounting is likely to have significant clinical implications.
This is because there is evidence that interventions that reduce temporal discounting are useful for treating disorders that are associated with both steep discounting and low striatal D2/D3 receptor availability. For instance, contingency management decreases discounting among cocaine-dependent individuals and smokers, and methylphenidate decreases discounting in children with attention-deficit hyperactivity disorder.

Psychiatric treatment for patients with OUD can be combined with OUD pharmacotherapy and self-help groups

Conversely, other studies have found that individuals with comorbid opioid and psychiatric disorders have equivalent or better treatment outcomes, such as improved negative urine drug assays, longer treatment engagement, and better medication adherence. These conflicting findings indicate that treatment outcomes may differ by type of psychiatric condition and may be influenced by the duration of observation, which underscores the need for additional evidence on the impact of psychiatric comorbidities on treatment outcomes among patients with OUD. We aimed to address this gap in knowledge by examining a longitudinal cohort of patients with OUD to assess different types of psychiatric disorders in relation to treatment experiences. We conducted a secondary analysis of data provided by the Starting Treatment with Agonist Replacement Therapies (START) study, which was conducted at nine federally licensed opioid treatment program sites with 1269 participants randomized to buprenorphine or methadone from 2006 to 2009. All participants were tapered off their assigned study medications by 32 weeks post-randomization. Any OUD pharmacotherapy received during the follow-up interval was arranged by the participants themselves, independent of the study, and could change over time. Analyses also included data from a follow-up study of all randomized participants conducted from 2011 to 2016, approximately 2–8 years after randomization, with three assessments performed 1 year apart. After participants provided written informed consent, face-to-face interviews were conducted and urine samples were collected at the first follow-up at each site. The second and third follow-ups were conducted by research staff via phone interviews. Participants were compensated for each visit according to local site policies for study testing and assessments.

The parent study and the follow-up study were funded by the National Institute on Drug Abuse Clinical Trials Network. The studies were approved by the Institutional Review Boards at each site, the State of California, and UCLA. A federal Certificate of Confidentiality was also obtained to further protect participants' information. At the outset of the follow-up study, two sites, accounting for 189 participants, were dropped due to small sample sizes and difficulties with conducting follow-ups. Hence, 1080 study participants were ultimately targeted for the three follow-up visits. At the first follow-up interview (Visit 1), conducted August 2011–April 2014, 965 participants were located and 797 were interviewed. At the second follow-up interview (Visit 2), conducted August 2012–June 2016, 723 participants from the group who completed Visit 1 were administered the Mini-International Neuropsychiatric Interview (MINI); of these, 597 were interviewed again, from December 2013–June 2016, at the final follow-up interview (Visit 3). We omitted patients with eating disorders and psychotic disorders for the present paper, yielding a final analysis sample of 593 participants who completed all assessments. The mean length of the follow-up period among the 593 participants was 6.5 years. The study flowchart provides additional details. The MINI was used at Visit 2 to assess psychiatric disorders according to DSM-IV criteria. The MINI includes modules on current diagnoses of different types of psychiatric disorders. We used indicators of current diagnoses to construct four mutually exclusive groups: 1) bipolar disorder (BPD), 2) major depressive disorder (MDD), 3) anxiety disorders (AXD), and 4) no mental disorder (NMD). Some participants had several mental health conditions. Thus, drawing on prior research, we used the following hierarchy to categorize participants into one group based on diagnostic severity. First, those with any current BPD diagnosis were assigned to the BPD group, regardless of other non-SUD mental health diagnoses and the presence of psychotic features. The MDD and AXD groups were then similarly constructed. The remaining participants did not have any current non-SUD mental disorders and were therefore categorized into the NMD group. It is important to note that post-traumatic stress disorder was included as an anxiety disorder in this study, consistent with DSM-IV classification, given that data collection was initiated before the publication of the DSM-5, at which time PTSD was recategorized.
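As a minimal illustration of this hierarchical assignment, the function below maps current diagnosis indicators onto the four mutually exclusive groups. The function and variable names are hypothetical and are not drawn from the study's analysis code.

```python
def assign_diagnostic_group(has_bpd, has_mdd, has_axd):
    """Mutually exclusive grouping by diagnostic severity: BPD takes precedence
    over MDD, which takes precedence over AXD (PTSD counted as an anxiety
    disorder under DSM-IV); all remaining participants fall into NMD."""
    if has_bpd:
        return "BPD"
    if has_mdd:
        return "MDD"
    if has_axd:
        return "AXD"
    return "NMD"

# e.g., a participant with current MDD plus an anxiety disorder is assigned to MDD
assert assign_diagnostic_group(False, True, True) == "MDD"
```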

Chi-square tests for categorical variables and ANOVAs for continuous variables were used to compare group differences in baseline characteristics, treatment engagement measured from Visit 2 to Visit 3, substance use, ASI composite scores, BSI scale scores, and SF-36 physical and mental component summary scores at Visit 3. Pairwise comparisons were conducted using the Bonferroni correction for categorical variables and the Tukey-Kramer method for continuous variables. Except for pairwise comparisons, all two-tailed tests with a p-value less than 0.05 were considered statistically significant. All data analyses were performed in SAS version 9.4 (see the illustrative sketch below). Table 4 presents the group differences in addiction severity, physical and psychiatric symptoms, and quality of life at Visit 3. Compared to the NMD group, each of the three psychiatric disorder groups had greater problem severity in 6 of 7 domains, worse symptoms in all 10 measures of physical and psychiatric health, and poorer quality of life. Among the three groups with mental disorders, participants with BPD had the worst physical and psychiatric symptoms. In the sensitivity analysis, adding the 2 excluded participants with eating disorders and the 2 with psychotic disorders to the AXD group as a new group did not change the results. Attrition analysis revealed no statistically significant differences in demographics between those interviewed and those not interviewed, except for gender. This study aimed to characterize psychiatric disorders and their association with long-term treatment outcomes among individuals initially treated with methadone or buprenorphine for OUD in the START study. In our follow-up study, we found that participants without mental disorders had the lowest proportions of females, injection drug use, and history of psychiatric disorders at baseline. During follow-up visits, those with MDD had a higher proportion of follow-up months with OUD pharmacotherapy than those without mental disorders. At the end of the follow-up, participants with BPD had significantly more days of heroin and all-opioid use in the past 30 days. Furthermore, those with comorbid psychiatric disorders showed more severe substance-related problems, poorer psychosocial functioning, and more severe psychiatric symptoms at the end of follow-up. It has been well established by previous studies that women are more likely than men to be diagnosed with a mental health condition. We also found that the prevalence of injection drug use at baseline was higher among patients with OUD and comorbid psychiatric disorders. Other studies have reported that psychiatric and substance abuse comorbidity is highly prevalent among people who inject drugs.
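The comparison workflow described above can be sketched roughly as follows. This Python version using scipy and statsmodels is purely illustrative (the study's analyses were run in SAS 9.4), and the data-frame and column names are hypothetical.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_groups(df, group_col, categorical_vars, continuous_vars, alpha=0.05):
    """Chi-square tests for categorical variables, one-way ANOVA for continuous
    variables, and Tukey-Kramer pairwise comparisons when an ANOVA is significant."""
    results = {}
    for var in categorical_vars:
        chi2, p, _, _ = stats.chi2_contingency(pd.crosstab(df[group_col], df[var]))
        results[var] = ("chi-square", chi2, p)
    for var in continuous_vars:
        groups = [g[var].dropna() for _, g in df.groupby(group_col)]
        f, p = stats.f_oneway(*groups)
        results[var] = ("ANOVA", f, p)
        if p < alpha:
            mask = df[var].notna()
            print(pairwise_tukeyhsd(df.loc[mask, var], df.loc[mask, group_col]))
    return results

# e.g., compare_groups(visits, "dx_group", ["gender"], ["asi_drug", "sf36_mcs"])
```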

Taken together, these findings replicate prior evidence and highlight the need to design treatments and other interventions that are sensitive to gender and infectious disease risk behaviors. We also found that, over 5 or more years of observation, patients with co-occurring opioid and major depressive disorders engaged with OUD pharmacotherapy for more months during follow-up than those without mental disorders. The continued high utilization of pharmacotherapy among patients with OUD and comorbid psychiatric disorders compared to those without mental disorders is notable and may have several explanations. Findings from the literature on the association between psychiatric comorbidity and treatment engagement have been inconsistent. Possible reasons for the inconsistent results include different outcome variables, multiple types of medication used, and different diagnostic criteria for psychiatric disorders. However, MDD diagnosis has been associated with improved opioid treatment outcomes in prior research, possibly related to greater engagement in treatment, and depression symptoms are associated with higher motivation to change opioid use. In the current study, we found higher utilization of methadone than buprenorphine by participants, which may be explained by methadone clinic procedures. Patients receiving methadone were required to attend a clinic daily to obtain medication, following regulations regarding methadone dispensing, and thus were more regularly in contact with clinic personnel, which likely enhanced treatment engagement. Conversely, buprenorphine patients were not required to attend the clinic daily, given the nature of buprenorphine self-administration without supervision. Another explanation is that methadone treatment was more accessible to this group of individuals, who were largely impoverished. At the end of the follow-up, more than 5 years after baseline, participants with BPD had significantly more heroin and other opioid use in the past 30 days. This finding further supports the claim that some patients with OUD and comorbid psychiatric disorders may have higher rates of opioid use due to their greater psychiatric symptom severity. Consistent with previous studies, patients with OUD and comorbid psychiatric disorders reported poor functioning across multiple domains. Numerous significant group differences in ASI composite scores, BSI scale scores, and SF-36 physical and mental component summary scores indicated higher problem severity across multiple problem areas in patients with OUD and different comorbid psychiatric disorders. Based on severity, participants with BPD had the poorest functional outcomes.

Since the 1970s and 1980s, a number of studies have demonstrated that psychotherapy can be used effectively with individuals with SUDs. To reduce healthcare costs, however, support was reduced for these psychiatrically focused treatments. These findings point to an unmet need for medication and psychosocial therapies for patients with OUD and psychiatric comorbidity. This study has several limitations. First, we assessed the type of psychiatric disorders at follow-up Visit 2. Although a question about the history of psychiatric disorders was included at treatment entry, the pre-existing diagnosis patterns according to objective measures and the temporal relationship between OUD and psychiatric disorders are unknown. Second, attrition analysis showed that female participants had a higher follow-up rate and might therefore be overrepresented in this study, but the rates of treatment engagement in the present study were similar to those in an 11-year follow-up of the Australian Treatment Outcome Study. Third, results are based on a sample of individuals treated for OUD in community-based, federally regulated OTP clinics, and thus findings may have limited applicability to patients treated in primary care clinics or other settings. Fourth, we did not include sedative use, which is common in individuals with OUD, and did not collect information about participants' treatment for mental health disorders, both of which could have impacted treatment outcomes. Finally, substance use and treatment participation were self-reported and may be subject to recall bias. As for study strengths, this secondary analysis was conducted with a relatively large sample derived from a multi-site clinical trial and a prospective longitudinal follow-up study of long duration, allowing assessment of associations between OUD pharmacotherapy treatment outcomes and co-occurring psychiatric conditions. Our study sample has a rate of psychiatric disorders similar to that reported in nationally representative data. Electronic cigarettes (e-cigarettes) are drug delivery devices primarily used for the inhalation of nicotine and marijuana, in the form of tetrahydrocannabinol (THC). The modern e-cigarette was invented in 2003, entered the global market in 2007, and has rapidly become popular across the world. There are many types of e-cigarettes, from cig-a-likes to vape pens and box mods to pod devices, but they all involve heating and aerosolization of e-liquids. The base ingredients of e-liquids, nicotine, propylene glycol, and glycerin, have an unappealing flavor on their own, such that chemical flavorants are added to >99% of e-liquids to increase their appeal to users. Use patterns of e-cigarettes and vaping devices differ greatly across age groups. Adults most commonly pick up vaping in the setting of conventional cigarette smoking, either adding it to their smoking practice or switching to e-cigarettes as a means to stop smoking. While 3.2% of all adults use e-cigarettes, the rates are much higher among young adults 18–24 years old, of whom 7.6% vape, and higher still among high school students, of whom 27.5% have used a vaping device within the past month. Sadly, middle school students as young as age 11 also have high rates of e-cigarette use. While adult e-cigarette users are most often active smokers or ex-smokers, 44.3% of adolescents and young adults were never smokers prior to e-cigarette use. Of concern, e-cigarette use among never smokers has been shown to increase subsequent initiation of cigarette smoking by up to four-fold.
A great deal of research to date has focused on comparing e-cigarette use with cigarette smoking to assess the potential benefit of switching from smoking to vaping as a form of harm reduction, while less attention has been paid to the health effects of vaping in non-smokers, among whom rates of vaping continue to rise, particularly among youth.