
Several comorbid medical conditions are known to increase risk for ischemic stroke

Less progress has been made in the area of neuroprotection. While many factors may explain the gap between observations in preclinical studies and clinical trials, one important feature is that laboratory models have not historically taken into account the many comorbidities suffered by patients who are at risk for or suffer stroke. Laboratory studies often use healthy young male animals, but stroke typically occurs in patients of both genders with risk factors such as advanced age, atherosclerosis, diabetes, hypertension, and other conditions. Thus, a therapy that appears promising in the laboratory may fail when moved to clinical trials. With this in mind, investigations have now turned to models of disease comorbidities in order to simulate a more realistic clinical scenario. By studying models of diabetes, hypertension, and other conditions, it is now possible to extend testing of certain therapeutics that appear promising in healthy animals to those with relevant disease. Further, it is well known that gender can influence both stroke outcome and response to treatment, and these disparities warrant systematic study. Here, we review the clinical perspective of stroke and how it might drive experimental studies.

At the clinical level, the most effective treatment for stroke is prevention, which entails both reduction of stroke risk factors and prevention of future strokes once an initial stroke occurs. After an acute stroke happens, there are few clinically proven acute treatments: pharmacological thrombolysis and mechanical thrombectomy are currently the only widely accepted interventions. Pharmacological thrombolysis, typically with recombinant tissue plasminogen activator (t-PA), is restricted to patients who can be treated within 4.5 hours of symptom onset, while those who meet criteria for mechanical thrombectomy may receive this intervention out to 24 hours. Unfortunately, these treatments are still limited to a relatively short time frame, with only about 6% of acute stroke patients eligible for intravenous t-PA and about 10% eligible for mechanical thrombectomy. Aside from offering acute intervention, it is important to identify the underlying cause of the stroke, also referred to as the stroke mechanism. Typical causes include atherosclerotic disease of the cerebral vessels or underlying cardiac disease, which can predispose to thrombi that can then embolize.

A third type of cerebrovascular disease, involving the small end vessels of the brain, is another common cause of ischemic stroke and is often observed in the setting of poorly controlled hypertension and diabetes. Finally, cryptogenic strokes, which include embolic strokes of undetermined source, constitute a substantial proportion of all strokes. Thus, an important part of the clinical management of stroke patients revolves around determining the cause of the stroke and modifying risk factors to prevent future occurrences.

Age is a major risk factor for ischemic stroke (IS). IS risk increases after the age of 45 years, and over 70% of strokes occur after the age of 65. Elderly patients also have several comorbid risk factors for IS which not only increase stroke risk but also increase the risk of poor outcome. These include hypertension, diabetes mellitus, dyslipidemia, tobacco use, obesity/metabolic syndrome, and recreational drug use. There are also numerous genetic factors that increase stroke risk, some of which can be identified and many others that remain unknown. This review will focus on risk factors generally thought to be modifiable through lifestyle changes and/or pharmacological treatment.

While hypertension is a major stroke risk factor at the clinical level, few preclinical studies of acute neuroprotection have tested potential stroke treatments in hypertensive animals, although some such studies exist. A meta-analysis by O'Collins et al. exhaustively reviewed over 3,000 animal studies covering more than 500 potential treatments and found that only 10% of studies addressed the role of hypertension. Overall, the authors failed to find any direct neuroprotective effect of the antihypertensive agents studied, but they did find that hypertensive animals sometimes responded differently to potential therapies compared to normotensive animals. While many potential neuroprotective treatments were ineffective in hypertensive animals, the authors also found divergent responses: certain potential therapies were only effective in normotensive models, while other strategies were actually more effective in hypertensive models. Thus, hypertension should be considered in preclinical translational studies of stroke. A few clinical trials have also attempted to clarify a neuroprotective effect of antihypertensive agents in acute stroke, but have yet to demonstrate any efficacy. Early trials of calcium channel blockers such as nimodipine failed to show any neurological improvement, although these studies were criticized for initiating treatment rather late.

More recently, the Superselective Administration of VErapamil During Recanalization in Acute Ischemic Stroke study assessed the therapeutic potential of verapamil. In this early phase I clinical trial, verapamil given immediately following thrombectomy in acute ischemic stroke patients was shown to be both safe and feasible, without significantly increasing intracranial hemorrhage or other adverse events, but failed to show any benefit with respect to improved neurological outcome.

Hyperglycemia is well known to exacerbate stroke outcome, and diabetes is a common comorbidity in stroke patients. Hyperglycemia has been shown to increase inflammatory responses by fueling the NADPH oxidase (NOX) proinflammatory pathway and to exacerbate oxidative stress through generation of superoxide. It has also been shown to exacerbate inflammatory responses in experimental diabetic stroke models and in stroke among patients with diabetes. Compared to the normoglycemic state, the expression of several inflammatory molecules, including proinflammatory cytokines, cell adhesion molecules, chemokines, inducible nitric oxide synthase, cyclooxygenase-2, NOX, and NF-κB, was increased by hyperglycemia. This also led to increased leukocyte infiltration of the ischemic brain and activation of microglia. High serum levels of high mobility group box-1 (HMGB1) were also observed in diabetic rats and correlated with worse stroke outcomes. In a clinical study, elevated serum HMGB1 levels were associated with poor outcome in IS patients with diabetes mellitus. Since glucose is required for NOX to generate superoxide, hyperglycemia itself can lead to increased oxidative stress. Hyperglycemia is also known to increase brain hemorrhage in IS following treatment with tissue plasminogen activator. However, at the clinical level, intensive glucose control in the acute setting has not been shown to improve outcome from IS.

Yet treatments for long-term glucose control in diabetes have shown some benefit in lowering stroke risk and improving stroke outcome. The UK Prospective Diabetes Study showed that metformin reduced the incidence of large vessel events compared to other treatments. The PROactive study showed that pioglitazone, a peroxisome proliferator-activated receptor γ (PPARγ) agonist, significantly reduced recurrent stroke risk in patients with type 2 diabetes. PPARγ agonists not only decrease serum glucose levels but also inhibit inflammation. PPARγ activation in monocyte-derived macrophages is thought to influence macrophage polarization through an alternative, anti-inflammatory mechanism. PPARγ agonists have also been shown to reduce ischemia-induced inflammation and hemorrhagic transformation in experimental models. Further, pioglitazone has been shown to possess antioxidant and anti-apoptotic effects in experimental stroke models of diabetes. Newer agents to treat diabetes, such as dipeptidyl peptidase-4 (DPP-4) inhibitors, glucagon-like peptide-1 (GLP-1) receptor agonists, and sodium glucose cotransporter 2 (SGLT2) inhibitors, may also have a role in secondary prevention of large vessel disease. In particular, DPP-4 inhibitors are thought to have pleiotropic effects against ischemic injury.

In a recent study, sitagliptin administration was shown to suppress the pro-inflammatory NF-κB signaling pathway, as well as to exert anti-inflammatory, antioxidant, and anti-apoptotic effects in diabetic stroke models. To date, the neuroprotective effects of these diabetes treatments have not been assessed in clinical trials, although GLP-1 receptor analogues are thought to reduce IS risk. SGLT2 and DPP-4 inhibitors have not been shown to be efficacious in recent clinical studies.

Hyperlipidemia is another major comorbidity contributing to stroke risk. Elevated serum lipid levels can lead to atherosclerosis and narrowing or occlusion of the cerebral arteries, and lipid-lowering therapy (LLT) can significantly lower IS risk. 3-Hydroxy-3-methylglutaryl-coenzyme A reductase inhibitors (statins) are frequently used to treat dyslipidemia and are nearly routinely prescribed for secondary stroke prevention. The Stroke Prevention by Aggressive Reduction in Cholesterol Levels study showed that atorvastatin treatment led to a 16% relative risk reduction in IS. Although the benefit of IS risk reduction with statins was not achieved at LDL-C levels < 120 mg/dl in a previous study, aggressive targeted LLT to reduce LDL-C levels to less than 70 mg/dl led to a 22% relative reduction in cardiovascular disease risk. Thus, statins have been the preferred lipid-lowering agents for primary and secondary IS prevention. Statins are also thought to have pleiotropic effects in addition to their lipid-lowering effects, including anti-thrombotic, anti-inflammatory, anti-oxidant, and neuroprotective actions. Specifically, statins have been shown to up-regulate endothelial nitric oxide synthase through the inhibition of isoprenoids. A variety of statins have been shown to reduce infarct volume and improve neurological deficit in experimental stroke. Rosuvastatin was also shown to reduce ischemic injury by inhibiting oxidative stress and inflammatory responses: reducing superoxide and NOX, inhibiting microglial activation, and downregulating inflammatory molecules. Recently, ezetimibe and a PCSK9 inhibitor have also been shown to be efficacious in combination with statins in cardiovascular disease; however, their benefit in IS prevention and neuroprotection is still unclear. In addition to LLT, ω3-polyunsaturated fatty acid (ω3-PUFA) supplementation alongside statin therapy may also prove beneficial in stroke prevention. In particular, eicosapentaenoic acid (EPA) has shown some benefit at the clinical level. The Japan EPA Lipid Intervention Study demonstrated the efficacy of EPA when added to the statins pravastatin or simvastatin: investigators reported a 20% relative reduction in recurrent IS with EPA treatment among patients with a prior history of IS. Its beneficial effect is thought to be mediated by resolvins and protectins, both ω3-PUFA metabolites with anti-inflammatory and anti-oxidant properties.

Tobacco use is another significant risk factor contributing to vascular disease, including stroke. A dose-response relationship between tobacco use and IS has been described in young patients, although, interestingly, the relationship is less strong in older adults. Tobacco use leads to vascular endothelial damage and dysfunction through generation of reactive oxygen species (ROS) and activation of inflammation, both of which can increase atherosclerotic risk. Not only does tobacco use increase ROS generation, but it can also weaken antioxidant defense systems.
Tobacco use promotes pro-inflammatory responses, including leukocyte infiltration and activation of matrix metalloproteinases via cytokine signaling. Further, tobacco use can induce blood-brain barrier dysfunction and worsen loss of cerebral blood flow during IS. In clinical studies, tobacco use was found to increase not only the incidence of IS but also hemorrhagic transformation in the setting of anticoagulant use. Recently, current tobacco use has also been reported to increase the incidence of hemorrhagic transformation in young IS patients with non-valvular atrial fibrillation.

Recreational drug use, while a significant public health problem for many reasons, is also associated with IS, particularly the use of cocaine, methamphetamine, and cannabis. The toxicity of these drugs can induce not only IS but also hemorrhagic stroke: cocaine and/or methamphetamine use may lead to hemorrhagic stroke through hypertension, vascular fatigue as a consequence of hypertension and tachycardia, and necrotising angiitis. In addition to chronic vascular damage and acceleration of atherosclerosis, use of these substances can lead to IS from acute cerebral vasospasm and vasculitis. Cannabis-associated stroke has also been reported, with most cases occurring after cannabis exposure or subsequent re-exposure. While half of these cases had concomitant risk factors such as tobacco and alcohol use, the major causes of cannabis-related IS were thought to be acute cerebral angiopathy and vasospasm.

Stroke prevention can be broadly categorized into primary and secondary prevention. Primary prevention refers to a series of lifestyle modifications and treatments in patients who have stroke risk factors but have not suffered stroke. Secondary prevention refers to similar changes and treatments applied to patients who have already suffered stroke or transient ischemic attack. Many of these interventions overlap and will be discussed together. Lifestyle changes that can reduce stroke risk include modifications in diet and exercise and cessation of tobacco and illicit drug use. For primary stroke prevention, smoking cessation, regular physical activity, a healthy diet, moderate alcohol consumption, and maintaining a body mass index < 25 kg/m² have been shown to reduce stroke risk by as much as 80% compared to no lifestyle modifications. In contrast, the effectiveness of these modifications for secondary stroke prevention appears less robust.


We also predicted that recent cocaine use would amplify the effects of historical use in HIV as assessed by traditional and ISDA memory metrics. Relative to other memory metrics, we posited that encoding would be adversely impacted, given that both cocaine and HIV have been documented to affect frontostriatal networks. Data from 113 community-dwelling HIV-infected participants were used for the current analysis. This sample was extracted from a prior study funded by the National Institute on Drug Abuse. The initial study was adequately powered to detect the effects under study. No control group was available in the sample. Participants were recruited from community health agencies in the Los Angeles area through fliers posted in infectious disease clinics at two university-affiliated medical centers. Exclusion criteria included meeting diagnostic criteria for a lifetime or current history of psychotic spectrum or manic disorders and any neurological disorder other than HIV infection. For our analyses, participants were stratified by 1) recent cocaine use and 2) whether they ever met diagnostic criteria for cocaine dependence or abuse in their lifetime. For the recent cocaine use stratification, participants either 1) reported using cocaine within 4 weeks of testing and/or had positive urinalysis results for cocaine use on the day of testing, or 2) denied cocaine use in the 4 weeks prior to testing and had negative cocaine urinalysis results. A period of four weeks was selected for self-report to ensure sufficient time for such substances to clear the system. Urine toxicology was conducted on the day of testing and screened for cocaine, amphetamine, cannabis, and opiate metabolites. Stratification by presence or absence of lifetime cocaine dependence or abuse was determined by the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition, Clinical Version. HIV status was confirmed with ELISA and Western blot. Seventy percent of the HIV/Coc+, 63 percent of the HIV/Coc−, 65 percent of the HIV/CocDx+, and 72 percent of the HIV/CocDx− participants met the Centers for Disease Control and Prevention diagnostic criteria for Acquired Immunodeficiency Syndrome (AIDS).

There were no group differences in the proportion of participants meeting criteria for AIDS. All of the participants were on self-administered HAART at the time of testing. The CVLT-II is a verbal list-learning test composed of 16 items that can be grouped into four semantic categories. The list is presented orally to participants over five learning trials, followed by short-delay free and cued recall, long-delay free and cued recall, and recognition trials. The CVLT-II yields numerous indices; for the current study, we were interested primarily in the sum of items recalled across all five learning trials, short-delay free recall, and long-delay free recall. Item-level data from the CVLT-II were evaluated via the Item Specific Deficit Approach (ISDA), a quantitative process method for deriving indices of encoding, consolidation, and retrieval deficits. These indices have demonstrated increased sensitivity to cognitive impairment compared to traditional indices calculated from list-learning data, and they appear less contaminated by other cognitive factors. The ISDA encoding deficit index is derived by summing the items that were not recalled at least three times during the five initial CVLT-II learning trials. The consolidation deficit index is calculated by summing the items that were recalled at least once during list learning but not recalled during either the short- or long-delay free or cued recall trials. The retrieval deficit index is the sum of items that were recalled during list learning but recalled inconsistently across the short- and long-delay free and cued recall trials. The consolidation and retrieval index totals are divided by the number of items recalled at least once during the list-learning trials to control for learning differences between groups.

While the HIV/Coc+ and HIV/Coc− groups were well matched in terms of premorbid intelligence, age, education, sex, and history of depression, the HIV/Coc− group comprised a larger proportion of Caucasian participants than the HIV/Coc+ group. That said, ethnic/racial membership was not significantly associated with any of the dependent variables in the current study. Additionally, the HIV/Coc− group had a greater duration of lifetime cocaine and stimulant use than the HIV/Coc+ group, while the HIV/Coc+ group had a greater number of participants with positive cannabis toxicology. Total duration of cocaine and stimulant use was not associated with any of the dependent variables in the study. Similarly, positive cannabis toxicology was exclusive to the HIV/Coc+ group in our sample and was not associated with any of the dependent variables.
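To make the ISDA index definitions above concrete, here is a minimal Python sketch. The data layout is an assumption for illustration: item-level recall coded as booleans over the five learning trials and the four delayed trials (short-delay free, short-delay cued, long-delay free, long-delay cued); the function name is hypothetical, not from the ISDA authors.

```python
# Minimal sketch of the ISDA indices described above (assumed data layout):
# learning[i][t] is True if item i (of 16) was recalled on learning trial t
# (of 5); delayed[i][d] is True if item i was recalled on delayed trial d.

def isda_indices(learning, delayed):
    # Encoding deficit: items not recalled at least 3 times across the
    # five learning trials.
    encoding = sum(1 for item in learning if sum(item) < 3)

    # Items recalled at least once during list learning; this is the
    # denominator used to control for learning differences between groups.
    learned = [i for i, item in enumerate(learning) if any(item)]

    # Consolidation deficit: learned items never recalled on any delayed
    # free or cued recall trial.
    consolidation = sum(1 for i in learned if not any(delayed[i]))

    # Retrieval deficit: learned items recalled inconsistently (on some,
    # but not all, of the delayed trials).
    retrieval = sum(
        1 for i in learned if any(delayed[i]) and not all(delayed[i])
    )

    denom = max(len(learned), 1)  # guard against an empty learning record
    return {
        "encoding": encoding,
        "consolidation": consolidation / denom,
        "retrieval": retrieval / denom,
    }
```

As the comments note, only the encoding index is a raw count; the consolidation and retrieval indices are proportions of items learned, matching the normalization described in the text.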

While the HIV/CocDx+ and HIV/CocDx− groups were well matched in terms of premorbid intelligence, age, education, sex, ethnicity, and history of depression, the HIV/CocDx− group had a greater incidence of lifetime alcohol abuse and/or dependence and cannabis abuse and/or dependence than the HIV/CocDx+ group. That said, lifetime alcohol and cannabis abuse and/or dependence were not significantly associated with any of the dependent variables in the current study. To determine the impact of cocaine use on CVLT-II performance, we performed 2 (group) × 2 (cocaine use history) ANOVAs for the CVLT-II total learning sum, CVLT-II short-delay free recall, CVLT-II long-delay free recall, and the ISDA indices. A significance level of α < 0.05 was used as the threshold for significance. To correct for family-wise error, we calculated q-values with a predetermined false discovery rate cut-off of .05 for each p value across all memory metrics.

We hypothesized that 1) recent cocaine use would exacerbate encoding deficits and verbal memory difficulties in HIV-infected individuals, 2) meeting lifetime diagnostic criteria for cocaine dependence and/or abuse would adversely impact memory performance, 3) recent cocaine use would amplify the effects of historical use in HIV, and 4) encoding would be adversely impacted over consolidation and retrieval. Our data partially supported these hypotheses. Specifically, using traditional verbal memory metrics that do not control for acquisition, recent cocaine use impacted total learning, short and long delay recall, and recognition discriminability, and interactive effects between recent use and historical use emerged only for the recognition discriminability metric. Specifically, the interactive effects revealed recognition discriminability performance differences between recent cocaine users with and without a lifetime history of cocaine abuse or dependence; the interaction further suggested that with abstinence, recognition discriminability performances of those with and without cocaine abuse and dependence are similar. Overall, the traditional memory metrics revealed general and non-specific findings that recent cocaine use, and not historical use, impacts all memory domains. However, given that traditional metrics do not account for inattention and dysexecutive symptoms, an alternative approach was employed, as both cocaine and HIV influence the frontostriatal networks subserving these functions. Indeed, using the ISDA, our findings revealed that only encoding was compromised in the HIV/Coc+ group when compared to the HIV/Coc− group.
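As a hedged illustration of the q-value correction described above, the Benjamini-Hochberg false discovery rate procedure can be applied to the vector of p values from all memory metrics. The sketch below uses statsmodels; the p values are placeholders, not the study's results.

```python
# Placeholder p values; the study's actual p values are not reproduced here.
from statsmodels.stats.multitest import multipletests

p_values = [0.004, 0.012, 0.030, 0.047, 0.210, 0.440]
reject, q_values, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for p, q, sig in zip(p_values, q_values, reject):
    flag = "significant" if sig else "not significant"
    print(f"p = {p:.3f} -> q = {q:.3f} ({flag})")
```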

Additionally, using the ISDA, we found that historical cocaine use and/or dependence did not have an impact on encoding, consolidation, or retrieval. In contrasting traditional memory metrics with the ISDA, the ISDA metrics isolated statistically inferior performance to encoding, while the traditional metrics provided non-specific findings, despite the two sets of metrics being statistically collinear. Similar collinearity among memory metrics has been reported by other researchers. These findings speak to the incremental utility of the ISDA in an HIV setting and suggest that reduced memory performance in the context of HIV and cocaine use is likely secondary to disrupted learning, which is concordant with prior research suggesting frontostriatal involvement and reduced executive ability that likely impacts strategic encoding. Of note, our findings are also in line with prior research supporting the notion that the deleterious effects of cocaine use on cognition in HIV are, at least to some extent, state-like in nature and transitory. Moreover, a recent longitudinal study found that in some cases, 1-year abstinence led to cognitive performance similar to that of healthy control participants. Our findings further add to the literature against a lasting impact of cocaine use; however, given the cross-sectional nature of the study and the lack of healthy comparison subjects, we cannot affirm that reduced encoding will be fully ameliorated, although this study converges with other research suggesting performance comparable to HIV/Coc− groups. At first blush, our findings appear to differ from another study reporting no differences in verbal memory between HIV participants with and without cocaine use. However, in the Durvasula et al. study, HIV/Coc+ participants were younger, and there is evidence to suggest that in older individuals with HIV, even a remote history of stimulant use seems to have an effect as persons age. Additionally, the authors note that their findings may have been confounded by alcohol use and a sample of largely recreational drug users.

The results of the current study underscore the importance of interventions that aid in the cessation of cocaine use among individuals with HIV. The immediate concern is that cocaine use could lead to worse cognitive status, which could in turn reduce HAART adherence. Indeed, declines in cognitive functioning, including memory, are associated with decreased medical compliance; this also appears to be the case for cocaine use. This is a particularly vexing problem since cocaine-related verbal memory deficits may not only lead to worse functional outcomes, but active cocaine use may also accelerate HIV replication, further highlighting the importance of targeted substance use/abuse interventions for persons with HIV. Our results have implications for clinical practice. Of notable importance is the HIV/Coc+ memory profile under acquisition-adjusted versus non-adjusted measurement of memory. Our findings highlight the importance of clinically correcting for inattention and declines in executive ability, especially in the context of HIV and cocaine, given the neurocircuitry involved. Relying solely on traditional metrics can be misleading, as seen in this study: memory performance should be adjusted for acquisition if memory is being evaluated.
Additionally, HIV-related memory deficits are associated with decreases in medical compliance and daily functioning, so additional memory deficits caused by cocaine use could potentially exacerbate functional impairment in areas such as medication management, driving, employment, money management, cooking, and shopping. Our data also suggest recommendations that specifically target HIV-associated memory deficits, such as methods that improve the acquisition of new information rather than retrieval practice. Finally, our findings offer hope and incentive to patients, suggesting that abstinence from cocaine may confer cognitive benefit. While our study suggests that recent, not past, cocaine use exacerbates verbal memory difficulties in HIV/AIDS via greater encoding deficits, it also has some limitations. The study did not include a healthy control group or an HIV-negative group with a history of cocaine use. Such comparisons would have allowed for a more detailed analysis and stronger conclusions about the independent, additive, and interactive effects of HIV serostatus and cocaine use. As such, this study can only speak to the effects of recent cocaine use in the setting of HIV, and without a normative sample, differences observed between groups may not represent clinically relevant discrepancies. Nevertheless, discrepancies between healthy comparisons and participants with HIV are well documented. Our study evaluated the impact of cocaine use in HIV only on verbal memory, a notable limitation, as other research has demonstrated discrepancies in attention, working memory, psychomotor speed, and executive functions. The present study utilized a cross-sectional design; a longitudinal, within-subjects design, in which participants' performance is assessed while accounting for natural variation in the level of cocaine use over time, would have allowed us to better ascertain the impact of cocaine use on verbal memory in persons with HIV. Finally, our methodology for determining history of cocaine use entailed self-report and urinalysis screening; while urinalysis is very accurate in this regard, it is only effective for a limited temporal window, and thus we had to rely on self-reports regarding cocaine use that occurred several weeks prior to urine collection. Future research should also consider whether there are thresholds in terms of duration and severity of cocaine use that are related to long-term declines in verbal memory in individuals with HIV. The incidence of HIV infection remains high among minority men who have sex with men.

What is the ideal drying and curing process for preserving the quality of cannabis buds?

The drying and curing process is critical for preserving the quality of cannabis buds and enhancing their flavor, potency, and overall experience. Here’s a step-by-step guide to the ideal drying and curing process:

Drying Process:

  1. Harvest at the Right Time:
    • Ensure that the cannabis plants are harvested at the optimal time, as discussed earlier, based on trichome development and pistil color.
  2. Trimming:
    • Trim excess leaves from the buds, but leave enough material to protect the trichomes. Some growers prefer to dry and trim later to preserve terpenes.
  3. Hang Drying:
    • Hang the trimmed buds upside down in a cool, dark, and well-ventilated space. Maintain a temperature between 60-70°F (15-21°C) with humidity around 45-55%.
  4. Avoid Direct Light:
    • Protect the drying buds from direct light, as this can degrade cannabinoids and terpenes.
  5. Proper Ventilation:
    • Ensure good air circulation to prevent mold. Use fans if needed, but avoid direct airflow on the buds.
  6. Drying Time:
    • The drying process typically takes 7-14 days. Buds are ready when the smaller stems snap instead of bend, and the exterior feels dry while maintaining some moisture inside.

Curing Process:

  1. Jar Curing:
    • After drying, transfer the buds to glass jars (mason jars work well). Fill the jars about 2/3 full to allow for air circulation.
  2. Burping Jars:
    • For the first week, “burp” the jars by opening them for a few minutes each day to release excess moisture. This helps prevent mold and promotes a more controlled curing process.
  3. Monitor Humidity:
    • Aim for a humidity level of 55-65% inside the jars. Use hygrometers to monitor humidity and adjust as needed. If humidity is too high, remove the buds for a short time to let them air out.
  4. Temperature Control:
    • Store jars in a cool, dark place with a consistent temperature between 60-70°F (15-21°C). Fluctuations in temperature and humidity can affect the curing process.
  5. Extended Curing:
    • While a minimum of two weeks is recommended for curing, many growers prefer a more extended curing period of 4-8 weeks or even longer. This extended curing can enhance the flavor, aroma, and smoothness of the buds.
  6. Quality Control:
    • During curing, periodically check the buds for any signs of mold, off smells, or other issues. Remove any affected buds immediately.
  7. Final Testing:
    • After the desired curing period, the buds are ready for consumption. Properly cured cannabis should have a well-balanced flavor, enhanced aroma, and a smooth smoke.

Remember that the ideal drying and curing process can vary based on the strain, environmental conditions, and personal preferences. Experimenting with different drying and curing times can help you discover the optimal conditions for preserving the quality of your cannabis buds.
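For growers who log conditions with sensors, here is a minimal, hypothetical Python sketch that checks readings against the target ranges given above (drying: 60-70°F and 45-55% RH; curing jars: 55-65% RH at the same temperatures). The stage names, thresholds, and readings are illustrative assumptions, not tied to any particular product.

```python
# Hypothetical monitoring helper based on the target ranges above.
TARGETS = {
    "drying": {"temp_f": (60, 70), "rh_pct": (45, 55)},
    "curing": {"temp_f": (60, 70), "rh_pct": (55, 65)},
}

def check_environment(stage, temp_f, rh_pct):
    issues = []
    for name, value in (("temp_f", temp_f), ("rh_pct", rh_pct)):
        lo, hi = TARGETS[stage][name]
        if not lo <= value <= hi:
            issues.append(f"{name} = {value} outside {lo}-{hi}")
    return issues or ["in range"]

# Example: a curing jar at 68°F and 70% RH flags the humidity, suggesting
# the buds need more air time (a longer "burp") before resealing.
print(check_environment("curing", temp_f=68, rh_pct=70))
```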

How does the choice of growing medium impact the overall success of indoor cannabis cultivation on a commercial scale?

The choice of growing medium plays a crucial role in the overall success of indoor cannabis cultivation on a commercial scale. Different growing mediums offer distinct advantages and considerations, influencing factors such as plant health, nutrient absorption, water retention, and overall yield. Here are several ways in which the choice of growing medium impacts commercial cannabis cultivation:

  1. Nutrient Availability:
    • Soil: Traditional soil provides a natural and complex ecosystem for plant growth. The nutrient content in soil can vary, and organic soils can contain beneficial microorganisms. However, nutrient levels need careful monitoring and adjustment.
    • Coco Coir: Coco coir is a popular hydroponic substrate that offers good water retention and aeration. It’s often used with hydroponic nutrient solutions, providing more control over nutrient levels than soil.
    • Hydroponics: Hydroponic systems use nutrient-rich water solutions to deliver essential elements directly to plant roots. This allows for precise control over nutrient levels, promoting faster growth and potentially higher yields.
  2. Water Retention and Drainage:
    • Soil: Soil retains water well but can vary in drainage capabilities. Proper drainage is crucial to prevent overwatering and root diseases.
    • Coco Coir: Coco coir has excellent water retention and drainage properties. It can hold water like soil but allows excess water to drain away, preventing waterlogged conditions.
    • Hydroponics: Hydroponic systems provide optimal water and nutrient delivery directly to plant roots. However, maintaining proper nutrient levels is critical, and excess water must be efficiently removed.
  3. pH Management:
    • Soil: Soil pH can impact nutrient availability. It’s important to monitor and adjust soil pH to ensure plants can absorb nutrients effectively.
    • Coco Coir: Coco coir tends to have a neutral pH, but adjustments may still be necessary over time, especially as coco coir ages.
    • Hydroponics: pH levels in hydroponic systems must be carefully managed to ensure nutrient availability. Automated pH systems are often used to maintain optimal levels.
  4. Aeration and Oxygenation:
    • Soil: Soil provides natural aeration, allowing oxygen to reach the roots. Proper aeration is crucial for root health.
    • Coco Coir: Coco coir offers good aeration, promoting oxygenation of the root zone. This can enhance nutrient uptake and reduce the risk of root diseases.
    • Hydroponics: Hydroponic systems provide direct oxygenation to the roots. Oxygen levels are crucial for preventing root rot and ensuring optimal nutrient absorption.
  5. Disease and Pest Resistance:
    • Soil: Healthy soil ecosystems can contain beneficial microorganisms that contribute to disease resistance. However, soil can also harbor pests and pathogens.
    • Coco Coir: Coco coir has fewer inherent pests and diseases compared to soil but may still require vigilant management.
    • Hydroponics: Hydroponic systems, when properly maintained, can minimize the risk of soil-borne diseases and pests. However, waterborne issues must be addressed.
  6. Consistency and Control:
    • Soil: Soil can vary in composition and nutrient content, leading to less precise control over growing conditions.
    • Coco Coir: Coco coir provides more consistency and control than soil, especially in terms of nutrient delivery.
    • Hydroponics: Hydroponic systems offer the highest level of control over nutrient levels, pH, and growing conditions, promoting consistent and optimized plant growth.

Ultimately, the choice of growing medium should align with the specific needs, resources, and goals of the commercial cannabis cultivation operation. Regular monitoring, careful management, and adaptation to the unique characteristics of each medium are essential for achieving success on a large scale.
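To illustrate the automated pH management mentioned above for hydroponic systems, here is a simplified, hypothetical control-loop sketch. The 5.8-6.2 target band is a commonly cited hydroponic range and the 1 mL incremental dose is an arbitrary illustration; real dosing depends on reservoir volume and the buffering capacity of the nutrient solution.

```python
# Simplified, hypothetical pH control-loop sketch (not a real controller).
TARGET_PH = (5.8, 6.2)
DOSE_ML = 1.0  # small dose, then re-read before dosing again

def ph_adjustment(reading):
    low, high = TARGET_PH
    if reading < low:
        return ("pH up", DOSE_ML)
    if reading > high:
        return ("pH down", DOSE_ML)
    return ("no action", 0.0)

for reading in (5.4, 6.0, 6.6):  # simulated sensor readings
    action, ml = ph_adjustment(reading)
    print(f"pH {reading}: {action} ({ml} mL)")
```

Dosing in small increments with a re-read between doses mirrors the gradual adjustment approach described above and avoids overshooting the band.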


The goal of this study was to compare the prevalence rates of loneliness between individuals who are dependent on methamphetamine and those who are not, and to evaluate the impact of loneliness on risky sexual beliefs and poor intentions to practice safer sex. We hypothesized that individuals who are dependent on methamphetamine would have higher rates of loneliness than those who are not. Furthermore, we hypothesized that loneliness would be associated with riskier beliefs and intentions about sex, such that individuals with high rates of problematic loneliness would endorse poorer personal norms about practicing safer sex and poorer intentions to practice safer sex, particularly among methamphetamine-dependent individuals. The study was conducted at the University of California San Diego Translational Methamphetamine AIDS Research Center from June 2014 to June 2017, after receiving approval from its Institutional Review Board. The current study was conducted as a sub-study of a large, center-wide project examining the intersection of methamphetamine and HIV on the central nervous system (CNS) and behavior, particularly given the established independent effects of methamphetamine and HIV on the CNS, as well as the link between methamphetamine use and HIV risk. This larger project sought to study individuals whose methamphetamine use exceeded a particular exposure threshold; however, recruitment was kept broad to best generalize findings. Therefore, participants were not required to be in a particular stage of their addiction to participate. Rather, participants were recruited from the greater San Diego community, a primarily urban area located near the southwest U.S. border. Specific recruitment locations included substance abuse treatment programs, HIV clinics, and the broader community. Recruitment methods included hosting community education events, using social marketing to promote the research study, and engaging a wide range of community outreach venues. After providing written, informed consent, participants underwent a comprehensive, standardized neurobehavioral and neuromedical assessment. Inclusion criteria were broad and encompassed any individuals aged 18 or older from the local community who were able to complete in-person study assessments. Exclusion criteria included prior histories of neurological or severe psychiatric conditions independent of methamphetamine use and/or HIV infection.

Participants included 115 English-speaking adults stratified by whether or not they met the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) criteria for lifetime methamphetamine dependence and methamphetamine abuse or dependence ≤ 18 months prior to study enrollment. Individuals who did not meet both of these criteria were placed in the control group. The DSM-IV was used instead of the DSM-5 to maintain protocol consistency with other ongoing research projects at our center that had been developed and executed prior to DSM-5 publication; its use thus allowed for comparisons with other center-wide studies and their cohorts. Criteria for the METH+ group were chosen to capture individuals who experienced both severity and recency of methamphetamine use-related problems. An 18-month time frame for classifying recent/current methamphetamine abuse or dependence was selected to match DSM-IV's clinical diagnostic time frame as closely as possible, while also balancing the feasibility of participant recruitment. Three individuals in the METH− group reported limited and/or remote methamphetamine use that did not meet DSM-IV criteria for abuse or dependence. All three were older adults with at least a high school education; two were Black, two were HIV−, and two were male. Their age of first methamphetamine use ranged from 30 to 41 years, their time since last methamphetamine use ranged from about 1 month to more than 25 years, their total lifetime duration of methamphetamine use ranged from about 3 to 9 months, and their total lifetime quantity of methamphetamine use ranged from about 6 g to 1.8 kg. Individuals meeting criteria for non-methamphetamine substance dependence could be enrolled if they had last met criteria more than 5 years prior to study enrollment; similarly, individuals meeting non-methamphetamine substance abuse criteria could be enrolled if they had last met criteria more than 12 months prior to enrollment. Due to the high prevalence of alcohol and cannabis use histories in our overall sample, individuals meeting criteria for alcohol or cannabis abuse or dependence were enrolled, provided that criteria for dependence had last been met more than 12 months prior to enrollment. Per these criteria, only one person in each of the METH− and METH+ groups met criteria for a current, non-methamphetamine substance use disorder. In the overall sample, 58% were people with HIV, which was established by self-report and confirmed by the Miriad HBc/HIV/HCV finger-stick point-of-care test. Table 1 summarizes other relevant participant characteristics in each METH group, and Table 2 summarizes the DSM-IV-TR diagnostic criteria for methamphetamine abuse and dependence. Demographic characteristics and scores on the aforementioned behavior scales were compared between the METH−/+ groups using t-tests for continuous variables and Pearson χ² tests, or Fisher's exact test, for dichotomous or nominal variables. Non-parametric Spearman's ρ was used to examine univariable associations with loneliness, given the non-normal distribution of scores.
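As a hedged sketch of the group comparisons just described, the following Python snippet runs a t-test, a Pearson χ² test, and a Spearman correlation with SciPy. All arrays and counts are placeholders, not study data, and the variable names are invented for illustration.

```python
# Placeholder data only; none of these numbers come from the study.
import numpy as np
from scipy import stats

# t-test for a continuous characteristic (e.g., age) across METH-/+ groups
age_meth_neg = np.array([44, 51, 39, 58, 47])
age_meth_pos = np.array([41, 36, 49, 33, 45])
t_stat, p_t = stats.ttest_ind(age_meth_neg, age_meth_pos)

# Pearson chi-square for a dichotomous characteristic (2x2 counts)
table = np.array([[20, 15],
                  [30, 50]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Spearman's rho for loneliness, which has a non-normal distribution
loneliness = np.array([30, 45, 52, 38, 60])
safer_sex_norms = np.array([4.2, 3.1, 2.8, 3.9, 2.5])
rho, p_rho = stats.spearmanr(loneliness, safer_sex_norms)

print(f"t-test p = {p_t:.3f}, chi2 p = {p_chi2:.3f}, Spearman p = {p_rho:.3f}")
```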

Multivariable linear regression was used to investigate the association between loneliness and METH−/+ group, as well as the association between loneliness and sexual risk norms and intentions. Participant characteristics from Table 1 were selected as covariates in the multivariable regression if they significantly differed between METH−/+ groups at a critical α-level of .05. Lifetime history of mood disorder and social support network were also considered as covariates given their significant associations with the primary outcome variables. Due to high collinearity between premorbid verbal IQ and years of education, the former was selected as the model covariate given its robustness to other potential confounds. Though HIV serostatus and neurocognitive impairment were unrelated to loneliness in our overall sample, they were selected as covariates because of prior literature supporting their associations with loneliness. In our overall sample, people with HIV also were more likely to have engaged in riskier sexual behaviors over the past 6 months and prior to the past 6 months compared to HIV− individuals. Additionally, although 98.5% of people with HIV were on antiretroviral therapies and 82.3% had undetectable viral loads that were unassociated with poorer norms or poorer intentions to practice safer sex, HIV+ serostatus was associated with poorer intentions to practice safer sex in the whole sample, thereby providing further support for including HIV serostatus as a covariate. Statistical significance was determined using a critical α-level of .05 for all analyses.

The Spearman ρ correlations between loneliness and continuous variables in the METH− and METH+ groups are listed in Table 3, along with results of one-way ANOVAs comparing loneliness across categorical variables. There was a significant omnibus difference in loneliness across people of different ethnicities in the METH− but not the METH+ group. Among METH− individuals, Black participants reported the highest loneliness, while other/Hispanic participants were the least lonely. Those with a lifetime history of mood disorder reported significantly higher loneliness than those without, but only within the METH− group. In the METH+ group, people with HIV were significantly lonelier than HIV− individuals; there was no HIV effect on loneliness in the METH− group. The only non-methamphetamine use disorder associated with loneliness was lifetime history of opioid use disorder, and this association was noted only in the METH+ group: those in the METH+ group with a lifetime history of opioid use disorder reported lower loneliness relative to METH+ individuals without such a history. Greater norms about practicing safer sex and greater intentions to practice safer sex were each significantly associated with lower current sexual risk in both groups. In a multivariable regression including only METH+ individuals, potentially problematic levels of loneliness and neurocognitive impairment remained significantly associated with poorer beliefs about practicing safer sex after controlling for age, HIV serostatus, lifetime history of mood disorder, lifetime history of other substance use disorder, age of first methamphetamine use, and total number of people in the social support network. In METH− individuals, this model was not significant.
Findings remained unchanged when lifetime history of other substance use disorders was replaced with the more specific lifetime history of opioid use disorder variable. In addition, our results held regardless of the recency of participants' methamphetamine use, which could also serve as a proxy for recruitment source. In METH+ individuals, potentially problematic levels of loneliness and HIV+ serostatus remained significantly associated with poorer intentions to practice safer sex after controlling for age, neurocognitive impairment, premorbid verbal IQ, lifetime history of mood disorder, lifetime history of other substance use disorders, age of first methamphetamine use, and total number of people in the social support network. In METH− individuals, this model was not significant. Again, findings remained unchanged when lifetime history of other substance use disorders was replaced with lifetime history of opioid use disorder, and when recency of methamphetamine use was considered.
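A minimal sketch of the kind of multivariable linear regression described above, using statsmodels with a deliberately reduced, hypothetical covariate set (the full covariate list is given in the text, and all data below are fabricated):

```python
# Hypothetical data frame with one row per METH+ participant; variable
# names are invented for illustration and the covariate set is truncated.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "safer_sex_intentions":   [3.1, 2.4, 4.0, 2.8, 3.5, 2.2],
    "problematic_loneliness": [1, 1, 0, 1, 0, 1],  # binary indicator
    "hiv_pos":                [1, 0, 1, 1, 0, 1],
    "age":                    [38, 45, 29, 51, 42, 33],
    "social_network_size":    [4, 2, 7, 3, 5, 1],
})

model = smf.ols(
    "safer_sex_intentions ~ problematic_loneliness + hiv_pos"
    " + age + social_network_size",
    data=df,
).fit()
print(model.summary())
```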

Given prior literature suggesting a role of impulsivity/disinhibition in the relationship between loneliness and sexual risk behavior, the effect of impulsivity/disinhibition on poorer beliefs about, and intentions to, practicing safer sex was also considered. In the overall sample, higher impulsivity/disinhibition was associated with having potentially problematic levels of loneliness, regardless of methamphetamine status. However, impulsivity/disinhibition did not reach statistical significance when predicting beliefs about practicing safer sex or intentions to practice safer sex after controlling for HIV serostatus, methamphetamine status, potentially problematic loneliness, and the interaction between methamphetamine status and potentially problematic loneliness. The interaction between methamphetamine status and potentially problematic loneliness was a significant contributor to both beliefs about practicing safer sex and intentions to practice safer sex. HIV-positive serostatus was also a significant contributor to poorer intentions to practice safer sex, but not to beliefs about practicing safer sex. There was no significant interaction between HIV and methamphetamine status on intentions to practice safer sex. Figures 2 and 3 illustrate the interactive effect between potentially problematic loneliness and methamphetamine on norms and intentions to practice safer sex.

Our findings indicate that individuals with methamphetamine use disorders reported higher levels of loneliness than those in a METH− comparison group, consistent with prior reports, although previous work is limited. Our cross-sectional results cannot disentangle the directionality of this relationship, but it is presumed to be bidirectional: loneliness may predispose people to methamphetamine use, and methamphetamine use may have severe social consequences that contribute to loneliness. Increased loneliness in METH+ compared to METH− individuals held despite comparable levels of exposure to factors associated with loneliness. In fact, 50% of the METH+ group met criteria for problematic levels of loneliness, compared to about 30% of the METH− group, highlighting a potential need for psychological intervention. The presence of loneliness also may result in poorer personal norms and intentions about practicing safer sex, possibly as a means of avoiding emotional pain and distress through pleasure-seeking behaviors, with the potential result of engaging in unsafe sexual behaviors. As such, loneliness can have indirect, downstream public health implications in a population already at high risk for engaging in riskier behaviors. In the general population, loneliness is a significant risk factor for many negative health consequences, such as cognitive decline and cardiovascular risk. In people with HIV, loneliness has been associated with poorer immune function, more depressive symptoms, and lower CD4+ count. Recent work has highlighted the association between loneliness and substance use and dependence, such as opioid use disorder, alcohol use disorder, and co-occurring addictive disorders such as gambling and internet addiction. Our study extends this research to illustrate that among individuals who are dependent on methamphetamine, loneliness is also associated with riskier beliefs and intentions about practicing safer sex, above and beyond the impact of other pertinent factors.
The higher prevalence of loneliness in methamphetamine users, and its association with riskier beliefs and intentions about practicing safer sex, may be explained by several aspects specific to methamphetamine addiction: positive reinforcement, negative reinforcement, inhibitory control dysfunction, incentive salience, and stimulus-response learning.


Nine percent of South African adolescents report being given illicit substances on school property, and school-based initiatives have had documented success in preventing alcohol and cigarette use in South Africa. Implementing interventions near heroin hangouts reduced injection heroin use in Tanzania. Schools and the other settings where people smoke heroin in South Africa may therefore be important sites for intervention. Because MAT prevents initiation of injection drug use by decreasing the number of active heroin initiators, and behavioral interventions may deter people who inject heroin from initiating others, MAT and/or behavioral interventions targeting people who smoke heroin may also reduce the number of smoked heroin initiators and initiations. Although this is the largest qualitative study of smoked heroin in South Africa to date, it has limitations. Participants were men in substance use treatment who smoked heroin, and caution must be taken in generalizing these findings to other users of heroin, including people who inject heroin and women. Although participants commented on female trajectories into smoked heroin use and the unique role of intimate partners in initiating women, we were unable to access women's lived experiences firsthand. Because of the interplay between gender and substance use, sex, and HIV transmission in Africa, research and interventions will likely need to be tailored to women. Participants' experiences may also differ from those of people who are actively smoking heroin, some of whom may lack the ability to pay for or otherwise access substance use treatment. All participants completed at least some secondary school, and any planned school-based prevention program may not be accessible to people at risk of initiating smoked heroin who do not attend school. Lastly, as a qualitative study, certain important details regarding initiation of the drug, such as age at first use and duration of substance use, were not captured systematically. It will be useful to explore these and other factors in epidemiological studies. Notwithstanding these limitations, we identified several pathways to smoked heroin use in South Africa and characterized social influences on initiation. With many parallels to other heroin epidemics in Africa, international cooperation and coordination in the scale-up of MAT, harm reduction services, and other resource-conscientious interventions are recommended.

However, interventions targeting injection drug use in this context may not reach people who smoke heroin. Only by understanding the risk environments of smoked heroin will we be able to target interventions to prevent smoked heroin use, prevent the transition to injection use, and mitigate other social harms affecting people who smoke heroin and their communities.

Adolescence is a developmental period between childhood and adulthood characterized by marked physiological, psychological, and behavioral changes. Adolescents experience rapid physical growth, sexual maturation, and advances in cognitive and emotional processing. These changes coincide with increases in substance use, with alcohol being the most widely used illegal substance among adolescents. National survey data indicate that 33% of 8th grade students have tried alcohol, and this percentage increases to 70% among 12th graders. Of greater concern is the increase in heavy episodic drinking, where prevalence rates increase from 6% to 22% for 8th and 12th grades, respectively, as heavy episodic drinking during adolescence is associated with numerous negative effects on adolescent health and well-being, including risky sexual behaviors, hazardous driving, and alterations in adolescent brain development. During adolescence, the brain undergoes significant changes, and a recent longitudinal neuroimaging study suggests that heavy episodic drinking during this developmental period alters brain functioning. Squeglia and colleagues examined the effects of heavy episodic drinking on brain function during a visual working memory task, measuring brain activity at baseline and again at follow-up to compare adolescents who transitioned into heavy drinking with demographically matched adolescents who remained nondrinkers. Adolescents who initiated heavy drinking exhibited increasing brain activity in frontal and parietal regions during the visual working memory task, whereas those who remained nondrinkers through follow-up showed decreasing frontal activation, consistent with studies of typical development. Thus, adolescent heavy episodic drinking may alter brain functioning involved in working memory; however, additional longitudinal studies are needed to explore the effects of alcohol on the neural correlates of other vital cognitive processes, such as response inhibition. Response inhibition refers to the ability to withhold a prepotent response in order to select a more appropriate, goal-directed response. The neural circuitry underlying response inhibition develops during adolescence, and as such, brain response during inhibition changes during adolescence. Briefly, cross-sectional research indicates that brain activation during response inhibition transitions from diffuse prefrontal and parietal activation to localized prefrontal activation.

Longitudinal studies report that atypical brain responses during response inhibition, despite comparable task performance, are predictive of later alcohol use, substance use and dependence symptoms, and alcohol-related consequences. Together, these findings indicate that the neural substrates associated with response inhibition change over time and that abnormalities in their development may contribute to later substance use. To this end, the current longitudinal fMRI study examined the effects of initiating heavy drinking during adolescence on brain activity during response inhibition. We examined blood oxygen level-dependent (BOLD) response during a go/no-go response inhibition task prior to alcohol initiation, then again on the same scanner approximately 3 years later, after some adolescents had transitioned into heavy drinking. Based on our previous findings, we hypothesized that adolescents who transition into heavy drinking would show reduced BOLD response during response inhibition prior to initiating heavy drinking, followed by increased activation after the onset of heavy episodic drinking, compared to adolescents who remained non-users. By identifying potential neurobiological antecedents and consequences of heavy episodic drinking, this study extends previous research on the effects of alcohol on brain function and points to risk factors for heavy episodic drinking during adolescence.

Examining a longitudinal neuroimaging sample of youth both pre- and post-alcohol use initiation allowed us to address the etiology of neural pattern differences. Although group × time effect sizes were small, our findings suggest that differential neural activity patterns both predate alcohol initiation and arise as a consequence of heavy drinking. We found significant drinking status × time interactions in a number of distinct and reproducible brain regions commonly associated with response inhibition. Prior to initiating substance use, adolescents who later initiated heavy use showed less BOLD activation during inhibitory trials in frontal regions, including the bilateral middle frontal gyri, and in non-frontal regions, including the right inferior parietal lobule, putamen, and cerebellar tonsil, compared with those who continued to abstain from alcohol use.
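As an illustrative sketch only: a drinking-status × time interaction of the kind reported above could be tested with a linear mixed-effects model with a random intercept per subject, as below. The data frame is fabricated for demonstration, and the original analyses may well have used different software and voxelwise models.

```python
# Fabricated long-format data: one row per subject per scan.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":   ["heavy"] * 6 + ["control"] * 6,
    "time":    ["baseline", "followup"] * 6,
    "bold":    [0.18, 0.35, 0.22, 0.40, 0.20, 0.33,   # heavy: rising
                0.30, 0.22, 0.33, 0.25, 0.28, 0.21],  # control: falling
})

# Random intercept per subject; the group:time term is the effect of interest.
model = smf.mixedlm("bold ~ group * time", df, groups=df["subject"]).fit()
print(model.summary())
```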

This pattern of hypoactivity during response inhibition among youth who later initiated heavy drinking is consistent with studies showing that decreased activity during response inhibition predicts later alcohol use and substance use. Indeed, change in BOLD response contrast over time in the right middle frontal gyrus was associated with lifetime alcohol drinks at follow-up. Together, these findings provide additional evidence for the utility of fMRI in identifying neural vulnerabilities to substance use even when no behavioral differences are apparent. At follow-up, adolescents who transitioned into heavy drinking showed increasing brain activation in the bilateral middle frontal gyri, right inferior parietal lobule, and left cerebellar tonsil during inhibition, whereas non-drinking controls exhibited decreasing brain activation in these regions. These regions have been implicated in processes of stimulus recognition, working memory, and response selection, all of which are critical to successful response inhibition. Indeed, neuroanatomical models of inhibitory control highlight the importance of frontoparietal attentional control and working memory networks. These models posit that inhibition and cognitive control involve frontoparietal brain regions when detecting and responding to behaviorally relevant stimuli. Thus, these findings suggest that heavy drinkers recruit greater activity in these neural networks in order to successfully inhibit prepotent responses. Given the longitudinal nature of the current study, it is important to consider our findings in the context of typical adolescent neural maturation. During typical neural maturation, adolescents exhibit less activation over time, as neural networks become more refined and efficient. This typical pattern of neural maturation occurred among adolescents who remained nondrinkers. Adolescents who transitioned into heavy drinking showed the opposite pattern, increasing activation despite similar performance, suggesting that alcohol consumption may alter typical neural development. The current findings should be considered in light of possible limitations. Although heavy drinking and non-drinking youth groups were matched on several baseline and follow-up measures, heavy drinking youth reported more cannabis, nicotine, and other illicit drug use at follow-up. Differential activation remained significant after statistically controlling for lifetime substance use, although such differences may still contribute to our findings. Further, simultaneous substance use might be associated with these results; future research should compare the effects of polysubstance use during the same episode with the effects of heavy drinking alone on neural responses. It is also important to note that adolescence is a period of significant inter-individual differences in neural development, and as such, we matched self-reported pubertal development and age at baseline and follow-up to address this issue.

For the current sample, histograms of age distributions at baseline and follow-up are provided in Online Resource 1. Again, our groups were well matched on these variables; however, additional longitudinal research examining the effects of puberty and hormonal changes on neural functioning and response inhibition is needed. In summary, the current data suggest that pre-existing differences in brain activity during response inhibition increase the likelihood of initiating heavy drinking, and that initiating heavy alcohol consumption leads to differential neural activity associated with response inhibition. These findings make a significant contribution to the developmental and addictive behaviors fields, as this is the first study to examine differences in neural responses during response inhibition prior to and following the transition into heavy drinking among developing adolescents. Further, we provide additional support for the utility of fMRI in identifying neural vulnerabilities to substance use even when no behavioral differences are apparent. Identifying such neural vulnerabilities before associated behaviors emerge provides an additional tool for selecting and applying targeted prevention programs. Given that primary prevention approaches among youth have not been widely effective, it is possible that targeted prevention programs for youth who are at greatest neurobiological risk could be a novel, effective approach. As such, our findings provide important information for improving primary prevention programs, as well as answering the question of whether neural differences predate alcohol initiation or arise as a consequence of alcohol use.

Although researchers in sociology, cultural studies, and anthropology have attempted, for the last 20 years, to re-conceptualize ethnicity within post-modernist thought and have debated the usefulness of such concepts as "new ethnicities," researchers within the field of alcohol and drug use continue to collect data on ethnic groups on an annual basis using previously determined, census-formulated categories. Researchers use these data to track the extent to which ethnic groups consume drugs and alcohol, exhibit specific alcohol- and drug-using practices, and develop substance use related problems. In so doing, particular ethnic minority or immigrant groups are identified as being at high risk for developing drug and alcohol problems. In order to monitor the extent to which such risk factors contribute to substance use problems, the continuing collection of data is seen as essential. However, the collection of this epidemiological data, at least within drug and alcohol research, seems to take place with little regard for either contemporary social science debates on ethnicity or the ongoing debates within social epidemiology on the usefulness of classifying people by race and ethnicity. While the conceptualization of ethnicity and race has evolved over time within the social sciences, "most scholars continue to depend on empirical results produced by scholars who have not seriously questioned racial statistics". Consequently, much of the existing research in drug and alcohol research remains stuck in discussions about concepts long discarded in mainstream sociology or anthropology, yielding robust empirical data that is arguably based on questionable constructs.

Two models were run, evaluating percent heavy drinking days and the average number of drinks per week in the 4 weeks following the intervention or matched control. Both models controlled for age, sex, cigarette smoking status, positive urine THC, and baseline percent heavy drinking days or average drinks per week, depending on the drinking outcome model. Z-statistic images were thresholded with cluster-based corrections for multiple comparisons based on the theory of Gaussian random fields, with a cluster-forming threshold of Z > 2.3 and a corrected cluster-probability threshold of p < 0.05. This study examined the effect of a brief intervention on drinking outcomes, neural alcohol cue-reactivity, and the ability of neural alcohol cue-reactivity to predict drinking outcomes. Results did not show an effect of the brief intervention on alcohol use in this sample, and the intervention was not associated with differential neural alcohol cue-reactivity. Exploratory secondary analyses revealed inverse relationships between differential neural activity in the precuneus and medial frontal gyrus and alcohol-related outcomes, but these relationships held across conditions. The lack of a main effect of intervention on either drinking outcomes or neural alcohol cue-reactivity is contrary to the study hypothesis, whereby individuals assigned to the brief intervention condition were expected to show greater reductions in alcohol use compared to a no-intervention control condition. In the present study, reductions in alcohol use were observed in both conditions, and it appears that simply participating in an alcohol research study at an academic medical center prompted notable behavioral changes. Reductions in drinking following study participation may be attributable to assessment reactivity, in which participants curb drinking after completing alcohol-related assessments and interviews. This phenomenon has been well documented across several assessment modalities, including the AUDIT and TLFB interviews, which were used in the present study. In addition, recent studies have highlighted the fact that single-session interventions, while efficacious in relatively large RCTs, have modest effect sizes.
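As a rough illustration of the cluster-based correction described at the start of this passage (a cluster-forming threshold of Z > 2.3, with clusters retained only if they survive an extent criterion), here is a minimal sketch. The true Gaussian-random-field correction derives the minimum cluster extent from the estimated smoothness of the statistic image; the fixed `min_cluster_size` below is a hypothetical stand-in for that computed value.

```python
import numpy as np
from scipy import ndimage

def cluster_threshold(z_map: np.ndarray, z_thresh: float = 2.3,
                      min_cluster_size: int = 100) -> np.ndarray:
    """Keep supra-threshold voxels only if their contiguous cluster is large
    enough. `min_cluster_size` is illustrative, not GRF-derived."""
    supra = z_map > z_thresh                       # cluster-forming threshold
    labels, n_clusters = ndimage.label(supra)      # label connected components
    sizes = ndimage.sum(supra, labels, index=np.arange(1, n_clusters + 1))
    surviving = np.flatnonzero(sizes >= min_cluster_size) + 1  # label ids
    return np.where(np.isin(labels, surviving), z_map, 0.0)
```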

As such, the present study may have been underpowered to detect small effect sizes, which may account for the null findings regarding intervention effects on drinking outcomes. Future studies are encouraged to recruit larger samples of non-treatment-seeking participants to better detect small effects. Furthermore, this finding should be considered in light of the sample, which consisted of non-treatment seekers from the community, not the typical sample evaluated in brief intervention research. However, non-treatment-seeking individuals with similar alcohol use characteristics are open to participating in brief interventions. Also of note, the drinking outcomes in this study were evaluated using variables derived from the TLFB as the primary outcome measure. There is some evidence that some individuals under-report substance use when the TLFB is administered by an interviewer rather than a computer, potentially due to a social desirability bias in which participants wish to present themselves favorably to the interviewer. In the present study, the TLFB assessment was conducted by a trained research assistant rather than the clinician who delivered the brief intervention, in order to reduce this bias. However, the TLFB is a retrospective self-report measure and as such is subject to limitations, including inaccuracies in participant recall. Alcohol use was also not biologically verified in this study. In light of the null findings regarding intervention effects on drinking in this study, it is perhaps not surprising that intervention condition was not associated with differences in neural cue reactivity in this sample. While it has been argued that neuroimaging techniques may be sensitive to mechanisms of behavior change, in the present study, neural processing of alcohol taste cues was no more sensitive to intervention effects than traditional measures of drinking outcomes. It should be noted, however, that the alcohol taste cues task used in this study was abbreviated from its original version in order to increase the number of trials without substantially increasing scan duration. Additionally, the current version of the task used water as a control condition, while the original version employed an appetitive control condition in the form of litchi juice.

While the present version was recently validated in a separate sample, it may not have recruited the reward circuitry in response to alcohol cues as robustly as its previous iteration. Importantly, across both conditions, exposure to alcohol taste resulted in increased activation in frontal and limbic regions compared to water taste, suggesting the task was fundamentally internally valid. Nevertheless, the magnitude of the activation may have been more limited due to the combination of the shortened trial duration and the use of a non-appetitive control, thus hindering efforts to detect intervention effects on neural processing of alcohol cues. Considered together, both factors likely posed significant challenges to the primary aims of the study, which fundamentally represented an interaction effect between treatment type and cue type. Given this, large-magnitude main effects for both experimental factors would be optimal to bring the interaction into sharpest relief. Thus, the relatively modest effect size of the intervention and the sufficient but potentially smaller effects in the neuroimaging paradigm constrained the experimental tests. Future studies using neuroimaging to understand brief interventions will require substantially larger sample sizes to detect a clinical effect, and potentially different neuroimaging paradigms. Regarding the prediction of drinking outcomes, the most compelling finding in the present study is that activation to alcohol tastes in the precuneus and medial frontal gyrus was negatively associated with percent heavy drinking days. The effect was such that individuals who had greater neural reactivity to alcohol taste in the precuneus and prefrontal cortex had fewer percent heavy drinking days in the four weeks following the fMRI scan. Likewise, across groups, activation to alcohol tastes in the precuneus was negatively associated with average drinks per week. This pattern of results suggests that greater activation of the precuneus and frontal cortex during neural processing of alcohol taste cues, compared to control cues, predicts less drinking in the subsequent month. This effect was found across conditions, control and experimental, and is generally consistent with previous work suggesting that the precuneus is sensitive to changes in cue reactivity and possibly to changes in addiction severity.

The precuneus has also been implicated in a meta-analytic review of functional neuroimaging studies of alcohol cue reactivity. Thus, the implication of precuneus activation as a predictor of subsequent drinking in the real world extends this line of research and suggests that this region may serve as an intervention target, particularly with regard to the salience of alcohol cues. Although the vast majority of neuromodulation studies addressing motivation in addiction have focused on the frontal lobes, and the dorsolateral prefrontal cortex in particular, recent investigations have shifted attention to the precuneus, with some success. This prospect is particularly exciting in the context of psychological interventions. The precuneus has been functionally implicated in self-related cognition, which in many cases is essential for behavioral interventions to have an impact. For example, in the context of a brief intervention, a person must encode the factual information provided and square it with their own self-perceptions. Furthermore, in the current study's intervention, participants were specifically asked what they wanted to do next, and this necessarily demands meaningful self-related cognitive processing to generate behavior change. To illustrate this by contrast, we would have no expectation that a brief intervention would have a meaningful impact for a hypothetical individual who had no capacity to think abstractly about himself or herself. Thus, self-related cognition is a necessary elementary information processing capacity for this type of intervention to be useful, and the current study suggests that the extent to which it was engaged was associated with a more favorable outcome. Of course, this interpretation requires considerable caution because it is inherently conjecture, and the precuneus has been implicated in a number of other cognitive functions. A recent review of psychosocial interventions for addiction medicine identified increased recruitment of self-referential processing regions, including the precuneus and medial prefrontal cortex, in response to targeted motivational interventions. Additionally, in cannabis users, greater precuneus activation during a motivational interviewing intervention was associated with a reduction in cannabis problems at follow-up, further indicating that activation of self-referential processing circuitry may be important for treatment response. Other psychological interventions, including cue-extinction and episodic future thinking training, may be successful at increasing self-related cognition through precuneus activation. Precuneus activation has been demonstrated in cigarette smokers who were told to engage in self-focused coping during a cue-exposure task, indicating that interventions targeting self-focused coping during exposure to drug cues may effectively activate this brain region. Exposure to episodic future thinking activates the precuneus and mPFC and results in alcohol-dependent individuals increasing their valuation of future monetary rewards while lowering demand intensity for alcohol rewards. Frontoparietal circuitry, including the precuneus, is activated when participants make voluntary choices to cognitively reappraise craving responses or freely view craving cues. Of note, the precuneus is not neuroanatomically uniform; it has distinct functional subregions along both the anterior-posterior and dorsal-ventral axes, and distinct patterns of functional connectivity by subregion.
The current study reveals associations for the precuneus in general, but cannot speak to subregional activation. In sum, the current study sought to examine whether a brief intervention would reduce both drinking and alcohol motivation as measured by neural reactivity to alcohol cues, and neither hypothesis was supported.

This conclusion, however, must be tempered by effect size considerations for both the intervention and the paradigm, as well as the apparently substantial reactivity effects present in the control condition. Each of these has important methodological implications for future studies of the neural mechanisms of alcohol-related behavior change. In addition, independent of intervention, exploratory analyses revealed differential neural reactivity that predicted more favorable outcomes, particularly in the precuneus, suggesting that it is a promising neural substrate warranting further study in this line of inquiry.

The worldwide crisis related to the non-medical use of opioids continues, as reflected in high rates of deaths associated with prescription opioids, recreational drugs such as heroin, and increasingly illicit supplies of fentanyl and its analogs. The number of individuals seeking treatment for opioid dependence is high. Improved understanding of the risk factors for opioid addiction, as well as the behavioral and neurobiological consequences of opioid exposure, may enhance the discovery of new or improved avenues for therapy. Pre-clinical models in, e.g., laboratory rats have been critical to advancing understanding but have been mostly limited to the examination of opioids delivered by parenteral injection, the insertion of subcutaneous pellets or pumps, or oral dosing. Many humans use opioids by the inhalation route, and indeed, one of the original public health crises related to non-medical use of opioids in the modern era was associated with the inhalation of vapor created by heating opium. The inhalation route might induce differences in the speed of brain entry, first-pass metabolism, and sequestration and release from non-brain tissues, compared with these laboratory approaches. There is even some indication that inhalation use of heroin may precede a switch to injection use, perhaps due to perceptions of safety or cultural factors. This may be true with respect to, e.g., disease transmission associated with intravenous injection practices, but inhalation may involve other risks. For example, inhalation heroin users who develop leukoencephalopathy may be at increased risk for mortality compared with those who inject heroin. Prior studies have shown that the inhalation of heroin produces effects in animal models. Lichtman and colleagues reported anti-nociceptive effects of volatilized heroin and other opioids in mice using a heated glass pipe approach and a tail-withdrawal assay, and Carroll and colleagues demonstrated that monkeys would self-administer volatilized heroin. There has, however, been no subsequent broad adoption of either of these techniques, possibly due to the difficulty of creating and maintaining what appear to be one-off apparatus/devices constructed by the respective lab groups. The recent advent of e-cigarette-style devices presents the possibility of delivering a wide range of drugs other than nicotine, including opioids, via vapor inhalation. These devices are widely available, offer significant translational appeal, and can be easily adapted for use with laboratory rodents using commonly available sealed-top housing chambers, standard laboratory house vacuum, and simple plumbing.

One debate in the HIV literature is the extent to which HIV disease is associated with accelerated aging as opposed to other comorbid conditions or lifestyle factors associated with HIV. However, it is difficult to differentiate the effect of HIV itself from the downstream effects of HIV or the lifestyle factors associated with risk of contracting HIV. For example, many studies have documented a higher prevalence of vascular risk factors, including hyperlipidemia, type II diabetes, hypertension, and abdominal obesity, in PWH, likely due to the cardiometabolic side effects of ART, chronic immune activation, comorbid conditions that are more common in PWH, and increased exposure to chronic stressors. Cysique & Brew propose that vascular cognitive impairment is implicated in the pathogenesis of neurocognitive impairment in PWH, particularly older PWH, given that cardiovascular and cerebrovascular conditions can cause alterations in the blood-brain barrier, altered vascular reactivity, and brain changes, particularly in white matter. A recent meta-analysis by McIntosh et al. found that cardiovascular disease, particularly type II diabetes, hyperlipidemia, and current smoking, is associated with an increased risk of cognitive impairment in PWH. CVD has been associated with brain changes in PWH, but the majority of studies find an association with abnormal white matter, which was not examined in the current study. Several vascular risk factors were examined as covariates and were not found to be significantly associated with cognitive outcomes, although it is important to note that this group is limited in that participants with more significant vascular comorbidities, such as stroke or myocardial infarction, were excluded from these analyses. Nevertheless, further exploration of vascular risk factors and how they are associated with cognition and brain aging, in this cohort and in PWH more broadly, is warranted to further understand the effects of HIV versus the effects of comorbid conditions associated with HIV.

Comparing the results of the current study to the middle-aging literature is difficult. First, while brain changes due to AD pathology can begin in mid-life, it is still several years from mid-life to when one would develop late-onset aMCI/AD; thus, decades-long studies are needed to better understand brain changes in mid-life and how they relate to late-life AD. The literature is therefore sparse and generally relies on AD risk to examine memory and neuroimaging in mid-life. Second, many studies with an aging focus examine a memory composite, making it difficult to discern the association between delayed recall versus recognition and brain integrity. Even older-adult studies often do not specifically examine recognition memory, as they either examine aMCI diagnosis, which in the older-adult literature does not necessarily imply recognition impairment, or a broad memory domain. Third, given the AD focus of middle-aging studies, many focus on the medial temporal lobe and do not explore other regions such as the basal ganglia or the prefrontal cortex. From the sparse middle-aging research that examines both memory and neuroimaging, there is some indication that memory is associated with several neuroimaging correlates, most notably the medial temporal lobe. For example, the Wisconsin Registry for Alzheimer's Prevention (WRAP) study, which focuses on adults aged 40-65 and is enriched with a family history of AD, has reported memory and neuroimaging associations. In a study of 261 WRAP participants, those with subjective memory complaints had significant cortical thinning in the entorhinal, fusiform, posterior cingulate, and inferior parietal cortices and reduced amygdala volume compared to participants without subjective memory complaints. Subjective memory complaints were also associated with worse verbal memory. In 109 participants in the WRAP study, participants who were Aβ+, determined via PET imaging, exhibited a significantly thinner entorhinal cortex and accelerated age-associated thinning of the parahippocampal gyrus, and performed worse across cognitive measures, although not significantly worse, compared with the Aβ− group. Approximately 65% of WRAP participants were female and approximately 95% were non-Hispanic white. In a study of 210 adults aged 40-59 by Ritchie et al., worse spatial recall and visual recognition, as well as greater dementia risk, were associated with lower brain and hippocampal volume. Overall, the middle-aging literature is quite limited, so it is difficult to discern whether episodic memory, regardless of the type of memory, reliably associates with the medial temporal lobe in middle age.

While it is significant that these studies find associations between memory, AD risk, and the medial temporal lobe as well as other brain structures, they do not report prefrontal involvement like that observed in the current study. Of note, the participants in these two studies markedly differ from the CHARTER cohort; the CHARTER cohort was not enriched for family history of AD, is predominantly male, and this sub-sample is 50.5% African American/Black and 38.4% non-Hispanic White. However, Jak et al., who examined men in their 50s, also found that MCI diagnosis was associated with smaller hippocampal volume, although only the hippocampus was examined in that study. One curious finding was that the post hoc analyses examining a sub-sample of participants showed that better delayed recall was associated with a thinner entorhinal cortex. The aim of these analyses was to examine and confirm that prior findings are applicable to participants who were ideally treated for HIV disease and did not have any current substance use that could confound results. It should be noted, though, that the full sample is already a group that somewhat differs from the general population of PWH, in that it excludes PWH with severe comorbid conditions, has little to no current substance use, and is relatively well treated for HIV compared to the general population. Nevertheless, this finding is the opposite of what was hypothesized based on the literature. While thicker cortex has been associated with cognitive dysfunction in some settings, suggesting that it is the deviation from normal cortical thickness that is meaningful, this has not been observed within the HIV and AD literature. In HAND and AD, atrophy is consistently related to worse cognitive functioning. Therefore, it is likely that this is a spurious finding. Given that this is a relatively small sub-sample and may not be generalizable, this finding should not be over-interpreted. To further validate the specificity of memory and medial temporal lobe relationships and show that memory is not just related to overall brain integrity, processing speed and psychomotor skills were also examined in aim 1c. It was hypothesized that these two domains would be more associated with fronto-striatal structures implicated in HAND. In the entire sample, processing speed and psychomotor skills were not significantly associated with the medial temporal lobe, as hypothesized. However, they were not significantly associated with prefrontal or basal ganglia structures either. Overall, these findings were not in line with the hypotheses, and given that episodic memory was not associated with the medial temporal lobe, they do not help to demonstrate that associations with the medial temporal lobe are specific to memory rather than cognitive functioning in general.

In post hoc analyses, a thinner pars orbitalis was significantly associated with worse psychomotor functioning, which was somewhat in line with the hypotheses; however, the literature would suggest that psychomotor function should be more related to basal ganglia structures, particularly the putamen. Lastly, recognition and delayed recall and their associations with the primary motor cortex were examined in aim 1d. It was hypothesized that memory would be less associated with the motor cortex, given that the primary motor cortex is spared in AD. This sub-aim was explored in order to complete the double dissociation. Consistent with this hypothesis, episodic memory was not significantly associated with the primary motor cortex. However, given that memory was not associated with the medial temporal lobe, this lack of an association is not meaningful, and the double dissociation was not supported. When examining covariates for this aim of the study, AIDS status and APOE e4 status were associated with worse delayed recall. Regarding AIDS status, nadir CD4 count has repeatedly been associated with risk of HAND, both within the CHARTER cohort and in other cohorts around the world. While nadir CD4 count was examined as a potential covariate and was not found to be associated with delayed recall, AIDS status is of course associated with nadir CD4, given that an AIDS diagnosis is defined by either an opportunistic infection or a CD4 cell count dropping below 200 cells per microliter of blood at any point in one's life. It is thought that greater immunosuppression is associated with CNS injury and that neurologic consequences may persist even after treatment with ART and immune recovery; this highlights the importance of HIV identification and initiation of ART to avoid immunosuppression. Based on the estimated duration of HIV, most of these participants contracted HIV either before ART was available or in the era in which ART was not recommended to be initiated until after immunosuppression. On average, this cohort is characterized by a history of immunosuppression with subsequent immune recovery, given the high rates of AIDS alongside evidence of immune recovery, as indicated by a median CD4 count of almost 500 and high rates of current ART use. Given that ART policies have changed and it is now recommended that ART be initiated immediately after diagnosis, continued research on aging with HIV will be needed to understand different cohort effects. Nadir CD4 has been associated with thinner cortex and smaller brain volumes throughout the brain, particularly in the parietal, temporal, and frontal lobes and the hippocampus. Therefore, it is important to reiterate that episodic memory and prefrontal regions were significantly associated with one another even when accounting for AIDS status. Interestingly, one study found that low nadir CD4 was associated with reduced functional connectivity in memory networks in APOE e4 carriers but not non-carriers. Regarding APOE status, several studies have examined the association between memory and APOE status in middle age, finding that APOE e4 carriers have memory and cognitive performance similar to non-carriers until the mid-to-late 50s, when differences start to appear. Interestingly, the association between APOE e4 status and worse memory, specifically delayed recall but not recognition, was found within this group of PWH whose mean age was in the early 50s.

However, other early markers associated with preclinical AD, such as the association between memory and medial temporal lobe structures, were not observed, although APOE by medial temporal lobe interactions were not explored. Previous HIV studies have shown mixed results when examining the association between APOE status and cognition within PWH. Within the larger CHARTER cohort, Morgan et al. found that APOE e4 status was not associated with a greater risk of HAND; however, that study was from an earlier time point at which participants were, on average, 44.1 years old. Moreover, in another CHARTER study by Cooley et al., in a sub-sample of CHARTER participants aged 50 and over, APOE e4 status was not associated with volumetric differences on MRI or with MR spectroscopy metabolite analyses. However, these structural analyses may not have had the specificity to detect more minute differences in specific regions of the brain such as the medial temporal lobe. The HIV literature is nevertheless mixed, as a review found that some HIV studies do find worse cognition and brain integrity in PWH who are APOE e4 carriers whereas others do not. Several HIV studies, particularly in PWH over the age of 50, have found that APOE e4 status is associated with worse brain integrity in several regions, including cerebral white matter, the thalamus, and temporal, parietal, and frontal regions. Additionally, one study comparing PWH to HIV-negative controls found APOE e4 carrier status to be beneficial at younger ages, consistent with the well-documented antagonistic pleiotropy effect of APOE across the lifespan, but found that the negative effect of APOE e4 status in persons over the age of 50 was stronger in PWH compared to HIV-negative participants. Despite this one study, few studies have examined whether there is a synergistic effect between APOE status and HIV status on cognition and brain integrity, although animal models do suggest possible mechanisms of a synergistic interaction between HIV and APOE status.

The length of the production runs was at least 2 ns. Capacitance was computed using Equation 5.1, where V = ∆Ψ is the voltage drop applied across the cell and Q is the average absolute value of the charge stored on a single electrode. From the production run we also computed local properties of interest, such as the degree of confinement and the charge compensation per carbon, using an in-house software package developed for this study, in order to understand the mechanisms of charge storage and gain physical insight into differences in capacitance between the materials. The definitions of these local properties are provided in the Supporting Information. The FAU_1 EDLC initially had an excess ionic charge of 25 e in the cathode and an equal-in-magnitude, opposite-in-sign excess charge of anions in the anode. After equilibration, the magnitude of the excess ionic charge in each electrode is about 16 e. This represents a decrease of about 36% in the charge stored in the electrode, since the electrode's net charge always balances the excess charge of ions inside the electrode. This discharging shows that in the FAU_1 EDLC, the Poisson potential is not the potential we should apply to obtain the charge equivalent to the constant-charge simulation. To determine the potential that must actually be applied to obtain an average atom charge of ±0.01 e, we tried several potential differences and followed the evolution of the total electrode charge as a function of time. We found that this "0.01 e–equivalent" voltage is approximately 1.6 V. A longer simulation would be needed to determine the exact voltage to higher precision. Moreover, we were surprised to see that even when the change in equilibrium charge is small, equilibration at constant potential from a "nearby" constant-charge configuration still takes several nanoseconds. We believe this is due to two factors. First, the driving force for ions to diffuse in and out of the electrodes is proportional to how far the EDLC is from its equilibrium charge, leading to exponential convergence at constant potential. Second, the configuration of ions in the electrode is not only a function of the total charge in the electrode, but also depends sensitively on the distribution of this charge among the atoms of the electrode. Rearrangement of ions can take on the order of nanoseconds due to diffusive limitations in the liquid that are exacerbated by the bulky size of the ions and the small pores of the ZTCs, which at their narrowest points have diameters similar to the ion sizes.
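Since the capacitance definition referenced at the start of this passage (Equation 5.1) is just the ratio of stored charge to applied voltage, C = Q/∆Ψ, the post-processing step reduces to a trajectory average. A minimal sketch, assuming hypothetical arrays of the per-frame electrode charge (in elementary charges) and the applied potential drop in volts:

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # coulombs per elementary charge

def capacitance_farads(electrode_charges: np.ndarray, delta_psi: float) -> float:
    """C = Q / V: average absolute stored charge over the production run,
    divided by the applied potential drop."""
    q_avg = np.mean(np.abs(electrode_charges)) * E_CHARGE  # coulombs
    return q_avg / delta_psi

def gravimetric_capacitance(electrode_charges: np.ndarray,
                            delta_psi: float, mass_g: float) -> float:
    """Normalize by electrode mass in grams to obtain F/g."""
    return capacitance_farads(electrode_charges, delta_psi) / mass_g
```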

The same type of investigation was done for a number of other ZTCs and gave similar results. After switching to a constant-potential simulation using the Poisson potential drop, the resulting equilibrium electrode charges differed from the initial constant charges by 50–200%. Following this extensive study, we thus conclude that the approach of performing a constant-charge equilibration paired with a Poisson potential calculation is not suitable for the current work. For the constant-charge equilibration to still be useful, we would need a better way to estimate the potential difference to apply in the constant-potential simulation. In an attempt to improve the determination of the potential difference corresponding to a particular amount of stored charge, we turned to the calculation of the three-dimensional electric potential field as described by Wang et al. We used the software package developed by the authors for constant-potential simulations in a previous work, which implicitly computes the electrostatic potential at the position of each electrode atom in order to determine the fluctuating charges. We adapted the code to the case of a constant-charge simulation, in which the electrostatic potential of each electrode atom fluctuates. The electrostatic potential averaged over all the atoms in each electrode during a constant-charge simulation was computed for 5 ZTC and 2 CDC materials, and the results are shown in Table 5.1. For almost all materials, the potential drop computed from the averaged local electrostatic potential appears to have no correlation with the Poisson potential. For materials which had extremely low or negative capacitances, such as the ZTCs 221_2_6 and SAO, the average local potential drop was approximately 2 V. For FAU_1, where 1.06 V was too low to store a charge of ±0.01 e per atom, the local potential method computed a potential drop of 2.3 V for the ±0.01 e constant-charge simulation. This is too high compared to the "0.01 e–equivalent" potential of 1.6 V determined in Figure 5.3. It seems, therefore, that while the local electrostatic potential method does not, unlike the Poisson potential method, yield physically unrealistic values such as near-zero or negative potential drops across the cell, the potential drops it computes are still not accurate enough to be useful for significantly decreasing the time needed to converge the electrode charge in a constant-potential simulation. Our initial motivation for calculating the Poisson potential or averaged local potential from a constant-charge simulation was to reduce the time needed for charges on the electrodes to converge in a constant-potential simulation. However, we found that these proxy potentials were not representative of the applied potential required to obtain the same amount of stored charge, and the simulation times required for charge convergence following the constant-charge equilibration were still more than a few nanoseconds. Moving forward, we thus opted to compute constant-potential properties such as capacitance and electrolyte configuration by skipping the constant-charge equilibration step and directly applying a potential difference of 1 V to the EDLC.
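A minimal sketch of the averaged-local-potential estimate just described, assuming hypothetical arrays `phi_cathode` and `phi_anode` holding the trajectory-averaged electrostatic potential at each electrode atom (which the adapted constant-potential code computes during the run):

```python
import numpy as np

def local_potential_drop(phi_cathode: np.ndarray, phi_anode: np.ndarray) -> float:
    """Potential drop across the cell estimated from per-atom local
    electrostatic potentials, averaged over each electrode."""
    return float(np.mean(phi_cathode) - np.mean(phi_anode))
```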

The results of these simulations are detailed in the remainder of this article. In the constant-potential production runs, we excluded materials for which the EDLC simulation cell had more than 12 000 atoms; these larger cells could not be studied due to computational limitations affecting the memory-intensive constant-potential method.

This suggests that while particular geometric descriptors might be a useful indicator of capacitance within particular families of materials, a clear relationship between capacitance and, for example, pore size is not the rule, but rather the exception for materials which are otherwise geometrically similar. ZTCs, due to their well-defined templated structures, exhibit a diversity of topologies, pore geometries, and local curvatures, which are not well captured by traditional geometric descriptors but are known to influence charge storage. Thus, the insights we can glean from local interfacial properties in ZTCs might be better translated to microporous carbon materials in general. The bottom row of Figure 5.6 plots capacitance versus quantities related to the electrolyte–electrode interfacial configuration, which are computed for an ion in relation to the electrode atoms within its coordination shell. The charge separation ⟨d_sep⟩ is the average distance between the counter-ion and the carbons within its coordination shell. The degree of confinement (DoC) is defined as the fraction of the maximum solid angle around a counter-ion which is occupied by carbon atoms within the coordination shell cutoff. And finally, the charge compensation per carbon (CCpC), a quantity introduced in this work, is defined as the magnitude of the average charge per electrode atom in the coordination shell. A high CCpC indicates strong and localized charges in the electrode, as opposed to a weak or diffuse charge response. For all quantities, the angle brackets ⟨·⟩ denote averaging over all counter-ions in an electrode. Of particular interest with regard to classical theories of capacitance, a positive correlation can be observed between the capacitance and A/⟨d_sep⟩, reminiscent of Equation 5.2. This suggests that we can view capacitance in the ZTCs as arising from an "ideal" contribution from a reference electrode with the same A/⟨d_sep⟩, and a "non-ideal" contribution, arising from the microporosity, that is responsible for the deviations from classical double-layer theory. One measure of how micropores influence charge storage is the DoC. Here, we note that Figure 5.6 plots the average degree of confinement, ⟨DoC⟩, which obscures differences in the range and distribution of confinement values within a material. We do not observe a strong correlation with capacitance when ⟨DoC⟩ is below 0.25, and when ⟨DoC⟩ is above 0.25 the capacitance seems to be slightly negatively correlated with confinement. This finding adds nuance to the conclusions from previous studies that more confinement is generally a positive influence on charge storage efficiency. We discuss confinement effects further in a later section, where we examine charge storage mechanisms in individual pores.

Finally, the local descriptor which appears to have the best correlation with capacitance is ⟨CCpC⟩, for which we observe a positive and nearly linear relationship with even less scatter than for A/⟨d_sep⟩. Capacitance and ⟨CCpC⟩ both aggregate information about the charge stored by the electrode atoms; however, their strong correlation is not trivial, because only about 30–45% of the electrode atoms are within the coordination shell of a counter-ion at a given time step. These coordination shell carbons carry a slightly larger-than-proportional share of charge, between 35 and 50% of the net charge in the electrode. Perhaps surprisingly, the capacitance does not correlate with the total charge compensation within the coordination shell. The observation that per-carbon charge compensation correlates so strongly with the capacitance indicates that localized charge distributions within the electrode store charge more efficiently than disperse charge distributions, as they use less electrode "real estate" to counterbalance an ionic charge. One complication with comparing materials using local properties is that they are computed with a definition of the coordination shell that uses a cut-off radius, r_cut, around the ion. Following the literature, r_cut was chosen as the first minimum in the ion–carbon RDF. However, we found that the first minima were not at the same location in all of our materials, and some materials did not have a clear "minimum" at all. Therefore, we opted to use the same r_cut of 6.3 Å for all materials, as this was the location of most of the RDF first minima and was also consistent with the literature. Further work is needed to determine how to better define a coordination shell and compute local interfacial properties. However, since we were able to observe quite a strong correlation of capacitance with ⟨CCpC⟩ using the existing coordination shell definition, we leave this complication for a future study. Having investigated geometric descriptors and local interfacial properties of EDLCs, averaged over the entire electrode, we find that almost all of them, other than ⟨CCpC⟩, lack a clear correlation with capacitance or, in the case of A/⟨d_sep⟩, are correlated but exhibit significant scatter. In the following sections we turn our attention to the relationship between pore geometry, local electrolyte properties, and charge storage within individual pores of selected materials. We then move toward a more general framework for rationalizing differences in capacitance among ZTC materials. Due to the structural diversity of ZTC frameworks, we believe insights drawn from ZTCs are also relevant as general design rules for porous carbon EDLC electrodes. We begin our examination of individual materials by considering BEA and BEA_beta, which are templated on different polymorphs of the same zeolite, as shown in Figure 5.7a. Naturally occurring zeolite beta consists of a mixture of polymorphs A and B, both of which contain layers of the same tertiary building unit rotated ±90° with respect to each other. In polymorph A, the layers are stacked in a chiral fashion, while in polymorph B, the rotation of the layers alternates. As a result, the pore size distributions of BEA and BEA_beta differ, with slightly larger pore sizes for BEA_beta, as shown in Figure 5.7b. The capacitances of these ZTCs differ widely, with 34.0 F g⁻¹ gravimetric and 2.33 µF cm⁻² areal capacitances computed for BEA.
The ions within the pores also have different degrees of confinement, possibly arising from the slight differences in the most probable pore sizes. As seen in Figure 5.7c, the anions in the anode of BEA_beta have a single peak in their DoC histogram around 0.33, while the anions in BEA have on average higher DoCs, with one peak at 0.35 and another at 0.42. We might suppose from this that BEA should have the higher charge storage efficiency, since Merlet et al. showed that highly confined ions are able to store more charge in supercapacitors; however, in this case the opposite is true: ⟨CCpC⟩ is higher in BEA_beta than in BEA across all DoC values. In the cathode, as well, the average charge compensation is lower for BEA than for BEA_beta. One noteworthy feature in the charge compensation distribution of the BEA anode is a minimum in ⟨CCpC⟩ at a DoC of 0.43, the location of the higher peak in the DoC histogram.
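To make the coordination-shell quantities concrete, the sketch below computes the charge compensation per carbon for each counter-ion directly from its definition above: the magnitude of the average charge on electrode carbons within the 6.3 Å cutoff. Array names are hypothetical, and periodic boundary conditions, which a production analysis would need, are omitted.

```python
import numpy as np

R_CUT = 6.3  # Å, coordination-shell cutoff used in the text

def ccpc_per_ion(ion_xyz: np.ndarray, carbon_xyz: np.ndarray,
                 carbon_q: np.ndarray) -> np.ndarray:
    """CCpC for each counter-ion: |mean charge| of electrode carbons within
    R_CUT of the ion (non-periodic illustration)."""
    values = []
    for ion in ion_xyz:
        dists = np.linalg.norm(carbon_xyz - ion, axis=1)
        shell = dists < R_CUT
        if shell.any():
            values.append(abs(carbon_q[shell].mean()))
    return np.array(values)

# <CCpC> for an electrode is then the mean over all counter-ions:
# ccpc_mean = ccpc_per_ion(ion_xyz, carbon_xyz, carbon_q).mean()
```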

Future work must also continue focusing on improving functioning and symptom reduction via more comprehensive and multi-modal wrap-around services, including empirically supported treatments for schizophrenia such as psychosocial therapy and mindfulness interventions. Given observed progressive declines in global cognitive function in patients with schizophrenia over time, increased participation in cognitive remediation training programs and/or cognitive control programs may be additionally useful. Lastly, researchers and clinicians alike should aim to reduce the gap between their respective fields in order to facilitate widespread utility of CHR classification and intervention. This likely begins with addressing classification discrepancies and refining clinical/research tools as needed; specifically, whether it is more efficacious to define psychosis from a dichotomous or continuous perspective. The adoption of low-cost screening methods may also prove fruitful here. In conclusion, many important findings in CHR research have emerged over the past year, particularly in the domain of clinical functioning. This field continues to progress in its attempts to clarify both clinical and biological markers of psychosis risk, and has begun to offer important insight into interventions for reducing the likelihood of psychosis emergence. Although more work is necessary to elucidate and expand the current literature, we have started gaining traction on utilizing research findings to reach a point of meaningful intervention and prevention of psychosis.

Cigarette smoking causes and exacerbates chronic obstructive pulmonary disease and asthma, and is associated with wheezing and cough in populations without a respiratory diagnosis. Quitting cigarettes improves respiratory symptoms and limits lung function deterioration. While the relationship between cigarette smoking and respiratory symptoms is well established, the relationship between use of tobacco products other than cigarettes and respiratory symptoms in adults is less clear. Changes in the tobacco market, in part, reflect efforts to market products that may cause less harm than cigarettes. Electronic nicotine delivery devices may represent such a product. With respect to respiratory symptoms, however, findings have been mixed.

Numerous animal and in vitro studies raise theoretical concerns about e-cigarette use and lung disease. Short-term human experimental studies have linked adult e-cigarette use with wheezing, acute alterations in lung function, and lower forced expiratory flow. One longer-term, 12-week prospective study of cigarette smokers switching to e-cigarettes found no effects on lung function, and two 1-year randomized controlled clinical trials found reduced cough and improved lung function in persons who used e-cigarettes to reduce or quit cigarettes. Cross-sectional observational studies using Waves 2 and 3 data from the Population Assessment of Tobacco and Health (PATH) Study have found an association between e-cigarette use and respiratory symptoms. One longitudinal W3-W4 PATH Study analysis found no relation between exclusive e-cigarette use and incident respiratory symptoms but suggested that dual users of cigarettes and e-cigarettes had significantly higher risk for symptom onset compared to exclusive cigarette users. Finally, one prospective study of young adults found an association between cannabis vaping and respiratory symptoms. There are many design issues that make these studies hard to compare. The clinical importance of the respiratory outcome is unclear in most cases, because the multiple wheezing questions are analyzed in isolation from each other, or an endorsement of only one item is considered symptomatic. Many of the studies included adults with COPD, a diagnosis strongly linked to a history of cigarette smoking, and many people with COPD have chronic severe wheezing and dyspnea. Another concern is residual confounding: most of the studies showing an association between e-cigarette use and respiratory symptoms failed to adjust for cigarette smoking history and concurrent marijuana use, both of which are associated with respiratory problems and with concurrent e-cigarette use. Finally, few studies addressed alternative tobacco product categories besides e-cigarettes. To better understand these divergent findings on how tobacco product use relates to respiratory health, we analyzed W2 and W3 data from the PATH Study. We developed a dependent variable that incorporated all available questions on wheezing and nighttime cough and determined cut-off values associated with functional outcomes. We focused on both cross-sectional and longitudinal associations between functionally-important respiratory symptoms and ten mutually exclusive tobacco product use categories, adjusting for past cigarette smoking history and concurrent marijuana use. We also examined results for two different cut-off values of the respiratory symptom index to test for sensitivity to symptom severity. Covariates were derived from W1 and W2, and included variables associated with both tobacco exposure and functionally-important respiratory symptoms.

Low socioeconomic status is associated with tobacco use and poorer lung function. Sociodemographic variables included age, sex, race/ethnicity, education, income, and urbanicity. Medical conditions that could result from tobacco use and also cause respiratory symptoms included asthma, congestive heart failure, heart attack, diabetes, cancer, being overweight, and use of anti-hypertensives known to cause coughing or wheezing. Smoke-related exposures included pack-years of cigarette smoking, second-hand smoke exposure, and marijuana use.

Calculating pack years of smoking

We were particularly concerned with adjusting results carefully for each individual's cigarette smoking history, an important predictor of respiratory outcomes. We derived lifetime pack years to account for cigarette smoking history in this analysis. Lifetime pack years is a clinical metric calculated by multiplying the number of packs of cigarettes someone smokes per day by the number of years they have smoked cigarettes; a schematic implementation is sketched at the end of this passage. The following text annotates the algorithm used to calculate Wave 1 lifetime pack years. Data from Wave 1 lifetime pack years were used in conjunction with variables describing subsequent cigarette use to determine lifetime pack years at W2 and beyond. Never smokers were assigned a pack years value of zero. All questions used in the algorithm and their response categories are listed in Supplemental Table 3. Because of routing instructions in the PATH Study interview, only those respondents who said that they had smoked cigarettes "fairly regularly" were asked how long they have smoked or did smoke. For any respondent at Wave 1 who currently smokes regularly or formerly smoked fairly regularly, lifetime pack years was calculated by multiplying the number of cigarette packs smoked per day by the number of years they have smoked fairly regularly. Two different formulas were used for this calculation, depending on answers to the questions for variables R01_AC9004 and R01_AC9009.

At W2, the prevalence of functionally-important respiratory symptoms was 7.2%. Table 1 shows that respiratory symptoms were more common in the four categories of tobacco use that included cigarettes, compared to never tobacco use, and among those who used marijuana. Functionally-important respiratory symptoms were much more common among those with asthma, and also more common among those with comorbid conditions, obesity, and those using medications known to cause coughing or wheezing. Figure 1 illustrates the unadjusted linear relationship between frequency of cigarette use and the proportion of persons with functionally-important respiratory symptoms for the four use categories featuring cigarettes.
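As promised above, here is a schematic version of the pack-years calculation. Field names and the 20-cigarettes-per-pack assumption are illustrative; the actual PATH algorithm branches on interview routing (variables R01_AC9004 and R01_AC9009) and on current versus former smoking in ways not reproduced here.

```python
CIGS_PER_PACK = 20  # standard US pack size; assumption for this sketch

def lifetime_pack_years(cigs_per_day: float, years_smoked: float,
                        ever_smoked_regularly: bool) -> float:
    """Pack years = (packs smoked per day) x (years smoked fairly regularly).
    Never smokers are assigned zero, as in the text."""
    if not ever_smoked_regularly:
        return 0.0
    packs_per_day = cigs_per_day / CIGS_PER_PACK
    return packs_per_day * years_smoked

# Example: a pack a day for 13.4 years -> 13.4 pack years, the same scale as
# the weighted group means reported later in this analysis.
```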

The shapes of the dose-response lowess lines were almost identical, and the 95th percentile for cigarette use intensity was essentially the same for all four groups, regardless of which other tobacco products were added to cigarettes, emphasizing the importance of cigarettes in these four most prevalent categories of tobacco use. In the full, adjusted, multi-variable cross-sectional model, all four tobacco use categories that featured cigarette smoking were associated with a doubling of the risk of functionally-important respiratory symptoms vs. never tobacco users, and risk for the multiple-use categories was not significantly different from that for exclusive cigarette use. As illustrated in Figure 2, we observed a significant positive dose-response relationship for current use of cigarettes. Compared to never users, the risk of functionally-important respiratory symptoms was not significantly different for exclusive users of e-cigarettes, cigars, hookah, and smokeless tobacco; moreover, post hoc testing indicated that the risk ratios for each of these categories were significantly lower compared to exclusive cigarette use. None of these cross-sectional results changed when the analysis was repeated at a respiratory index cut-off level of ≥2.

Testing sensitivity to key confounders of the e-cigarette–respiratory symptom association

Cigarette smoking pack-years, second-hand smoke exposure, and marijuana use were also associated with functionally-important respiratory symptoms. Table 2 highlights the importance of cigarette smoking pack-years and past-month marijuana use as confounders of the association between tobacco product use and respiratory symptoms. Cigarette pack-years was a particularly strong confounder; adding this variable alone to the cross-sectional multi-variable model attenuated association estimates for cigarettes and cigarettes+e-cigarettes by 30% and for exclusive e-cigarettes by 25%. That was partly because all three groups had a similarly long cigarette smoking history: a weighted mean of 13.4 cigarette pack-years for exclusive cigarette smokers, 12.9 for the dual users, and 10.8 for exclusive e-cigarette users. Similarly, 19.2% of exclusive e-cigarette users also currently used marijuana; adding P30D marijuana use to the multi-variable model attenuated the association for e-cigarettes by 9%. Adding all three confounders together attenuated the e-cigarette–respiratory symptom association RR from 1.53 to 1.05. The categorical analysis did not address whether functionally-important respiratory symptoms increased with increasing frequency of use. Figure 2 explored this for cigarettes and e-cigarettes, adjusting for cigarette smoking history. For cigarettes, there was a significant linear increase in the percent with functionally-important respiratory symptoms with higher intensity of use; the prevalence of respiratory symptoms was less than 5% for never users and over 30% for those smoking a pack a day or more. There was also an increase in respiratory symptoms with higher intensity of e-cigarette use, but the trend did not reach statistical significance. There were no statistically significant associations between exclusive use of cigars, smokeless tobacco, or hookah and worsening of respiratory symptoms compared to never users.
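For readers wishing to reproduce this kind of adjusted analysis, below is a hypothetical sketch of how such a risk-ratio model might be specified, using modified Poisson regression with robust standard errors, a standard approach for estimating risk ratios for a binary outcome. The column names are invented, the covariate list is abbreviated, and the PATH Study's survey weights are omitted; this is not the paper's exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_adjusted_rr(df: pd.DataFrame) -> pd.Series:
    """Modified Poisson regression for a binary respiratory-symptom outcome;
    exponentiated coefficients are interpretable as adjusted risk ratios."""
    formula = (
        "resp_symptoms ~ C(tobacco_cat, Treatment(reference='never'))"
        " + pack_years + p30d_marijuana + age + C(sex) + C(race_eth)"
    )
    fit = smf.glm(formula, data=df,
                  family=sm.families.Poisson()).fit(cov_type="HC1")
    return np.exp(fit.params)  # risk ratios vs. the never-user reference
```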

Post hoc testing indicated that risk ratios were significantly smaller than for exclusive use of cigarettes, regardless of the cutoff level for the respiratory symptom outcome. In contrast, findings for exclusive e-cigarette use were sensitive to symptom severity, showing a significant association with worsening symptoms at a threshold of ≥2, but not at a symptom threshold of ≥3. This study underscores the adverse consequences of continued cigarette smoking on functionally-important respiratory symptoms among people without COPD or other non-asthma respiratory disease. Consistent with other studies, a longer history of cigarette smoking predicted worsening respiratory symptoms and decreased chances of improvement, independent of P30D cigarette smoking, underlining the importance of cigarette smoke exposure in the development or worsening of respiratory symptoms. The consequences of cigarette use were the same regardless of which additional tobacco products were used. As shown previously, dual users of cigarettes and e-cigarettes smoked cigarettes as frequently as exclusive cigarette smokers, their respiratory response to cigarette smoking intensity was essentially the same as that of exclusive cigarette users, and they had indistinguishable risk for symptom worsening. We found no evidence to support the idea that dual use of cigarettes and e-cigarettes carries higher risk for respiratory symptom worsening compared to exclusive cigarette use for the symptom outcomes we examined. This contrasts with the increased risk of dual use reported in the analyses of PATH Study data by Reddy et al., an analysis that involved a different period and adjusted only for demographics; we doubt the finding reported by Reddy would have remained statistically significant after adjustment for the multiple confounders included in the present analysis. In contrast, respiratory symptom risk for exclusive users of other tobacco products was significantly lower than for cigarettes, and was largely not significantly different from never or former tobacco users. The finding for e-cigarettes contradicts two cross-sectional studies of tobacco use and respiratory symptoms, one using PATH Study W2 data and one using W3 data, both concluding that there was an association between e-cigarette use and wheezing. These studies examined the association with each item on the respiratory index, and neither adjusted for cigarette smoking history or marijuana use. Based on the present study findings, namely the lack of a crude dose-response for e-cigarette frequency illustrated in Figure 2 and the confounding analysis in Table 2, we conclude that the reported associations in these papers were likely spurious, primarily because of the failure to adjust for cigarette smoking history. Our supplemental materials include a method for determining cigarette pack-years from PATH Study data to support the inclusion of this important confounder by other users of these data. The longitudinal results seem contradictory if the reference of focus is never users: e-cigarette users are significantly more likely to have symptoms worsen at one cut-off level and significantly more likely to have symptoms improve at another, an example of how results for e-cigarette users may be sensitive to how health outcomes are determined. But another viewpoint is that potentially reduced-harm tobacco products are judged also by how the health risks of the product compare to the health risks for cigarette smokers.