
This approach is also particularly relevant when studying the provisions of a single umbrella policy. For example, in studies of recreational cannabis legalization, exposure categories based on a state's overall approach to legalization may be of greater interest than the effects of individual provisions. Similarly, Erickson et al. categorized states into 4 groups on the basis of the stringency of the overall alcohol policy environment and found that these categories were associated with levels of past-month alcohol consumption. Several options are available for defining clusters, including manual selection, hierarchical cluster analysis, latent class analysis, and principal components analysis. Heatmaps like those presented here can help inform the selection of appropriate clusters by offering an intuitive visual reference for the likelihood that sets of policies were adopted together. Evaluating the situations in which each clustering approach is preferable remains a direction for future research; a minimal sketch of one such workflow follows the next paragraph.

With the rapid growth of planetary-scale web services, the past few years have seen the consolidation of data center facilities at a scale never seen before. Companies like Google, Amazon, and Microsoft are building huge data centers comprising many thousands of servers. Economies of scale and advances in virtualization have favored this consolidation, resulting in the emergence of mega data centers containing hundreds of thousands of servers and representing billions of dollars of investment. The emergence of mega data centers has been accompanied by a shift in the basic building block of the data center. The cookie-cutter approach to building data centers in the early half of this decade used racks of 20 to 40 servers with a top-of-rack switch as the basic building block. The consolidation of facilities and the growth in data center size have shifted that building block from the rack to a modular shipping container holding anywhere from 250 to 1,000 servers. These self-contained shipping containers, also known as pods, include not only servers but also networking, power, and cooling equipment.
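Returning to the cluster-selection options above, the sketch below illustrates one possible workflow: hierarchical clustering of a binary state-by-provision matrix of the kind a policy heatmap visualizes. The data, variable names, and the choice of 4 clusters are purely illustrative assumptions, not results from the studies cited.

```python
# Minimal sketch of hierarchical clustering of states by policy adoption.
# The policy matrix is randomly generated for illustration only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: states; columns: binary indicators for individual policy provisions.
rng = np.random.default_rng(0)
policy_matrix = rng.integers(0, 2, size=(50, 12))  # 50 states x 12 provisions

# Ward linkage on the binary indicators; cutting the tree at k = 4 clusters
# yields exposure categories analogous to the 4 stringency groups above.
tree = linkage(policy_matrix, method="ward")
labels = fcluster(tree, t=4, criterion="maxclust")
print(labels)  # cluster assignment for each state
```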

At the scale of a pod, it is possible to build non-blocking switch fabrics that interconnect all of its servers. Interconnecting multiple pods to construct a mega data center, however, requires careful design of the core interconnection network, and providing the required bisection bandwidth between pods is often expensive. The traditional technique for interconnecting pods uses a few core packet switches, such as the Cisco Catalyst 6509 [Cis], with every pod switch connected to the core switches. As data centers grow in size, providing sufficient inter-pod bandwidth has become increasingly challenging.

A key factor driving consolidation is the economies of scale it affords, along with the increased flexibility of placing computation and services anywhere across a large cluster. But this flexibility in placing services or virtual machines must be backed by sufficient bandwidth between the nodes that need it. For example, a large-scale service such as a search engine might run on thousands of servers spread across multiple pods, requiring significant inter-pod bandwidth; this set of nodes might change over time, or the number of nodes serving users might grow with the service's popularity. In general, communication patterns and bandwidth demands between different sets of nodes are expected to change over time. With traditional data center network architectures, the only way to provision sufficient bandwidth between arbitrary, time-varying sets of nodes is to build a completely non-blocking network for the entire data center out of electrical packet switches. One might object that a fully non-blocking network between all nodes is overkill, since only some nodes actually require significant bandwidth at any given time. However, this fully provisioned topology is required to support full bisection bandwidth even between a single pair of pods, unless the set of pods requiring high bisection bandwidth never changes over time.
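To make this concrete, the short sketch below computes the oversubscription ratio of a two-tier pod network and the resulting cap on inter-pod bursts, using the same illustrative numbers as the example developed in the next paragraph.

```python
# Oversubscription of a two-tier pod network (illustrative numbers only).
servers_per_pod = 1000
host_link_gbps = 10        # each server connects to its pod switch at 10 Gb/s
uplinks_per_pod = 500      # 10 Gb/s uplinks from the pod switch to the core

downlink_gbps = servers_per_pod * host_link_gbps  # 10,000 Gb/s toward servers
uplink_gbps = uplinks_per_pod * host_link_gbps    #  5,000 Gb/s toward the core
print(downlink_gbps / uplink_gbps)                # oversubscription ratio: 2.0

# No matter how idle every other pod is, a pod can inject at most
# uplink_gbps into the core -- half of what its servers can generate --
# so full bisection bandwidth between even two pods is unattainable.
```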

For example, consider a traditional network in which each pod contains 1,000 servers connected by 10 Gb/s links and the pods are interconnected through a core layer with an oversubscription ratio of 2. Even if only 2 pods require full bisection bandwidth and every other pod requires only a small amount, this network cannot support the full bisection bandwidth required. Fully provisioned networks are therefore essential even to support localized high-bandwidth bursts, as long as the set of nodes requiring high bandwidth is not fixed.

Two promising technologies for provisioning bandwidth more flexibly in the data center are optical circuit switching and wavelength division multiplexing (WDM). Optical circuit switches are oblivious to the data rate they carry, and a single optical port can carry several parallel 10 Gb/s channels using WDM. When substantial bandwidth is needed from a particular source to a particular destination, optical circuits offer a cost-effective way of provisioning it. Their key limitation is switching time: moving a circuit from one destination to another can take tens of milliseconds, so optical switches are of little use when bandwidth demand fluctuates very rapidly. Optical circuit switching has long been used in the telecom industry to provision long-haul links, where capacity is typically provisioned or changed once every few hours, if not days. In the data center, circuit switching at the level of individual hosts is infeasible, since a host communicates with many other hosts at timescales of milliseconds or seconds. Modular data centers, however, provide a good opportunity to leverage optical switching, because bandwidth demand aggregated at the level of pods is relatively stable. Each pod is still likely to generate some bursty traffic, which is best served by electrical packet switches that can switch bursty traffic even at nanosecond timescales. Optics already account for a relatively large fraction of data center network cost.
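The switching-time limitation can also be quantified. Assuming an illustrative reconfiguration time of 25 ms (a value within the tens-of-milliseconds range above), the sketch below shows what fraction of time a circuit actually carries traffic as a function of how long demand stays stable between reconfigurations.

```python
# Circuit efficiency vs. how long a circuit is held between reconfigurations.
SWITCHING_TIME_S = 0.025  # assumed 25 ms of dead time per reconfiguration

def circuit_efficiency(hold_time_s: float) -> float:
    """Fraction of time the circuit carries traffic rather than reconfiguring."""
    return hold_time_s / (hold_time_s + SWITCHING_TIME_S)

for hold in (0.01, 0.1, 1.0, 10.0):  # seconds of stable demand per circuit
    print(f"hold {hold:6.2f} s -> efficiency {circuit_efficiency(hold):.1%}")

# Millisecond-scale shifts (host-to-host traffic) waste most of the circuit
# on reconfiguration, while demand that is stable for seconds or longer
# (pod-aggregated traffic) keeps the circuit busy almost continuously.
```

This amortization argument is why circuit switching is applied at pod granularity rather than host granularity.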

The inter-pod network typically uses 10 GigE or faster links that span distances of a few tens of meters, which requires optical fiber, since 10 GigE over copper is feasible only over short distances of around 10 m. Optical fiber in turn requires expensive SFP+ transceivers, each costing $200 or more. We propose Helios [FPR+10], a hybrid electrical/optical data center network architecture that unifies the benefits of electrical and optical switching to deliver nearly the performance of a traditional electrically switched network at much lower cost, power consumption, and complexity. Helios interconnects pods with both packet switches and circuit switches, dynamically forwarding traffic over them according to its nature and provisioning circuits between the pods that currently require bandwidth. We have built a fully functional prototype of Helios from commercially available networking equipment, along with software implementing the tasks Helios requires.

The main contribution of this research is the proposed design for combining optical circuit switching and electrical packet switching in a data center to achieve a more efficient network design. We also present a technique for estimating the natural inter-pod bandwidth demand, one that ignores bottlenecks caused by current network conditions. We identify key challenges in building a large-scale deployment like Helios and describe several research opportunities that can advance our ability to build more efficient large-scale data center networks. Our prototype demonstrates the feasibility of the design and indicates the opportunity for large savings in cost and power from a hybrid architecture for interconnecting pods.

Helios uses a combination of electrical packet switches and optical circuit switches at the core layer to interconnect pods; Figure 2.1 illustrates a Helios network. If there are N1 electrical core packet switches, then N1 uplink ports from each pod switch connect to these N1 core packet switches. The remaining N2 uplink ports of each pod switch connect to core circuit switches. The relative fraction of packet switches and circuit switches in the core layer depends on the stability of the communication patterns: if traffic is very stable, most of the core switches can be optical circuit switches. Intuitively, stability allows the circuit-switching overhead to be amortized over a long period of high utilization of a newly established circuit. The servers in each pod connect to the pod switch over copper links, which are feasible at such short distances. Because of the relatively long distances involved, the links between the pod switches and the core layer are optical, and each uplink port on a pod switch contains an optical transceiver. Links from pod switches to core packet switches require an optical transceiver at the core end as well, whereas uplinks to the optical core circuit switches terminate at the switch directly without transceivers, since the circuit switch operates entirely in the optical domain. The uplinks connected to optical circuit switches can also make use of wavelength division multiplexing.
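As a structural illustration (not the prototype's actual software), the sketch below wires up the uplinks of one pod switch under assumed values of N1 and N2, together with the WDM grouping factor w described in the next paragraph.

```python
# Illustrative wiring of one pod switch's uplinks in a Helios-style core.
N1 = 2  # electrical core packet switches (one uplink to each)
N2 = 4  # uplink ports reserved for the optical circuit-switched core
W = 2   # WDM factor: uplinks multiplexed onto one circuit-switch port

def pod_uplinks(pod_id: int) -> list[tuple[int, str]]:
    """Return (uplink port, core attachment point) pairs for one pod switch."""
    links = []
    # The first N1 uplinks go to the packet switches; these paths are always
    # available and absorb bursty traffic.
    for i in range(N1):
        links.append((i, f"packet-switch-{i}"))
    # The remaining N2 uplinks are grouped W at a time into passive WDM
    # superlinks, each terminating on a single circuit-switch port.
    for j in range(N2):
        links.append((N1 + j, f"circuit-port-{pod_id}-{j // W}"))
    return links

print(pod_uplinks(0))
```

With these assumed values, each pod exposes N1 + N2 = 6 uplinks but occupies only N2 / W = 2 circuit-switch ports, reflecting how WDM reduces the number of circuit-switch ports required.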

Suppose we use a WDM factor of w. Then w uplinks are combined through a passive multiplexer/demultiplexer module into a single superlink connected to a single core circuit switch port. The uplink ports sharing a superlink use transceivers of different wavelengths so that the multiplexer/demultiplexer module can work correctly. When a circuit is eventually set up through this core circuit switch, the WDM superlink is connected to another pod, establishing w links from the source pod to the destination pod. Essentially, higher values of w allow more data to pass through a single fiber or optical circuit switch, thereby reducing the number of optical circuit switches required in the topology.

The software for Helios consists of three primary components: the Pod Switch Manager, the Circuit Switch Manager, and the Topology Manager. These components act in a coordinated fashion to provision bandwidth resources where and when they are required; their interactions are illustrated in Figure 2.2. Beyond these three components, the core packet switches are traditional switches that operate in a plug-and-play fashion with simple software similar to learning switches. They require no dynamic or specialized configuration for Helios; they can be preconfigured with the MAC addresses or IP prefixes of the different pods, but this is not required.

TLC was performed using Merck 60 F254 aluminum-backed plates. Flash column chromatography was performed using Silicycle silica gel. Melting points were determined using an automated Buchi B-545 melting point apparatus, which reports a single melting point rather than a range, and are corrected. 1H NMR spectra were obtained on a Bruker Avance spectrometer. 13C NMR spectra were obtained on Bruker Avance NEO and Bruker Avance spectrometers. Chemical shifts are referenced to the residual solvent signal. Infrared spectra were recorded on a Bruker Alpha spectrometer. High-resolution mass spectra were obtained using an Agilent 6545 LC/SFC Hybrid Q-TOF spectrometer. Optical rotations were taken on a Rudolph AutoPol IV polarimeter. Circular dichroism experiments were performed on a Jasco J-815 CD spectrometer.

Using the procedure of Zhang, 60% NaH was added portionwise to a stirred solution of 3-formylindole in tetrahydrofuran cooled in an ice bath, and the reaction was then slowly warmed to room temperature. After stirring at room temperature for 30 min, phenylsulfonyl chloride was added dropwise. The reaction was stirred for 24 h at room temperature. The resulting heterogeneous mixture was concentrated under reduced pressure to a crude solid. The solid was dissolved in a mixture of water and methylene chloride. The aqueous layer was separated and extracted three times with methylene chloride. The combined organic layers were dried with sodium sulfate and concentrated in vacuo. The resulting solid was dissolved in a minimum amount of a hot methylene chloride/hexanes mixture and allowed to cool slowly to room temperature to afford off-white crystals. The spectroscopic data of the product agreed with the reported literature.

Using the procedure of Zhang, 60% NaH was added portionwise to a stirred solution of 3-acetylindole in tetrahydrofuran cooled in an ice bath, and the reaction was then slowly warmed to room temperature. After stirring at room temperature for 30 min, phenylsulfonyl chloride was added dropwise. The reaction was stirred for 72 h at room temperature.