Mechanical design and compliance have also been used to reduce the effects of variability and uncertainty.

The large majority of robotic sensing applications involve proximal remote sensing, i.e., non-contact measurements – from distances that range from millimeters to a few tens of meters away from the target – of electromagnetic energy reflected, transmitted, or emitted from plant or soil material; sonic energy reflected from plants; or the chemical composition of volatile molecules in gases emitted from plants. Proximal remote sensing can be performed from unmanned ground vehicles or low-altitude flying unmanned aerial vehicles; sensor networks can also be used. Current technology offers a plethora of sensors and methods that can be used to assess crop and environmental biophysical and biochemical properties at increasing spatial and temporal resolutions. Imaging sensors that cover the visible, near-infrared, and shortwave infrared spectral regions are very common. Comprehensive reviews exist of non-proximal and proximal electromagnetic remote sensing for precision agriculture, of proximal remote sensing technologies for crop production, and of plant disease, weed, and pest/invertebrate sensing. One type of sensing involves acquiring an image of the crop, removing background and non-crop pixels, and estimating the per-pixel biophysical variables of interest, or performing species classification for weeding applications. Estimation is commonly done through various types of regression. For example, during a training phase, images of leaf samples from differently irrigated plants would be recorded, and appropriate spectral features or indices would be regressed against the known leaf water contents. The trained model would be evaluated and later used to estimate leaf water content from spectral images of the same crop.
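To make the calibration workflow above concrete, here is a minimal sketch. The band values, the normalized-difference index, and the linear model are illustrative assumptions on synthetic data, not the actual features or measurements used in any cited study.

```python
# Sketch: calibrating a regression model that maps a spectral index to
# leaf water content. All data here are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: per-leaf reflectance at two bands
# (an NIR band and a water-sensitive SWIR band).
n_leaves = 100
true_water = rng.uniform(0.4, 0.9, n_leaves)        # g water / g fresh mass
nir = 0.5 + 0.05 * rng.standard_normal(n_leaves)
swir = nir * (1.0 - 0.6 * true_water) + 0.01 * rng.standard_normal(n_leaves)

index = (nir - swir) / (nir + swir)                 # normalized-difference index

# Training phase: least-squares fit, water_content ≈ a * index + b
a, b = np.polyfit(index, true_water, deg=1)

# Evaluation: apply the trained model back to the samples
predicted = a * index + b
rmse = np.sqrt(np.mean((predicted - true_water) ** 2))
print(f"slope={a:.2f}, intercept={b:.2f}, RMSE={rmse:.3f}")
```

In a real deployment the evaluation would of course use held-out leaves, and the index and bands would be chosen for the crop and sensor at hand.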

Pixel-level plant species classification is done by extracting spectral features or appropriate spectral indices and training classifiers. In other cases, estimation of some properties – in particular those related to shape – is possible directly from images at appropriate spectra, using established image processing and computer vision techniques, or from 3D point clouds acquired by laser scanners or 3D cameras. Examples of such properties include the number of fruits in parts of a tree canopy, tree traits related to trunk and branch geometries and structure, phenotyping, shape-based weed detection and classification, and plant disease symptom identification from leaf and stem images in the visible range. Crop sensing is essential for plant phenotyping during breeding and for precision farming applications in crop production. Next, the main challenges that are common to crop sensing tasks in different applications are presented, and potential contributions of robotic technologies are discussed.

A major challenge is to estimate crop and environment properties – including plant detection and species classification – with accuracy and precision that are adequate for confident crop management actions. Wide variations in environmental conditions affect the quality of measurements taken in the field. For example, leaf spectral reflectance is affected by ambient light and the relative angle of measurement. Additionally, the biological variability of plant responses to the environment can result in the same cause producing a wide range of measured responses on different plants. This makes it difficult to estimate crop and biotic-environment properties consistently and reliably from sensor data. The responses are also often nonlinear and may change with time/plant growth stage. Finally, multiple causes/stresses can contribute toward a certain response, making it impossible for an 'inverse' model to map sensor data to a single stress source.
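The pixel-level species classification described at the start of this section can likewise be sketched with a minimal classifier. The two-band features, class statistics, and nearest-centroid rule below are illustrative assumptions on synthetic pixels; fielded systems use richer features and stronger classifiers precisely because of the variability discussed above.

```python
# Sketch: pixel-level crop-vs-weed classification from spectral features,
# using a nearest-centroid rule on synthetic (red, NIR) reflectance pairs.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training pixels: crops are darker in red, brighter in NIR.
crop = rng.normal(loc=[0.10, 0.60], scale=0.03, size=(200, 2))
weed = rng.normal(loc=[0.15, 0.45], scale=0.03, size=(200, 2))

centroids = np.stack([crop.mean(axis=0), weed.mean(axis=0)])  # class means

def classify(pixels):
    """Label each pixel 0 (crop) or 1 (weed) by the nearest class centroid."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Evaluate on held-out synthetic pixels
test_px = np.vstack([rng.normal([0.10, 0.60], 0.03, (100, 2)),
                     rng.normal([0.15, 0.45], 0.03, (100, 2))])
labels = np.concatenate([np.zeros(100, int), np.ones(100, int)])
acc = (classify(test_px) == labels).mean()
print(f"accuracy = {acc:.2f}")
```

The clean separation here is exactly what field conditions destroy: varying illumination and biological variability widen and shift these class distributions, which is why large, diverse training sets matter.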
Agricultural robots offer the possibility of automated data collection with a suite of complementary sensing modalities, concurrently, from large numbers of plants, at many different locations, under widely ranging environmental conditions.

Large amounts of such data can enhance our ability to calibrate regression models or train classification algorithms, in particular deep learning networks, which are increasingly being used in the agricultural domain and require large training data sets. Examples of this capability are the use of deep networks for flower and fruit detection in tree canopies, and the "See and Spray" system that uses deep learning to identify and kill weeds. Data from robots operated by different growers could also be shared and aggregated, although issues of data ownership and transmission over limited bandwidth need to be resolved. The creation of large, open-access benchmark data sets can accelerate progress in this area. Furthermore, sensors on robots can be calibrated regularly, which is important for high-quality, reliable data. Other ways to reduce uncertainty are for robots to use complementary sensors to measure the same crop property of interest and fuse the measurements, or to measure from different viewpoints. For example, theoretical work shows that if a fruit can be detected in n independent images, the uncertainty in its position in the canopy decreases with n. Multiple sensing modalities can also help disambiguate between alternative interpretations of the data or discover multiple causes for them. New sensor technologies, such as multi-spectral terrestrial laser scanning, which measures target geometry and reflectance simultaneously at several wavelengths, could also be utilized by robots in the future to assess crop health and structure at the same time.

Another major challenge is to sense all plant parts necessary for the application at hand, given limitations in crop visibility.
Complicated plant structures with mutually visually occluding parts make it difficult to acquire enough data to reliably and accurately assess crop properties, recover 3D canopy structure for plant phenotyping, or detect and count flowers and fruits for yield prediction and harvesting, respectively. This is compounded by the desire/need for high-throughput sensing, which restricts the amount of time available to 'scan' plants with sensors moving to multiple viewpoints. Robot teams can be used to distribute the sensing load and provide multiple independent views of the crops. For example, fruit visibility for citrus trees has been reported to lie between 40% and 70%, depending on the tree and viewpoint, but rose to 91% when visible fruit from multiple perspective images were combined.
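Two of the multi-view benefits mentioned above can be checked with quick, self-contained calculations: averaging n independent, unbiased position estimates shrinks the standard error by a factor of √n, and if a fruit were visible from a single viewpoint with probability p, with independent viewpoints it would be visible in at least one of k views with probability 1 − (1 − p)^k. All numbers below are illustrative assumptions, not values from the cited experiments, and real viewpoints are correlated, so this gives upper-bound intuition rather than a model of any study.

```python
# Back-of-envelope checks for the multi-view claims above.
import numpy as np

rng = np.random.default_rng(2)

# (1) Position-uncertainty reduction from n independent detections.
sigma = 2.0        # assumed per-image position noise (cm)
true_pos = 50.0    # assumed true fruit coordinate (cm)
trials = 20000

fused_std = {}
for n in (1, 4, 16):
    # n independent noisy observations per trial, fused by averaging
    estimates = true_pos + sigma * rng.standard_normal((trials, n))
    fused_std[n] = estimates.mean(axis=1).std()
    print(f"n={n:2d}: empirical std {fused_std[n]:.2f} cm, "
          f"theory {sigma / np.sqrt(n):.2f} cm")

# (2) Fruit visibility from k viewpoints, assuming independence.
def combined_visibility(p: float, k: int) -> float:
    """Probability a fruit is visible in at least one of k independent views."""
    return 1.0 - (1.0 - p) ** k

for p in (0.40, 0.70):
    for k in (1, 2, 4):
        print(f"p={p:.2f}, k={k}: visible {combined_visibility(p, k):.0%}")
```

Under the independence idealization, a 70% single-view visibility already reaches 91% with two views, which is in the same ballpark as the multi-perspective figure quoted above.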

A complementary approach is to utilize biology and horticultural practices, such as tree training or leaf thinning, to simplify canopy structures and improve visibility. For example, when V-trellised apple trees were meticulously pruned and thinned to eliminate occlusions of the remaining fruits, 100% visibility was achieved for a total of 193 apples in 54 images; another study reported 78% visibility at the tree bottom and an average of 92%. Another practical challenge relates to the large volume of data generated by sensors, especially high-resolution imaging sensors. Fast and cheap storage of these data onboard their robotic carriers is challenging, as is wireless data transmission when it is required. Application-specific data reduction can help ease this problem. The compute power needed to process the data can also be very significant, especially if real-time, sensor-based operation is desired. It is often possible to collect field data in a first step, process the data off-line to create maps of the properties of interest, and apply appropriate inputs in a second step. However, inaccuracies in vehicle positioning during steps one and two, combined with increased fuel and other operating costs and limited operational time windows, often necessitate an "on-the-go" approach, where the robot measures crop properties and takes appropriate action on-line, in a single step. Examples include variable-rate precision spraying, selective weeding, and fertilizer spreading. Again, teams of robots could be used to implement on-the-go applications, where slower moving speeds are compensated by team size and operation over extended time windows.

Interaction via mass delivery is performed primarily through deposition of chemical sprays and precision application of liquid or solid nutrients. Delivered energy can be radiative or mechanical, through actions such as impacting, shearing, cutting, and pushing/pulling.
In some cases the delivered energy results in the removal of mass. Example applications include mechanical destruction of weeds, tree pruning, cane tying, flower/leaf/fruit removal for thinning or sampling, and fruit and vegetable picking. Some applications involve delivery of both material and energy. Examples include blowing air to remove flowers for thinning, or bugs for pest management; killing weeds with steam, flame, or sand blown in air streams; and robotic pollination, where a soft brush is used to apply pollen to flowers. Physical interaction with the crop environment includes tillage and soil sampling operations, and for some horticultural crops it may include using robotic actuation to carry plant or crop containers, or to manipulate canopy support structures or irrigation infrastructure. In general, applications that require physical contact with, or manipulation of, sensitive plant components and tissue that must not be damaged have not advanced as much as applications that rely on mass or energy delivery without contact. The main reason is that robotic manipulation, already hard in other domains, can be even harder in agricultural applications: it must be performed both fast and carefully, because living tissues are easily damaged. Manipulation for fruit picking has received a lot of attention because of the economic importance of the operation.

Fruits can be picked by cutting their stems with a cutting device; pulling; rotation/twisting; or combined pulling and twisting. Clearly, the more complicated the detachment motion, the more time-consuming it will be; however, in many cases a more complicated motion achieves higher picking efficiency because it reduces fruit damage during detachment. Fruit damage from bruises, scratches, cuts, or punctures results in decreased quality and shelf life. Thus, fruit-harvesting manipulators must avoid excessive forces or pressure, inappropriate stem separation, and accidental contact with other objects.

Contact-based crop manipulation systems typically involve one or more robot arms, each equipped with an end-effector. Fruit harvesting is the biggest application domain, although manipulation systems have also been used for operations such as de-leafing, taking leaf samples, stomping weeds, and measuring stalk strength. Arms are often custom designed and fabricated to match the task; commercial, off-the-shelf robot arms are also used, especially when the emphasis is on prototyping. Various arm types have been used, including Cartesian, SCARA, articulated, cylindrical, spherical, and parallel/delta designs. Most reported applications use open-loop control to bring the end-effector to its target. That is, the position of the target is estimated in the robot frame using sensors, and the actuator/arm moves to that position using position control. Closed-loop visual servoing has also been used to guide a weeding or fruit-picking robot's end-effector. End-effectors for fruit picking have received a lot of attention, and all the main fruit detachment mechanisms have been tried. For example, properly sized vacuum grippers can pick/suck fruits of various sizes without having to center the end-effector exactly in front of the targeted fruit.
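The open-loop versus visual-servoing distinction above can be illustrated in one dimension: open-loop control makes a single move to a noisy position estimate, while servoing repeatedly corrects using the target's measured offset in the camera image. Gains, noise levels, and the iteration count below are illustrative assumptions, not parameters from any cited system.

```python
# Minimal 1-D sketch contrasting open-loop positioning with closed-loop
# visual servoing of an end-effector toward a fruit.
import numpy as np

rng = np.random.default_rng(4)

true_target = 10.0      # fruit position (cm), assumed
estimate_noise = 1.0    # error of the one-shot position estimate (cm), assumed

# Open loop: one move to a noisy estimate; the residual error persists.
open_loop_pos = true_target + estimate_noise * rng.standard_normal()

# Closed loop: proportional servoing on the measured image offset.
pos, gain = 0.0, 0.6
for _ in range(20):
    measured_offset = (true_target - pos) + 0.1 * rng.standard_normal()
    pos += gain * measured_offset   # step toward the target each iteration

print(f"open-loop error : {abs(open_loop_pos - true_target):.2f} cm")
print(f"servoing error  : {abs(pos - true_target):.2f} cm")
```

The servo loop's final error is bounded by the (small) per-frame measurement noise rather than by the one-shot estimate, which is why servoing tolerates cruder initial target localization.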
Also, a large variety of grippers for soft, irregular objects like fruits and vegetables have been developed, using approaches that include air actuation, contact, and rheological change. Once a fruit is picked, it must be transported to a bin. Two main approaches have been developed for fruit conveyance. One is applicable only to suction grippers and spherical fruits, and uses a vacuum tube connected to the end-effector to transport the picked fruit to the bin. In this case there is no delay due to conveyance, as the arm can move to the next fruit without waiting; however, the vacuum tube system must be carefully designed so that fruits do not get bruised during transport. The other approach is to move the grasped fruit to some "home" location, where it is released to a conveyance system or directly into the bin. This increases transport time, which may hurt throughput. Clearly, there are several design and engineering challenges involved in this step.

Combining high throughput with very high efficiency is a major challenge for physical interaction with crops in a selective, targeted manner; examples of such selective interactions are killing weeds and picking fruits or vegetables. For example, reported fruit-picking efficiency in the literature for single-arm robots harvesting apple or citrus trees ranges from 50% to 84%, and pick cycle times range from 3 to 14.3 s. However, one worker on an orchard platform can easily maintain a picking speed of approximately one apple per 1.5 seconds with efficiency greater than 95%. Hence, replacing ten pickers with one machine would require building a robotic harvester that is 10-40 times faster, picks gently enough to harvest 95% of the fruit successfully without damage, and does so at a reasonable cost!
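The throughput gap described above can be put in apples-per-hour terms using the figures quoted in the text: robot pick cycles of 3-14.3 s at 50-84% efficiency, versus a worker at roughly 1.5 s per apple with 95% efficiency. The pairing of the best cycle time with the best efficiency (and worst with worst) is an illustrative assumption.

```python
# Rough throughput arithmetic for the picking-rate figures quoted above.
def fruit_per_hour(cycle_s: float, efficiency: float) -> float:
    """Successfully picked fruit per hour for a given cycle time and efficiency."""
    return 3600.0 / cycle_s * efficiency

worker = fruit_per_hour(1.5, 0.95)        # human on an orchard platform
best_robot = fruit_per_hour(3.0, 0.84)    # fastest reported single-arm robot
worst_robot = fruit_per_hour(14.3, 0.50)  # slowest reported single-arm robot

print(f"worker      : {worker:7.0f} apples/h")
print(f"best robot  : {best_robot:7.0f} apples/h")
print(f"worst robot : {worst_robot:7.0f} apples/h")
```

Even the best reported single arm delivers less than half a worker's effective rate, so matching a crew of ten requires order-of-magnitude gains in speed, efficiency, or arm count.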