Automatic Targeting of Plant Cells via Cell Segmentation and Robust Scene-Adaptive Tracking

The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters.

Citation: Paranawithana, Ishara et al. "Automatic Targeting of Plant Cells via Cell Segmentation and Robust Scene-Adaptive Tracking." IEEE International Conference on Robotics and Automation (ICRA), May 2019, Montreal, Canada. Institute of Electrical and Electronics Engineers, August 2019. © 2019 IEEE.
As Published: http://dx.doi.org/10.1109/icra.2019.8793944
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Version: Author's final manuscript
Citable link: https://hdl.handle.net/1721.1/129074
Terms of Use: Creative Commons Attribution-NonCommercial-ShareAlike
Detailed Terms: http://creativecommons.org/licenses/by-nc-sa/4.0/

Automatic Targeting of Plant Cells via Cell Segmentation and Robust Scene-Adaptive Tracking

Ishara Paranawithana1, Member, IEEE, Zhong Hoo Chau1, Liangjing Yang2, Member, IEEE, Zhong Chen3, Kamal Youcef-Toumi4, Member, IEEE, U-Xuan Tan1, Member, IEEE

Abstract— Automatic targeting of plant cells to perform tasks such as extraction of chloroplasts is often desired in the study of plant biology. Hence, this paper proposes an improved cell segmentation method combined with a robust tracking algorithm for vision-guided micromanipulation in plant cells. The objective of this work is to develop an automatic plant cell detection and localization technique to complete the automated workflow for plant cell manipulation. The complex structural properties of plant cells make both segmentation of cells and visual tracking of the microneedle immensely challenging, unlike single animal cell applications. Thus, an improved version of watershed segmentation with adaptive thresholding is proposed to detect the plant cells without the need for staining of the cells or additional tedious preparations. To manipulate the needle to reach the identified centroid of a cell, tracking of the needle tip is required. Visual and motion information from two data sources, namely template tracking and the projected manipulator trajectory, are combined using score-based normalized weighted averaging to continuously track the microneedle. The selection of trackers is influenced by their complementary nature, as the former and latter are individually robust against physical and visual uncertainties, respectively. Experimental results validate the effectiveness of the proposed method by detecting plant cell centroids accurately, tracking the microneedle continuously and reaching the plant cell of interest despite the presence of visual disturbances.

*Research supported by SUTD-MIT International Design Centre (IDC).
1 Ishara Paranawithana, Zhong Hoo Chau and U-Xuan Tan are with the Pillar of Engineering Product Development, Singapore University of Technology and Design, Singapore (phone: +65 6303-6600; email: ishara_paranawithana@sutd.edu.sg, zhonghoo_chau@mymail.sutd.edu.sg, uxuan_tan@sutd.edu.sg).
2 Liangjing Yang is with the Zhejiang University/University of Illinois at Urbana-Champaign Institute (ZJU-UIUC Institute), China (email: liangjingyang@intl.zju.edu.cn).
3 Zhong Chen is with the Natural Sciences and Science Education Academic Group, National Institute of Education, Nanyang Technological University, Singapore (email: zhong.chen@nie.edu.sg).
4 Kamal Youcef-Toumi is with the Mechanical Engineering Department, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA (email: youcef@mit.edu).

I. INTRODUCTION

Automation of micromanipulation systems has advanced immensely owing to extensive research in vision-guided micromanipulation. Such automated micromanipulation systems greatly contribute towards ease of operation, repeatability, consistency and improved accuracy in cell biomanipulation applications [1]. Plant cell micromanipulation has been gaining attention due to a wide range of potential applications. Extraction of micro-organelles such as chloroplasts from one plant cell and microinjection into a plant cell of another species to investigate the effects of biomanipulation is one such study. Although existing robotic micromanipulation systems can readily work with single cell micromanipulation, the technology is limited from being deployed for plant cells.
In current practice, most plant cell manipulation procedures rely heavily on manual operation; hence the results are subject to the operator's experience and prone to judgement errors under demanding conditions. There is therefore a need to incorporate automatic vision-guided micromanipulation for plant cell applications to make them self-contained systems.

Fig. 1. Microscopic image of (a) an Elodea plant cell specimen (b) cell specimen and microneedle in a common field-of-view.

Unlike animal cells, multiple layers of cell arrays are usually present in plant specimens, rendering unique structural properties and irregular shapes [2], [3]. Figure 1 illustrates a typical scene of a multi-layered plant cell specimen and the microneedle in a common field-of-view (FOV). Under these visual conditions, both detection/localization of the plant cells and uninterrupted visual tracking within a non-homogeneous scene are extremely challenging. Effective cell segmentation methods for cell detection and localization are essential to automatically identify the targets in the plant cell specimen. However, the general approach usually involves staining of the plant cells as a preprocessing step. The staining agents could potentially damage the fragile cell structures and have undesirable effects on the analysis of plant transformation. Furthermore, the process of staining requires additional knowledge about handling chemicals and tedious preparation that is beyond the typical user's expertise. Apart from the need for an improved plant cell segmentation method, this work is also motivated by the importance of having a robust tracking algorithm to provide uninterrupted and timely feedback for vision-based control in cell micromanipulation. The visual disturbances associated with plant cells include moving micro-organelles, scene cluttering and occlusion of the microneedle tip. Most traditional vision-based approaches are deemed to fail or produce unsatisfactory results under such visual uncertainties.

These technical challenges have left the development of automatic vision-guided micromanipulation for plant cells fairly under-explored despite its many important potential applications [4], [5]. To solve the identified problems and address the technical gaps discussed, this work proposes an efficient and improved plant cell segmentation method that requires neither staining nor a high-end sophisticated microscope to image the specimen. A robust tracking algorithm based on the fusion of visual and motion tracking information is then incorporated into the unified track-servo workflow to guide the microneedle to a desired target. By developing an improved version of the watershed segmentation method with adaptive thresholding, we aim to accurately detect and localize the plant cells of interest. The identified center locations of the plant cells of interest are subsequently treated as the target points. The fusion mechanism combines two complementary sources of tracking information to overcome the visual uncertainties in an irregular and non-homogeneous plant specimen. The normalized weighted averaging-based fusion readily adjusts the weightage inferred from the similarity score of the template match in the presence of visual disturbance, thus maintaining uninterrupted tracking of the needle tip during plant cell manipulation. While our current work focuses on the assisted targeting and unified track-servo components, it also complements our long-term research goal of extending uncalibrated, self-contained vision-guided micromanipulation to onsite plant cell studies.

In Section II, the relevant state-of-the-art development is reviewed to identify the gaps in existing cell micromanipulation systems. A detailed discussion of the proposed method for plant cell segmentation and fusion-based tracking is presented in Section III. Section IV outlines the experimental setup, followed by the results and discussion of the experiments in Section V. Finally, the paper concludes by highlighting the contribution and significance of the proposed work.

II. LITERATURE REVIEW

This section reviews the relevant existing work, including our previous work, in vision-guided micromanipulation under three sub-themes. First, we discuss the current state-of-the-art development in robotic vision-guided micromanipulation to identify the limitations and gaps in existing systems. Second, we survey the prevailing methods used in plant cell segmentation and examine the challenges and problems associated with them. Third, some recent developments in the context of plant cell micromanipulation are reviewed to highlight the need for extending vision-guided micromanipulation to plant cell applications.

The extensive research efforts in the domain of vision-guided manipulation have greatly accelerated the development of automatic cell micromanipulation systems [6], [7], [8]. A typical vision-based control approach involves system calibration [9], [10], [11], [12], [13], [14], where a general relationship is established between the manipulator actuation task space and the microscope imaging plane. However, there exists research work [6], [7] which does not require tedious and time-consuming explicit calibration. This calibration-less feature is especially useful for onsite plant cell studies as it offers great ease in setup and operation to the user. Hence, we developed a low-cost and portable platform [15] exploiting uncalibrated [16], self-initializing [17] vision-guided micromanipulation in our previous work.
While these methods greatly contribute towards fully automatic vision-guided micromanipulation, it is challenging to maintain uninterrupted tracking when the microneedle interacts with the cell specimen. Therefore, we designed a self-reinitializing and recovery method [18] that detects the correct tracking mode to use based on the detected cell position and geometry. Detection and localization of the specimen is done for single cell procedures using the previously developed circle Hough transform method [19]. Although this is an important step towards enhancing the automated micromanipulation workflow, it is not possible to readily generalize the concept to multi-layered plant cells with complex structural properties.

Detection and localization of cells has long been a topic of research interest. Many researchers have attempted different segmentation techniques such as local thresholding [20], region growing [21], context-aware spatial tracking [22], edge-based deformable templates [23] and region-based morphological filtering [24]. However, most conventional methods require staining of the cells as a pre-processing step to improve the contrast of the cell boundaries. Staining-based cell segmentation methods [20], [22] usually offer good results, but could lead to undesirable study outcomes for various reasons. The use of staining agents could cause potential damage to the fragile plant cell structures and may have adverse effects on plant transformation studies. The watershed transform is a well-established method used in plant cell segmentation [25], [26], [27], [28]. Nevertheless, improper selection of parameters leads to the problem of over-segmentation. Essentially, fine-tuning of the parameters by empirical means is required to avoid over-segmentation. This calls for an elegant technique to automatically adjust the threshold values according to the microscopic specimen image under study.

Han et al. carried out some notable research work on plant cell micromanipulation. One such interesting work is the development of a plant cell microinjection technique based on autofocusing of the microneedle tip [29]. This work mainly focuses on aligning the microneedle on the plant cell imaging plane for microinjection by estimating the missing depth information. Zhang et al. developed a modified 2D-to-2D SSD-based feature tracking method for small cell microinjection [30]. This method requires explicit calibration prior to tracking the microneedle. In most cases, microinjection strategies are not executed in a vision feedback-based track-servo mechanism with an uncalibrated micromanipulation setup. In our recent work, we developed a fusion-based tracking algorithm incorporating both vision and motion information [31] to mitigate the effects of visual and physical uncertainties under challenging plant cell manipulation conditions. Despite the interest shown in plant cell manipulation applications, none of these works has addressed the latent need

for automatic plant cell segmentation to assist vision-guided micromanipulation. Therefore, our current work is motivated by the aforementioned innate challenges associated with plant cells and the limitations identified in existing systems.

III. METHODOLOGY

A. Conceptual Overview

Fig. 2. Overview of the operational workflow of automatic vision-guided micromanipulation for plant cells.

The proposed work aims to improve the automated plant cell manipulation workflow by combining our previous work on fusion of vision and motion tracking with additional capabilities for automatic plant cell segmentation. The assisted targeting component, which consists of watershed segmentation along with adaptive thresholding, is performed only once for a single image frame to automatically identify and localize desirable plant cells. To maintain continuous tracking of the microneedle tip despite the complex plant cell scenes, a scene-adaptive tracking algorithm that is robust against both visual and physical uncertainties is incorporated. This is realized through a fusion of vision data inferred from template matching and motion information generated from micromanipulator trajectory data. Figure 2 provides a graphical illustration of the automatic vision-guided micromanipulation workflow for plant cells; the proposed work is demarcated by the solid black line. The watershed transform-based plant cell segmentation and the scene-adaptive fusion-based tracking mechanism are discussed further in the following subsections.

B. Plant Cell Segmentation

The plant cell segmentation process is summarized in Figure 3. We use contrast-based watershed segmentation with an adaptive thresholding method to detect the plant cells and annotate the centroid of each plant cell. Our method requires neither staining of the cells nor additional information from known models. The light-weight processing methods adopted here are especially advantageous for field-deployment applications.

Fig. 3. Step-by-step representation of the proposed plant cell segmentation method.

The grayscale image first goes through a preliminary grayscale rescaling stage to enhance the contrast based on the pixel values. In order to detect the plant cells in a field-of-view only partially filled with the specimen, we introduce a threshold limiter prior to the adaptive thresholding phase. An image with the bright background exposed will always have an additional peak on the right side of its histogram. This essentially increases the threshold value and reduces the visibility of the pixels of lower intensity. Therefore, the threshold limiter is used to limit the impact of the exposed background, thereby improving the sensitivity of the subsequent phases for effective plant cell segmentation. We define our threshold limiter as

$\kappa^{*}_{adj} = \min(\kappa^{*}, \delta); \quad \delta \leq 0.5$    (1)

where $\kappa^{*}$ is the normalized threshold value derived using Otsu's method [32]. δ is limited to 0.5 based on the assumption that an ideal image has a normally distributed histogram, so that half of the pixels have intensities below the threshold value. The adjusted threshold is subsequently used to derive a thresholding bracket, according to

$\gamma_{adj} = \left[\, \eta^{\alpha}\,\kappa^{*}_{adj},\ \kappa^{*}_{adj}/\eta^{\alpha} \,\right]$    (2)

where α and η are the threshold tuning parameter and the effective metric of the normalized threshold value, respectively. $\gamma_{adj}$ is used to further enhance the definition of the plant cell boundaries through the contrast enhancement, binarization and dilation stages, prior to the watershed transformation for cell segmentation.
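To make the adaptive thresholding step concrete, the following Python sketch shows one way Eqs. (1)-(2) could be implemented with OpenCV and NumPy. The function name and the parameter values (adaptive_threshold_bracket, eta, alpha) are illustrative assumptions and not the authors' code.

```python
import cv2
import numpy as np

def adaptive_threshold_bracket(gray, delta=0.5, eta=1.2, alpha=1.0):
    """Illustrative sketch of the threshold limiter (Eq. 1) and the
    thresholding bracket (Eq. 2); eta/alpha defaults are assumptions."""
    # Preliminary contrast rescaling of the 8-bit grayscale image.
    rescaled = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)

    # Otsu's method returns a threshold in [0, 255]; normalize it to [0, 1].
    otsu_thresh, _ = cv2.threshold(rescaled, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kappa = otsu_thresh / 255.0

    # Eq. (1): cap the normalized threshold so a bright, exposed background
    # does not push the threshold too high.
    kappa_adj = min(kappa, delta)

    # Eq. (2): derive a bracket around the adjusted threshold; the bracket is
    # later used for contrast enhancement and binarization.
    lower = (eta ** alpha) * kappa_adj
    upper = kappa_adj / (eta ** alpha)
    gamma_adj = (min(lower, upper), max(lower, upper))

    return rescaled, kappa_adj, gamma_adj
```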
The watershed transform achieves cell segmentation by "flooding" the pixels up to the cell boundaries, which have the highest pixel intensity after an inversion. The area of each segment is calculated and filtered based on a known dimension in the pixel domain, such that

$A^{I}_{cell} = r_{p}\left[\, \beta \cdot A_{cell,min},\ A_{cell,max} \,\right]$    (3)

where $A^{I}_{cell}$, $r_{p}$ and β are the area of the cell in the image domain, the ratio of the total number of pixels to the area of the field-of-view, and the cell size limit parameter, respectively. The locations of the centroids can then be derived from the segments of accepted size.
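A minimal sketch of the watershed and area-filtering stages (Eq. 3) is given below using SciPy and scikit-image. The marker-generation rule, the default area bounds and the helper name segment_cells are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.measure import regionprops

def segment_cells(cell_mask, beta=0.5, area_min_px=1500, area_max_px=25000):
    """Split a binary mask of plant-cell regions into individual cells via
    the watershed transform and keep segments whose pixel area satisfies
    Eq. (3). `cell_mask` is assumed to come from the thresholding,
    binarization and dilation stages; the area bounds are placeholders."""
    # Distance to the nearest cell wall: cell interiors become deep basins
    # of -distance, while the walls act as ridges.
    distance = ndi.distance_transform_edt(cell_mask)

    # One marker per pronounced interior region (a simple, assumed rule).
    markers, _ = ndi.label(distance > 0.5 * distance.max())

    # "Flood" from the markers; segments stop where basins meet (boundaries).
    labels = watershed(-distance, markers, mask=cell_mask)

    # Eq. (3): filter segments by area and collect centroids of the survivors.
    centroids = [r.centroid for r in regionprops(labels)
                 if beta * area_min_px <= r.area <= area_max_px]
    return labels, centroids
```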

C. Fusion-Based Tracking

Vision-guided micromanipulation in 3D space is realized by combining template-based tracking (in the x-y axes) and score-based adaptive depth compensation (in the z-axis), leveraging our previous work on the uncalibrated micromanipulation platform. Interested readers are encouraged to refer to the self-initializing and unified track-servo (DFTS) framework [17] for more details. To integrate motion information into the tracking framework, a homography-based projective transformation is estimated using the manipulator trajectory and the corresponding image points. The estimated homography matrix is then used to map the successive trajectory data to image pixel coordinates. Thereafter, the projected trajectory and the visual tracking data are fused together by means of score-based normalized weighted averaging.

1) Template Matching: After following the detect and focus steps in our Detect-Focus-Track-Servo (DFTS) workflow, a suitable region of interest (ROI) containing the focused microneedle tip is automatically selected as the tracking ROI. The registered ROI is then treated as the base template and compared against patches of neighbouring pixels in the successive image frames. The normalized cross-correlation coefficient w(u, v) at image coordinates (u, v) of a template patch g(p, q) and the image f(p, q), for a P × Q patch and a U × V image, is expressed as

$w(u,v) = \dfrac{\sum_{p=0}^{P}\sum_{q=0}^{Q} G\,F}{\left[\sum_{p=0}^{P}\sum_{q=0}^{Q} G^{2}\ \sum_{p=0}^{P}\sum_{q=0}^{Q} F^{2}\right]^{0.5}}$    (4)

where $G = g(p, q) - \bar{g}$ and $F = f(p + u, q + v) - \bar{f}(u, v)$. Notation $\bar{g}$ and $\bar{f}$ represent the mean intensity values in the template and the overlapping patch, respectively. The template matching is performed over the sequence of images to localize the microneedle tip position and track its motion.
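As an illustration of Eq. (4), the sketch below performs normalized cross-correlation template matching with OpenCV; cv2.matchTemplate with TM_CCOEFF_NORMED computes the same zero-mean normalized correlation. The function name and variables are assumptions for illustration.

```python
import cv2

def track_tip(frame_gray, template_gray):
    """Locate the microneedle-tip template in a new frame and return the
    best-match location together with its similarity score (Eq. 4)."""
    # TM_CCOEFF_NORMED is the zero-mean normalized cross-correlation of Eq. (4).
    response = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, max_loc = cv2.minMaxLoc(response)
    u, v = max_loc  # top-left corner of the best-matching patch
    return (u, v), max_score
```

The returned similarity score is the quantity later used as the confidence measure (and fusion weight w) in Eq. (8).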
2) Homography-Based Trajectory Data Projection: The microscope camera and the micro-stages are rigidly mounted with respect to one another in our experimental setup. Hence, we assume that a single homography can be estimated with a projective transformation [33]. The intention of estimating a homography is to establish a relationship between the two planes using initially known point correspondences. The homography needs to be computed only once, after which the subsequent trajectory data is mapped from the actuation task space to the camera image plane. The mapping of planar trajectory data $^{x}S_{robot}$ from the task plane $P_{x}$ to the corresponding pixel coordinate $^{u}S_{robot}$ in the image plane $P_{u}$ is depicted in Figure 4.

Fig. 4. Homography-based mapping of manipulator trajectory to image pixel coordinates.

The homography matrix is initially set to zeros and iteratively resolved until a satisfactory number of inlier point pairs can be mapped. Four pairs of non-collinear points are randomly selected for a projective transformation. For N randomly selected samples, point pairs are removed if the algebraic distance metric

$\rho = (i - k\,u)^{2} + (j - k\,v)^{2}$    (5)

is greater than the threshold value τ. Taking the control precision of our vision-guided micromanipulation into account, the threshold value τ is assigned as 6 pixels. Notation $\Pi(^{x}S_{robot} : H_{proj})$ denotes the projection of the planar trajectory data $^{x}S_{robot}$ by the estimated 3-by-3 homography matrix $H_{proj}$ [33], [34]. The unnormalized pixel coordinates of the microneedle tip in the imaging domain are represented by the variables i, j and k (so that the normalized image point is (i/k, j/k)), such that

$[\,i \ \ j \ \ k\,]^{T} = H_{proj}\,{}^{x}S_{robot}$.    (6)

The value of N is updated dynamically based on the desired confidence level, as discussed in the original RANSAC algorithm [34], [35]. The refined transformation matrix $H_{proj}$ is iteratively updated until the distance metric M over n points,

$M = \sum_{n} \mathrm{threshold}\left(\mathrm{norm}\left({}^{u}S_{camera},\ \Pi({}^{x}S_{robot} : H_{proj})\right),\ \tau\right)$    (7)

converges, at which point an optimal homography matrix producing the maximum number of inliers for the projective transformation relationship is obtained. Finally, the estimated $H_{proj}$ is used to transform $^{x}S_{robot}$ to $^{u}S_{robot}$, forming the cooperative fusion mechanism discussed in the next subsection.

3) Similarity Score-Based Data Fusion: The proposed fusion mechanism uses the similarity score inferred from template matching as the confidence measure of the visual tracking. The score of the template match is a good indicator of visual tracking failure, as the score naturally falls when the scene is subjected to uncertainties. This is typically the case when the tracking ROI of the microneedle tip is occluded by complex plant cell structures. The weights of the linear combination of the two trackers, namely the projected trajectory data of the manipulator and the template tracking, are adjusted accordingly using the similarity score. The result is therefore an estimate of the microneedle tip position based on the normalized weighted averaging of the visual tracking and the projected manipulator trajectory data. The normalized weighted average of the estimates for a pair of corresponding tracked data points $(^{u}S_{robot}, {}^{u}S_{camera})$ is expressed as

${}^{u}S = (1 - w)\,H_{proj}\,{}^{x}S_{robot} + w\,{}^{u}S_{camera}$    (8)

The final estimate $^{u}S$ is used to provide timely feedback to the motion control loop in our vision-guided micromanipulation workflow. The use of complementary data sources minimizes the adverse effects of different kinds of uncertainties during microneedle tip tracking: the projected trajectory data of the manipulator is robust against visual disturbances, whereas the template-based tracking is less susceptible to physical and mechanical uncertainties.
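A compact sketch of how the trajectory projection (Eqs. 5-7) and the score-based fusion (Eq. 8) could be realized is given below. It uses OpenCV's RANSAC-based findHomography in place of the authors' iterative estimation loop; the 6-pixel reprojection threshold follows the text, while the helper names and the mapping from similarity score to weight are illustrative assumptions.

```python
import cv2
import numpy as np

def estimate_homography(stage_pts, image_pts, tau=6.0):
    """Estimate H_proj mapping planar stage coordinates to image pixels.
    RANSAC discards correspondences whose reprojection error exceeds tau,
    in the spirit of Eqs. (5)-(7); OpenCV handles the iteration."""
    H, inlier_mask = cv2.findHomography(np.asarray(stage_pts, dtype=np.float32),
                                        np.asarray(image_pts, dtype=np.float32),
                                        cv2.RANSAC, tau)
    return H, inlier_mask

def project_stage_point(H, stage_xy):
    """Eq. (6): project a stage-space point and normalize by k."""
    i, j, k = H @ np.array([stage_xy[0], stage_xy[1], 1.0])
    return np.array([i / k, j / k])

def fuse_estimates(H, stage_xy, camera_uv, score):
    """Eq. (8): normalized weighted average of the projected trajectory data
    and the template-tracking result, weighted by the similarity score."""
    w = float(np.clip(score, 0.0, 1.0))  # assumed score-to-weight mapping
    u_s_robot = project_stage_point(H, stage_xy)
    return (1.0 - w) * u_s_robot + w * np.asarray(camera_uv, dtype=float)
```

In such a scheme the weight comes from the template-match score of the current frame, so the projected trajectory dominates whenever the needle tip is occluded and the score drops.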

IV. EXPERIMENTAL SETUP

Figure 5 shows the setup of the low-cost and portable micromanipulation platform [15] used to carry out the experiments. The first micromanipulator arm holds the microneedle to perform vision-guided micromanipulation on the plant cells, while the second arm is installed to focus the plant cell specimen. This portable setup is a realization of our vision of extending vision-guided micromanipulation to plant cell studies without restricting the study to specific laboratory conditions.

Fig. 5. Low-cost and portable experimental setup.

Three-axis actuated micro-stages (8MT173; Standa Ltd., Lithuania) with 1.25 µm/step resolution and a 20 mm working range in each axis are used to execute the manipulation of the microneedle. Actuation of the micro-stages is achieved using a dedicated multi-axis controller (8SMC4; Standa Ltd., Lithuania). The vision-based control is carried out at a speed of 12.5 µm/s. In our experiments, a sample of freshwater weed (Elodea), which has two layers of cell arrays, is used as the plant cell specimen. The width and length of each cell are approximately 20-30 µm and 60-130 µm, respectively. A microneedle (Spiked Injection; Research Instruments Ltd., United Kingdom) with an inner diameter of 5 µm and a tip angle of 35° is used for our experiments. Microscopic images of the plant specimen with a resolution of 640x480 pixels are acquired at 30 frames per second using a portable digital USB microscope (AM4515T8 Dino-Lite Edge series; AnMo Corp., Taiwan). A fixed magnification of 860x is used to image both the plant cells and the microneedle in a common scene while covering a significant portion of the field-of-view with plant cells.

V. RESULTS AND DISCUSSION

The experiments were carried out, firstly, to demonstrate the feasibility of plant cell segmentation under demanding working conditions, including plant cells only partially filling the microscopic image and slightly out-of-focus cell specimens under the microscope. Both situations are practical scenarios observed in plant cell micromanipulation studies. Secondly, the location information acquired from the cell segmentation method is used to guide the microneedle to the cells of interest. The main objective of the experiments is to demonstrate autonomous detection and localization of plant cells and accurate tracking of the microneedle so as to provide timely feedback for automatic vision-guided micromanipulation. This is achieved under challenging experimental conditions, including complex plant cell structures, continuously moving micro-organelles such as chloroplasts, and full occlusion of the microneedle by plant cells. A video demonstration covering the automated operational workflow of vision-guided micromanipulation for plant cells is also presented.

A. Automatic Plant Cell Segmentation

In this section, the results of automatic plant cell detection and localization are presented. The proposed watershed segmentation method with adaptive thresholding shows satisfactory results in general. To assess the detection rate, the process was carried out on a video stream while the plant specimen was stationary. In this trial, the observed detection rate is 89.1% (440 of 494 tracked frames).
Images of plant cells with a partially filled FOV create adverse conditions and make automatic cell detection more challenging. The brighter background skews the normalized intensity values of the pixels associated with the plant cells, so the use of threshold values alone is likely to be ineffective under such conditions. However, our algorithm is improved to tackle this challenging yet highly practical situation.

Fig. 6. Cell detection results on an image partially filled with plant cells: (a) segmented image with the use of the threshold limiter, (b) detection of the plant cell centroids (red dots).

The maximum threshold limiter value, δ, is fine-tuned to 0.475 based on previous experimental results, which offers an improved segmentation outcome compared with the ceiling value of 0.50. For a grayscale image with a field-of-view partially filled with plant cells, the noise interfering with cell detection is significantly larger, reducing the effective area of each segment after the watershed transformation. This increases the number of fine boundary segments, thereby reducing the size of the actual detected cell segments. We therefore used a reduced cell size limit parameter, β = 0.35, to ensure that most of the detectable cells are registered for the subsequent operations.
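For concreteness, these tuned values would plug into the illustrative segmentation sketch from Section III-B as follows; the function names and the intermediate binarization step are the assumptions made in that sketch, not the authors' code.

```python
# gray: grayscale microscope frame (uint8), as in the earlier sketches.
# Reported settings for a partially filled field-of-view: delta = 0.475, beta = 0.35.
rescaled, kappa_adj, gamma_adj = adaptive_threshold_bracket(gray, delta=0.475)
cell_mask = rescaled > int(255 * kappa_adj)   # simple binarization stand-in
labels, centroids = segment_cells(cell_mask, beta=0.35)
```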

Even though the watershed transform has successfully segmented the cells to a highly satisfactory level, as illustrated in Figure 6, our algorithm could not segment out some of the cells, causing fewer cell centroids to be registered. This could be partly due to the shadow of the boundaries of the adjacent plant cell layer casting a vaguely defined "boundary" onto the cell and sub-splitting the plant cells. This causes the effective area of the segmented cells to reduce further, even though a lower cell size limit is imposed. The shadow of the adjacent layer has also affected the detection of the cell centroids, shifting some of the centroids closer to the boundaries of the cells. However, as the interference of the adjacent layer of plant cells is highly random, further investigation into singling out the target layer of plant cells is necessary. Nevertheless, the improved cell detection algorithm shows promise in segmenting the plant cells and locating their centroids even in a field-of-view that is only partially filled with plant cells. Furthermore, the relaxed need for a sharp image allows us to control and navigate the microneedle with a higher level of confidence.

B. Fusion-Based Tracking

Qualitative evaluations are presented in this section based on inspection of the visually tracked microneedle tip. A detailed discussion is provided highlighting the robustness of the fusion-based tracking method, which incorporates both visual and motion information. After completing automatic cell segmentation, the detected cell centroids of interest are fed into the visual control loop so that the microneedle can be manipulated to a desired target point. Out of the many correctly detected cell centroids, three prominent centroids are selected to execute vision-based control using the proposed fusion-based tracking method. The tracking performance of the tool tip over the highly irregular and non-homogeneous plant cell specimen is illustrated in Figure 7. At the start, the microneedle tip is clearly visible and free of any occlusion. However, as the needle approaches the detected centroids, namely data points 1, 2 and 3, the tip of the microneedle undergoes severe occlusion. The tracked path of the needle shows the exceptional capabilities of fusion-based tracking under adverse visual scenes. The visual tracking error, as illustrated in Figure 7, remains within ±3 pixels at all times. The error is comparable to the micromanipulator's inherent uncertainty of 1.25 µm (3.57 pixels, i.e., roughly 0.35 µm per pixel).

Fig. 7. Composite image of the fusion-based tracking; red crosses and blue squares denote the tracked tip positions and detected cell centroids, respectively.

An image sequence of the tracking workflow, recorded separately but under conditions similar to the previous experiment, is shown in Figure 8. It can be observed that the proposed scene-adaptive tracking mechanism is able to track the microneedle tip position accurately in all frames of the sequence. It is also noteworthy that the needle tip is fully occluded and barely visible when it advances towards the target centroids, as seen in Figure 8(c)-(f). Most pure vision-based approaches, including template tracking, tend to make inaccurate estimates or fail completely when distinct visual features are not present. However, our proposed method overcomes such limitations of conventional pure vision-based methods by leveraging manipulator trajectory information.
Apart from the visual disturbances, the intrinsic uncertainty of the micromanipulator and physical uncertainties such as microneedle deflection and vibrations could also cause inaccuracies in tracking. The observations from the results suggest that fusion-based tracking makes accurate estimates of the microneedle tip position by combining two sources of trackers that are robust against visual and physical uncertainties, respectively.

Fig. 8. Image sequence of vision-guided micromanipulation; (c)-(f) the fusion mechanism consistently tracks the needle tip despite the full occlusion.

VI. CONCLUSION

While leveraging the previously developed low-cost, portable micromanipulation platform with uncalibrated and self-initializing capabilities, this work mainly focuses on the requirement of developing an automatic cell segmentation method that complements our proposed scene-adaptive fusion-based tracking. The heart of our solution is a contribution towards completing the automatic vision-guided micromanipulation workflow for plant cell studies under visually challenging working conditions. Cell segmentation is carried out under adverse conditions involving multiple layers of cell arrays and a partially filled field-of-view, while the vision-guided manipulation of the microneedle is executed even when the needle tip is fully occluded by the plant cells. The results demonstrate the potential of using automatic plant cell detection to identify cell centroids and subsequently manipulate the microneedle to a target of choice.

REFERENCES

[1] J. P. Desai, A. Pillarisetti, and A. D. Brooks, "Engineering approaches to biomanipulation," Annual Review of Biomedical Engineering, vol. 9, pp. 35-53, 2007.
[2] P. Barbier de Reuille, I. Bohn-Courseau, C. Godin, and J. Traas, "A protocol to analyse cellular dynamics during plant development," The Plant Journal, vol. 44, no. 6, pp. 1045-1053, 2005.
[3] T. Kunkel, "Microinjection into plant cells of etiolated seedlings." Available: http://www.labonline.com.au/content/lifescientist/article/microinjection-into-plant-cells-of-etiolated-seedlings981244058
[4] G. Neuhaus and G. Spangenberg, "Plant transformation by microinjection techniques," Physiologia Plantarum, vol. 79, no. 1, pp. 213-217, 1990.
[5] M. Y. A. Masani, G. A. Noll, G. K. A. Parveez, R. Sambanthamurthi, and D. Prufer, "Efficient transformation of oil palm protoplasts by PEG-mediated transfection and DNA microinjection," PLoS One, vol. 9, no. 5, 2014.
[6] Y. Sun and B. J. Nelson, "Biological cell injection using an autonomous microrobotic system," The International Journal of Robotics Research, vol. 21, no. 10-11, pp. 861-868, 2002.
[7] Y. Sun and B. J. Nelson, "Microrobotic cell injection," 2001 IEEE International Conference on Robotics and Automation (ICRA), pp. 620-625, 2001.
[8] W. Wang, X. Liu, D. Gelinas, B. Ciruna, and Y. Sun, "A fully automated robotic system for microinjection of zebrafish embryos," PLoS One, vol. 2, no. 9, p. e862, 2007.
[9] L. S. Mattos and D. G. Caldwell, "A fast and precise micropipette positioning system based on continuous camera-robot recalibration and visual servoing," 2009 IEEE International Conference on Automation Science and Engineering (CASE), pp. 609-614, 2009.
[10] J. Bert, S. Dembélé, and N. Lefort-Piat, "Performing weak calibration at the microscale, application to micromanipulation," 2007 IEEE International Conference on Robotics and Automation (ICRA), pp. 4937-4942, 2007.
[11] Y. Zhou and B. J. Nelson, "Calibration of a parametric model of an optical microscope," Optical Engineering, vol. 38, no. 12, pp. 1989-1996, 1999.
[12] M. Ammi, V. Frémont, and A. Ferreira, "Automatic camera-based microscope calibration for a telemicromanipulation system using a virtual pattern," IEEE Transactions on Robotics, vol. 25, no. 1, pp. 184-191, 2009.
[13] M. Ammi, V. Frémont, and A. Ferreira, "Flexible microscope calibration using virtual pattern for 3D telemicromanipulation," 2005 IEEE International Conference on Robotics and Automation (ICRA), pp. 3888-3893, 2005.
[14] G. Li and N. Xi, "Calibration of a micromanipulation system," 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1742-1747, 2002.
[15] L. Yang, I. Paranawithana, K. Youcef-Toumi, and U.-X. Tan, "Automatic vision-guided micromanipulation for versatile deployment and portable setup," IEEE Transactions on Automation Science and Engineering, vol. 15, no. 4, pp. 1609-1620, 2018.
[16] L. Yang, K. Youcef-Toumi, and U.-X. Tan, "Towards automatic robot-assisted microscopy: An uncalibrated approach for robotic vision-guided micromanipulation," 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5527-5532, 2016.
[17] L. Yang, K. Youcef-Toumi, and U.-X. Tan, "Detect-Focus-Track-Servo (DFTS): A vision-based workflow algorithm for robotic image-guided micromanipulation," 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 5403-5408, 2017.
[18] L. Yang, I. Paranawithana, K. Youcef-Toumi, and U.-X. Tan, "Self-initialization and recovery for uninterrupted tracking in vision-guided micromanipulation," 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1127-1133, 2017.
[19] I. Paranawithana, W. X. Yang, and U.-X. Tan, "Tracking extraction of blastomere for embryo biopsy," 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 380-384, 2015.
[20] T. Kawase, S. S. Sugano, T. Shimada, and I. Hara-Nishimura, "A direction-selective local-thresholding method, DSLT, in combination with a dye-based method for automated three-dimensional segmentation of cells and airspaces in developing leaves," The Plant Journal, vol. 81, no. 2, pp. 357-366, 2015.
[21] Q. Liu, L. J. Zhang, and X. P. Liu, "Microscopic image segmentation of Chinese herbal medicine based on region growing algorithm," Advanced Materials Research, vol. 756, pp. 4110-4115, 2013.
[22] A. Chakraborty and A. K. Roy-Chowdhury, "Context aware spatiotemporal cell tracking in densely packed multilayer tissues," Medical Image Analysis, vol. 19, no. 1, pp. 149-163, 2015.
[23] A. Garrido and N. P. De La Blanca, "Applying deformable templates for cell image segmentation," Pattern Recognition, vol. 33, no. 5, pp. 821-832, 2000.
[24] C. Wählby, I. M. Sintorn, F. Erlandsson, G. Borgefors, and E. Bengtsson, "Combining intensity, edge and shape information for 2D and 3D segmentation of cell nuclei in tissue sections," Journal of Microscopy, vol. 215, no. 1, pp. 67-76, 2004.
[25] G. Fernandez, M. Kunt, and J. P. Zrÿd, "A new plant cell image segmentation algorithm," International Conference on Image Analysis and Processing, pp. 229-234, 1995.
[26] S. Beucher, "The watershed transformation applied to image segmentation," Scanning Microscopy Supplement, pp. 299-314, 1992.
[27] K. Mkrtchyan, D. Singh, M. Liu, V. Reddy, A. Roy-Chowdhury, and M. Gopi, "Efficient cell segmentation and tracking of developing plant meristem," 2011 IEEE International Conference on Image Processing (ICIP), pp. 2165-2168, 2011.
[28] Z. H. Chau, I. Paranawithana, L. Yang, and U.-X. Tan, "Plant cell segmentation with adaptive thresholding," 2018 25th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), pp. 1-6, 2018.
[29] M. Han, Y. Zhang, C. Y. Shee, T. F. Chia, and W. T. Ang, "Plant cell injection based on autofocusing algorithm," 2008 IEEE Conference on Robotics, Automation and Mechatronics (RAM), pp. 439-443, 2008.
[30] Y. Zhang, M. Han, C. Y. Shee, and W. T. Ang, "Automatic vision guided small cell injection: feature detection, positioning, penetration and injection," 2007 International Conference on Mechatronics and Automation (ICMA), pp. 2518-2523, 2007.
[31] I. Paranawithana, L. Yang, Z. Chen, K. Youcef-Toumi, and U.-X. Tan, "Scene-adaptive fusion of visual and motion tracking for vision-guided micromanipulation in plant cells," 2018 IEEE International Conference on Automation Science and Engineering (CASE), pp. 1434-1440, 2018.
[32] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.
[33] G. Bradski and A. Kaehler, "Learning OpenCV: Computer vision in C++ with the OpenCV library," O'Reilly Media, Inc., 2013.
[34] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," in Readings in Computer Vision, Elsevier, pp. 726-740, 1987.
[35] P. H. Torr and A. Zisserman, "MLESAC: A new robust estimator with application to estimating image geometry," Computer Vision and Image Understanding, vol. 78, no. 1, pp. 138-156, 2000.
