Hi! I am Han YANG (Simon). I am currently a Ph.D. candidate majoring in Computer and Information Engineering (CIE) at the Medical Micro Robotics Lab (MMRL), The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), supervised by Prof. Zhuoran ZHANG. Before that, I received my B.S. degree in 2023 and worked as a research assistant at the Key Laboratory for Artificial Intelligence and Multi-Modal Data Processing of the Department of Education of Guangdong Province at HKBU (Zhuhai), working with Prof. Amy ZHANG.
My research interests include robotic micromanipulation systems, cell surgery, and 3D reconstruction in microscopic environments.
You can email me at hayang12 [at] link [dot] cuhk [dot] edu [dot] cn.
Ph.D. in Computer and Information Engineering, 2023 ~ now
The Chinese University of Hong Kong, Shenzhen, China
BEng in Computer Science, 2019 ~ 2023
Hong Kong Baptist University (Zhuhai), China
Summer School in Medical Image Processing, 2022.07 ~ 2022.09
Nanyang Technological University, Singapore
Obtaining three-dimensional information, especially z-axis depth, is crucial for robotic micromanipulation. Because depth sensors such as LiDAR are unavailable in micromanipulation setups, traditional approaches such as depth from focus and depth from defocus infer depth directly from microscopic images and suffer from poor resolution. Alternatively, micromanipulation tasks can obtain accurate depth by detecting contact between an end-effector and an object (e.g., a cell); despite its high accuracy, this approach yields only sparse depth data because of its low efficiency. This paper addresses the challenge of acquiring dense depth information during robotic cell micromanipulation. A weakly supervised depth completion network is proposed that takes a cell image and the sparse depth data obtained by contact detection as input and generates a dense depth map. A two-stage data augmentation method is proposed to augment the sparse depth data, and the depth map is further optimized by a network refinement method. Experimental results show a depth prediction error of less than 0.3 μm, demonstrating the accuracy and effectiveness of the method. This deep learning pipeline can be seamlessly integrated into robotic micromanipulation tasks to provide accurate depth information.
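To make the input/output interface of such a pipeline concrete, here is a minimal PyTorch sketch of a depth completion network that fuses a cell image with a contact-detected sparse depth map, plus a masked loss that supervises only at measured pixels. This is an illustrative sketch, not the published architecture; all layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class SparseDepthCompletionNet(nn.Module):
    """Sketch: fuse a grayscale cell image with a sparse depth map
    (zeros where no contact measurement exists) into a dense depth map.
    Channel counts and network depth are illustrative only."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),   # image + sparse depth
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),              # dense depth map
        )

    def forward(self, image, sparse_depth):
        x = torch.cat([image, sparse_depth], dim=1)
        return self.decoder(self.encoder(x))

def masked_l1(pred, sparse_depth):
    # Weak supervision: penalize the prediction only at pixels where a
    # contact-detected depth sample exists (mask = sparse_depth > 0).
    mask = (sparse_depth > 0).float()
    return (mask * (pred - sparse_depth).abs()).sum() / mask.sum().clamp(min=1)
```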
Depth estimation in monocular microscopy poses significant challenges, particularly for motile cells. The limited depth of field (DOF) of a microscope and the intrinsic movement of motile cells add further difficulty. Traditional depth from focus (DFF) and depth from defocus (DFD) techniques are limited by time-consuming focus adjustments and complex dynamic defocusing models. To address these limitations, this paper models depth estimation as a multi-class classification task and introduces a fine-grained feature extraction block to discern subtle distinctions between focal planes. Our model achieves a 97.05% success rate in estimating the depth of motile sperm at a speed of 21 fps.
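The classification framing can be sketched in a few lines: each class corresponds to one discrete focal plane, and the predicted class index maps back to a physical z position. The sketch below assumes a bin count and spacing for illustration and uses a generic backbone in place of the fine-grained feature extraction block, which is not reproduced here.

```python
import torch
import torch.nn as nn

NUM_DEPTH_BINS = 40      # assumed discretization of the focal range
BIN_SPACING_UM = 1.0     # assumed spacing between adjacent focal planes

class DepthClassifier(nn.Module):
    """Single-frame depth as multi-class classification: one class per
    discrete focal plane. The backbone is a stand-in, not the paper's
    fine-grained feature extraction block."""
    def __init__(self, num_bins=NUM_DEPTH_BINS):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_bins)

    def forward(self, frame):
        return self.head(self.backbone(frame))  # train with nn.CrossEntropyLoss

def bin_to_depth_um(bin_idx):
    # Map a predicted class index back to a physical z position.
    return bin_idx * BIN_SPACING_UM
```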
The robotic micromanipulation system, as shown in the figure, consists of an inverted microscope (ECLIPSE Ti2, Nikon Inc.) equipped with a motorized XY-stage (ProScan, Prior Scientific Inc.) that provides a 75 mm travel range with a resolution of 0.01 μm. A 3-DOF motorized micromanipulator (uMp-285, Sensapex Inc.) holds a glass micropipette for contact detection to obtain sparse depth. The micropipette is pulled from glass tubes with a micropipette puller (P-97, Sutter Inc.) and has a tip with an outer diameter of 500 nm and an inner diameter of 300 nm. Visual feedback is supplied by a camera (acA1920-40u, Basler Inc.), enabling image-based visual servo control for micropipette tip localization and detection of cell surface deformation. All user interface interactions and deep learning computations run on a computer with an RTX 3070 Ti GPU.
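For readers unfamiliar with image-based visual servoing, a single control step reduces to mapping a pixel error at the detected tip into a stage command. The sketch below assumes an axis-aligned camera and stage and uses placeholder calibration values, not the lab's actual settings.

```python
# One image-based visual servo step: drive the XY-stage so the detected
# micropipette tip moves toward a target pixel.
UM_PER_PIXEL = 0.25   # assumed camera-to-stage calibration
GAIN = 0.5            # assumed proportional gain, 0 < GAIN <= 1

def servo_step(tip_px, target_px):
    """Return a (dx, dy) stage command in micrometers from pixel error."""
    ex = target_px[0] - tip_px[0]
    ey = target_px[1] - tip_px[1]
    return (GAIN * ex * UM_PER_PIXEL, GAIN * ey * UM_PER_PIXEL)
```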
With the rapid development of artificial intelligence, cancer cell classification and recognition are becoming increasingly automated. Although classification and recognition methods for some common cancers are mature, cervical cancer, a common cancer among middle-aged women, is often neglected in diagnosis because of the low incidence of malignant tumors. In recent years, however, the incidence of cervical cancer has increased year by year and has drawn growing attention. In this paper, an efficient deformable convolutional neural network system (HQNet) is constructed to identify cervical cancer cells at different stages of development.
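HQNet itself is not reproduced in this excerpt, but its core building block, the deformable convolution, can be sketched with torchvision's implementation: a plain convolution predicts per-pixel sampling offsets that let the kernel adapt its receptive field to irregular cell shapes. Channel sizes here are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """One deformable-convolution block: a plain conv predicts per-pixel
    sampling offsets, which DeformConv2d uses to deform its sampling
    grid. Channel sizes are illustrative, not HQNet's."""
    def __init__(self, in_ch=3, out_ch=32, k=3):
        super().__init__()
        # 2 offsets (x, y) per kernel position -> 2 * k * k channels
        self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))

block = DeformableBlock()
y = block(torch.randn(1, 3, 64, 64))   # -> shape (1, 32, 64, 64)
```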
The neural radiance field (NeRF) is becoming a widely used method for scene rendering in novel view synthesis. With the growing deployment of wind turbines, the need for integrity inspection of turbines operating in natural or extreme environments is increasing, so rendering real wind turbine scenes and reconstructing turbines in 3D is crucial. The original neural radiance field method requires scenes or objects with abundant features or complex textures; wind turbine surfaces, however, are smooth and texture-free, which produces blurring and ghosting in the rendered scene. Our method, WTBNeRF, is a network dedicated to wind turbine scene rendering and 3D reconstruction of wind turbines. We cast conical frustum rays that cover each pixel's footprint in more detail than single pixel-centered rays, effectively reducing jaggedness and blurring in smooth, low-texture wind turbine scenes. Obtaining accurate camera poses for low-texture objects and scenes is also a challenge, so we use a pre-trained camera-pose-estimating neural radiance field network to predict the camera poses of the dataset, removing the requirement of knowing the true camera parameters in advance. Moreover, we simplify the network structure, which significantly reduces training time: about 10 times faster than NeRF on the multi-scale wind turbine dataset we produced.
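The conical-frustum idea is the same one Mip-NeRF formalizes as integrated positional encoding: each frustum segment is approximated by a Gaussian, and the encoding uses the expected sine/cosine under that Gaussian, which damps high frequencies for wide footprints. Below is a sketch of that encoding only (the WTBNeRF-specific design is not reproduced); the frequency count is an assumption.

```python
import torch

def integrated_pos_enc(mean, var, num_freqs=10):
    """Integrated positional encoding (Mip-NeRF style): encode a Gaussian
    approximation of a conical frustum instead of a single point.
    For x ~ N(mu, sigma^2):  E[sin(2^l x)] = sin(2^l mu) * exp(-0.5 * 4^l sigma^2).
    mean, var: (..., 3) tensors; returns (..., 3 * 2 * num_freqs)."""
    scales = 2.0 ** torch.arange(num_freqs, dtype=torch.float32)  # (L,)
    mu = mean[..., None, :] * scales[:, None]                     # (..., L, 3)
    s2 = var[..., None, :] * scales[:, None] ** 2                 # (..., L, 3)
    damp = torch.exp(-0.5 * s2)                                   # frequency damping
    feat = torch.cat([torch.sin(mu) * damp, torch.cos(mu) * damp], dim=-1)
    return feat.flatten(-2)                                       # (..., L * 6)
```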
Feel free to contact me :)