Introduction Surgical microscopes produce a stream of valuable image data during surgery, which can be utilised directly by assisting systems. In our work, we investigate an image-processing algorithm that registers movements within the field of view based on natural features, without the need for fiducials.
Method The setup consists of a surgical microscope, a camera and a computer with a frame grabber to gain access to the image frames. Below the microscope, a temporal bone model was mounted on a Stewart platform that allows for precisely controlled reference movements. The model had been prepared for cochlear implantation. For tracking, the algorithm identifies features in two consecutive images of the microscope's video stream. The shift of these features allows an estimation of the situs' movement relative to the microscope. To evaluate the algorithm's precision, defined displacements of the specimen were introduced by the robotic system as a reference, and the algorithm's output was compared to this reference.
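A minimal sketch of such a frame-to-frame shift estimation could look as follows; the concrete detector and matcher of our system are not specified here, so ORB features, brute-force matching and a median displacement (via OpenCV) are assumptions chosen for illustration only.

    # Minimal sketch of frame-to-frame shift estimation; ORB features,
    # brute-force Hamming matching and a median displacement are
    # illustrative assumptions, not the exact method used in our system.
    import cv2
    import numpy as np

    def estimate_shift(prev_frame, curr_frame):
        """Estimate the dominant image shift (dx, dy) in pixels between two frames."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(prev_frame, None)
        kp2, des2 = orb.detectAndCompute(curr_frame, None)
        if des1 is None or des2 is None:
            return None  # too little texture to track
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        if len(matches) < 10:
            return None
        # Displacement of each matched feature between the two frames.
        shifts = np.array([np.array(kp2[m.trainIdx].pt) - np.array(kp1[m.queryIdx].pt)
                           for m in matches])
        # The median is a simple robust estimate of the dominant shift.
        return np.median(shifts, axis=0)

Converting the estimated pixel shift into micrometres additionally requires the optical magnification and the camera calibration of the microscope.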
Results The average translational error for a linear shift of the whole situs was 93.9 μm with a standard deviation of 118.4 μm, which is below the total registration error of ≤ 500 μm proposed by Schipper et al. for navigational systems at the lateral skull base.
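The translational error per displacement step can be computed as the Euclidean distance between the estimated and the commanded shift; the sketch below illustrates this metric with placeholder values, not our measured data.

    # Sketch of the error metric: Euclidean distance between estimated and
    # reference displacement per step. The numbers are placeholders for
    # illustration, not measured data.
    import numpy as np

    def translational_errors(estimated_um, reference_um):
        est = np.asarray(estimated_um, dtype=float)
        ref = np.asarray(reference_um, dtype=float)
        return np.linalg.norm(est - ref, axis=1)

    reference_um = [[500.0, 0.0], [0.0, 500.0]]     # commanded shifts (placeholder)
    estimated_um = [[430.0, 60.0], [-50.0, 540.0]]  # algorithm output (placeholder)
    errors = translational_errors(estimated_um, reference_um)
    print(f"mean: {errors.mean():.1f} um, SD: {errors.std(ddof=1):.1f} um")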
Discussion and conclusion We have shown that our system is able to detect linear image shift with only small deviations. In the next steps, we will investigate spiral movements, different phantoms, the system's ability to detect moving objects within the field of view, and the translation into clinical settings. Of particular interest are the compensation of unintended movements during robotic-assisted surgery and the tracking of moving objects in the field of view.