Keywords
neuronavigation - augmented reality - camera devices - superimposition - tractography
Introduction
Image-guided surgery systems, so-called neuronavigation systems, were introduced into
neurosurgery in the late 1980s and early 1990s, earlier than into other surgical fields.
These systems are now routinely used in many neurosurgical procedures and have proven
to be important neurosurgical tools.[1] [2] The most popular type of neuronavigation
system is the optical system (based on the reflection of infrared light).[3] One
shortcoming of this optical system is that neurosurgeons must look away from the
surgical field to see the navigation display, because the navigation monitors are
located far from the eyepieces of the microscope and the monitors of the neuroendoscope.
A navigation system that can be used without requiring the surgeon to look away from
the surgical field would be a vast improvement.
Augmented reality (AR) is a recently developed technology with the potential to
overcome the need to look away from the surgical field. AR uses computers to add
information to a real environment, and its usefulness has gradually become recognized
in many fields. For medical applications, AR overlays a virtual image, provided by
three-dimensional (3D) reconstruction of computed tomography (CT) or magnetic
resonance imaging (MRI) data, onto an actual video stream or image.[4] [5] [6]
Some neurosurgical research groups have published reports on AR navigation, but only
a few papers have described AR neuronavigation in clinical neurosurgical cases.[1]
[7] [8] [9] The system described by Kockro et al. required a special handheld probe
with an integrated lipstick-shaped camera.[1] The system described by King et al.
required bone-implanted markers and a locking acrylic dental stent.[7] The system
described by Kawamata et al. required reference markers mounted on a goggle-type
frame.[8] The system described by Paul et al. required a surgical microscope, an
optical localizer based on light-emitting diodes, and a dynamic reference frame
attached to the patient's head.[9] In contrast, our AR neuronavigation system does
not require such special equipment and uses simple, commercially available camera
devices. Unlike the above-mentioned AR neuronavigation systems, ours can superimpose
not only tumors and vessels but also tractography. The purpose of this study was
to evaluate the feasibility and effectiveness of an AR neuronavigation system based
on Web camera images.
Patients and Methods
The proposed system was applied in three patients (one with a glioblastoma and two
with convexity meningiomas). All three patients underwent operations with this
navigation system. We developed the system and validated its utility by superimposing
tumors and vessels that had been segmented in advance onto a Web camera image.
Preoperative segmentation and calibration of the Web cameras were required.
Navigation System
Segmentation
The open-source software 3D Slicer (Brigham and Women's Hospital, Boston, Massachusetts,
USA) was used as the software platform. Thin-slice sagittal Gd-enhanced cerebral
T1-weighted magnetic resonance (MR) images of the patients, with fiducial markers
attached, were acquired in Digital Imaging and Communications in Medicine (DICOM)
format 1 day before the operation. The MR images were acquired with a 1.5-tesla MRI
scanner; 200 axial cerebral T1-weighted slices of 1.2-mm thickness were obtained in
DICOM format. The DICOM data were imported into 3D Slicer, and 3D models of tumors
and vessels were created and stored in Visualization Toolkit (VTK) file format as 3D
surface models. For corticospinal tractography, diffusion tensor imaging (DTI) data
were acquired with a 3.0-tesla MRI scanner. We created the DTI tractography using the
labelmap seeding function of 3D Slicer, generating tract fibers from the cerebral
peduncle with the posterior limb of the internal capsule set as the region of interest.
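As an illustration of the model-making step, the following is a minimal sketch (not the authors' exact pipeline) of how a binary tumor labelmap exported from 3D Slicer could be converted into a VTK surface model; the file names are placeholders.

```python
# Minimal sketch: build a VTK surface model from a binary tumor labelmap,
# analogous to the model-making step described above. File names are placeholders.
import vtk

# Read the labelmap (e.g., exported from 3D Slicer after manual segmentation).
reader = vtk.vtkNIFTIImageReader()
reader.SetFileName("tumor_labelmap.nii")

# Extract the isosurface at the label boundary with marching cubes.
surface = vtk.vtkMarchingCubes()
surface.SetInputConnection(reader.GetOutputPort())
surface.SetValue(0, 0.5)  # iso-value between background (0) and label (1)

# Smooth the mesh slightly so the 3D model renders cleanly.
smoother = vtk.vtkWindowedSincPolyDataFilter()
smoother.SetInputConnection(surface.GetOutputPort())
smoother.SetNumberOfIterations(15)

# Store the result in VTK file format as a 3D surface model.
writer = vtk.vtkPolyDataWriter()
writer.SetInputConnection(smoother.GetOutputPort())
writer.SetFileName("tumor_model.vtk")
writer.Write()
```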
Web Camera and Its Calibration
We used two different Web cameras in this study. One was a 2-megapixel Web camera
(Qcam Pro 9000 QCAM-200S; Logicool Co., Tokyo, Japan), and the other was a 300,000-pixel
Web camera (Qcam Connect; Logicool Co.). Both cameras had frame rates of 30 frames per
second. The former was used by hand intraoperatively (handheld type, Cases 1 and 3)
([Fig. 1A]), and the latter was mounted on the assistant's head (headband type, Case 2)
([Fig. 1B]). Optical markers were attached to each Web camera. The open-source OpenCV
library (Willow Garage, Menlo Park, California, USA) was used to calibrate the Web
cameras. The first step of calibration was to extract each camera's intrinsic parameters
using snapshots of a chessboard in 10 different positions ([Fig. 2]). The extrinsic
parameters, which describe the relationship between the 3D world coordinate system and
the camera coordinate system, were then calculated using a chessboard with optical
markers. We used Zhang's method for this camera calibration.[10]
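The following minimal sketch illustrates these two calibration steps with OpenCV's standard chessboard routines; the pattern size, square size, and file names are assumptions, not the authors' exact settings.

```python
# Sketch of the two-step calibration described above using OpenCV's
# chessboard routines (not the authors' exact code).
import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners of the chessboard (assumed)
SQUARE = 25.0      # square size in mm (assumed)

# 3D coordinates of the chessboard corners in the board's own frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for i in range(10):  # snapshots of the chessboard in 10 different positions
    img = cv2.imread(f"chessboard_{i}.png", cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(img, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Step 1: intrinsic parameters (camera matrix K, lens distortion) by Zhang's method.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img.shape[::-1], None, None)

# Step 2: extrinsic parameters (rotation, translation) relating the world
# coordinate system, defined by the optically tracked chessboard, to the camera.
ok, rvec, tvec = cv2.solvePnP(obj_points[0], img_points[0], K, dist)
```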
Fig. 1 The two types of Web cameras used. Optical markers were attached to each Web camera.
(A) Handheld. (B) Headband.
Fig. 2 Chessboard used for calibration. Each Web camera's intrinsic parameters were
extracted using snapshots of the chessboard in different positions. The extrinsic
parameters, which describe the relationship between the three-dimensional (3D) world
coordinate system and the camera coordinate system, were then calculated using the
chessboard with optical markers.
Registration
Our navigation system comprised the 3D Slicer software, the infrared optical tracking
sensor Polaris (Northern Digital Inc., Waterloo, Canada), and the Web cameras. A
commercial surgical navigation system, StealthStation Treon plus (Medtronic, Coal Creek,
Colorado, USA), was used in addition to 3D Slicer. In the operating room, the Web
cameras with optical markers were connected to the navigation system, and Polaris was
used as the position sensor to detect and track them. We performed point-based
registration in 3D Slicer with fiducial markers[11] and a Medtronic reference frame;
the reference frame was connected to the head fixation holder, so no new frame was
needed. 3D Slicer displayed navigational information on a 20-inch monitor divided into
two windows. During the skin incision and craniotomy, overlaid images from the Web
cameras were displayed. The image-guided surgery team at our institute controlled the
system after being informed of the surgeon's intention.
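As an illustration of the point-based registration step, the following minimal sketch computes the rigid transform between fiducial positions localized in the MR images and the same fiducials digitized in tracker (Polaris) space, using the standard SVD-based (Kabsch) solution; the function name and inputs are assumptions, not the authors' code.

```python
# Sketch of point-based rigid registration between fiducials in image space
# and the same fiducials in tracker space (standard Kabsch/SVD solution).
import numpy as np

def register_points(image_pts, tracker_pts):
    """Return rotation R and translation t mapping image_pts onto tracker_pts.

    image_pts, tracker_pts: (N, 3) arrays of corresponding fiducial positions.
    """
    p_mean = image_pts.mean(axis=0)
    q_mean = tracker_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (image_pts - p_mean).T @ (tracker_pts - q_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q_mean - R @ p_mean
    return R, t
```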
Results
We were able to overlay the images in all three cases ([Figs. 3], [4]). Before
performing AR, the registration error was computed; the fiducial registration errors
were 1.79, 1.67, and 1.65 mm. [Fig. 3] shows the overlaid image of the tumor and skin
during the operation. In Case 1, accuracy could not be directly evaluated because the
tumor was not on the surface, although the overlay matched the outlines of the external
ear and the skin reasonably well. In Cases 2 and 3, the tumor was present on the brain
surface, and the gap between the outline of the actual tumor and the outline of the
created tumor model was visible. AR accuracy, measured with a paper ruler in the plane
of the operative field, was ∼2 to 3 mm, which we considered suitable for clinical use.
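For reference, the fiducial registration error (FRE) reported above is conventionally defined as the root-mean-square residual distance over the N fiducial pairs after registration:

\[ \mathrm{FRE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\lVert R\,p_i + t - q_i \right\rVert^{2}} \]

where p_i and q_i are the positions of the i-th fiducial in image space and tracker space, respectively, and (R, t) is the rigid transform obtained from the point-based registration.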
The two patients with convexity meningiomas were discharged without new neurological
deficits after total removal of their tumors. The patient with a glioblastoma had no
new neurological deficits, but residual tumor remained at the corpus callosum and
inside the resection cavity. This patient was discharged in good general condition
after radiochemotherapy (temozolomide 75 mg/m² for 42 days plus a total of 60 Gy).
Fig. 3 (A) Augmented reality navigation monitor using a handheld Web camera: tumor
(green) and skin (ochre) superimposed onto the patient before disinfection in Cases 1
and 2. (B) Augmented reality navigation monitor using a headband-type Web camera in
Case 2. Left: the virtual 3D graphical image. Right: the superimposed image.
Fig. 4 (A) Augmented reality navigation monitor using a handheld Web camera in Case 3.
The tumor (red) and motor tractography (green) were superimposed onto the patient's
head before disinfection and after dural incision. (B) Upper: dual three-dimensional
(3D) layout display in 3D Slicer. Lower: the distance between the bipolar tip and the
motor tractography was measured.
Illustrative Case
This case involved a left parietal convexity meningioma in a 64-year-old right-handed
woman who had undergone an operation for breast cancer 2 years previously. At admission,
she had no neurological deficits. She was positioned supine and underwent an operation
via a left parietal craniotomy. The skin incision was marked directly in relation to
the expected subcortical tumor position. After the craniotomy, the tumor and
corticospinal tract were superimposed on the monitor. During tumor resection, the
navigation monitor displayed a dual 3D layout; that is, the 3D scene was shown from two
different directions in real time. The monitor also indicated the distance between the
tip of the bipolar forceps and the corticospinal tract ([Fig. 4B]), which was very close
to the tumor. During the operation, we manually measured the distance from the
tractography to the bipolar tip using the measurement function of 3D Slicer, as shown
in [Fig. 4B]. With monitoring of subcortical motor evoked potentials, the tumor was
removed with only a small residual amount remaining. The patient was discharged with no
new neurological deficits.
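As an illustration of this measurement (a sketch assuming the tract is available as polyline vertices in patient space; in the operation itself, 3D Slicer's interactive measurement function was used):

```python
# Sketch of the tip-to-tract distance measurement illustrated in Fig. 4B:
# the shortest distance from the tracked bipolar tip to the corticospinal
# tract, with the fiber geometry reduced to its polyline vertices.
import numpy as np

def tip_to_tract_distance(tip, fiber_points):
    """tip: (3,) position in patient space; fiber_points: (N, 3) tract vertices."""
    return float(np.min(np.linalg.norm(fiber_points - tip, axis=1)))
```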
Discussion
This is a clinical report on a new AR neuronavigation system for brain tumors adjacent
to the brain surface. A main advantage of this navigation system is that it consists of
the open-source software 3D Slicer and Web cameras; thus, any facility could easily set
it up. 3D Slicer is superior to commercial systems in expressing 3D images, facilitating
a more intuitive understanding of 3D spatial relationships. Its 3D display function
allows real-time navigation while watching 3D displays from two different angles, termed
the dual 3D layout display. In addition, the distance between points on objects and
surgical instrument tips can easily be measured.
We superimposed segmented objects onto the Web camera images on the monitor by
connecting a Web camera to 3D Slicer. We were able to overlay not only tumors and
vessels but also motor tractography, which differs from past reports on camera-based AR
navigation.[1] [12] Visualization of the corticospinal tract on the AR display proved
useful for helping the surgeon avoid inadvertently damaging the tract. Moreover, the
positions of tumors, vessels, and tracts can be readily identified during preoperative
planning of the skin incision.
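As an illustration of the overlay step, the following minimal sketch (assumed variable names, not the authors' implementation) projects the vertices of a segmented 3D model into a live camera frame using the calibration results:

```python
# Sketch: project segmented 3D model vertices into the Web camera image using
# the intrinsic calibration (K, dist) and the tracked camera pose (rvec, tvec).
import cv2
import numpy as np

def overlay_model(frame, model_points, rvec, tvec, K, dist, color=(0, 255, 0)):
    """Draw projected model vertices onto a BGR camera frame and return it."""
    pts2d, _ = cv2.projectPoints(model_points.astype(np.float32),
                                 rvec, tvec, K, dist)
    for u, v in pts2d.reshape(-1, 2):
        u, v = int(round(u)), int(round(v))
        # Skip points that project outside the image.
        if 0 <= u < frame.shape[1] and 0 <= v < frame.shape[0]:
            cv2.circle(frame, (u, v), 1, color, -1)
    return frame
```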
We used two types of Web cameras. The handheld camera could be moved freely, although
its range of movement was somewhat limited by the position of the optical markers
attached to it. By viewing from the lateral side, we were able to ascertain the depth
of the lesion.
However, the operation must be interrupted whenever the handheld camera is moved. The
headband-type camera can continuously capture the assistant's line of sight, but the
assistant's position is limited by the position of the optical markers and by the fact
that he or she cannot see the monitor. In addition, it is difficult to move the Web
cameras without shaking them and to capture images at the operator's eye level.
3D Slicer has two features that differ from those of commercially available navigation
systems. The first is related to 3D images: 3D Slicer has notable advantages in
displaying objects in 3D space with highly intuitive and customizable models. It can
display arbitrary cross-sectional planes in 3D space according to the position of
surgical instruments, with customizable offsets. In addition, 3D graphic objects such
as tumors or vessels are visualized by optimal volume- and surface-rendering techniques,
the parameters of which can be finely adjusted according to the surgeon's intention.
The second feature is related to the superimposition function for medical camera
devices. Our system can utilize not only Web cameras but also rigid neuroendoscopes and
microscopes fitted with optical markers. The proposed system can therefore also be
applied to navigation surgery based on neuroendoscopes and microscopes, given
sufficiently accurate camera-calibration techniques.[13] The depth-perception problem
can also be overcome by the dual 3D layout display of 3D Slicer.
Certainly, commercially available navigation systems such as those from BrainLab and
Medtronic can insert superimposed two-dimensional (2D) images into one optical channel
of the microscope. However, 3D spatial relationships are unclear on superimposed 2D
images. 3D Slicer can simultaneously provide virtual 3D images and the corresponding
superimposed images from various view angles ([Fig. 3B]). Virtual 3D images allow
surgeons to intuitively perceive the depth of lesions, which is difficult to grasp on
conventional superimposed images. In addition, target lesions can be evaluated
preoperatively by AR navigation with a Web camera in a simpler, less expensive manner.
It is very difficult to evaluate AR error in three dimensions, and no such evaluation
method has yet been established. We therefore took several steps to evaluate AR error
in three dimensions. The first step was to measure superimposed phantom images at
different angles; the 3D error can be estimated from the error at each angle.[14] The
second step was to check the AR error on the patient's scalp image before beginning
the operation. In this study, we confirmed that the previously segmented scalp model
did not deviate far from the patient's actual scalp image at various angles. The final
step was to measure targets in the operative field: we measured the distance between
the actual brain tumors and the previously segmented tumor models in two dimensions
with a paper ruler.
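As a rough sketch of how per-angle errors can be combined (our assumption about the geometry, not a formula stated in [14]): if one view of the phantom yields in-plane error components (e_x, e_y) and a second, approximately orthogonal view yields (e_z, e_y'), the 3D error can be estimated as

\[ e_{3D} \approx \sqrt{e_x^{2} + e_y^{2} + e_z^{2}} \]

because each view constrains the overlay error only within its own image plane.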
The AR navigation system still has some problems. First, it is difficult to accurately
judge the depth of tumors from 2D displays. Second, the system is not suitable for deep
tumors because of the limited performance of Web cameras. Third, there is no established
method for precise measurement of AR error; although it can be measured in one plane,
it is difficult to measure accurately in 3D. Fourth, the AR navigation monitor must be
positioned where the operators can see it; the monitor position and the timing of
presenting AR images to the neurosurgeons must be chosen so that eye movement is
minimized.
However, the AR neuronavigation system has considerable potential in neurological
surgery. We have herein described our clinical experience using this system in the
operating room. In the future, we plan to continue the evaluation of its clinical
utility by using it in operations involving neuroendoscopes and operating microscopes.
Conclusion
AR technology was examined with Web cameras in neurosurgical operations. The proposed
navigation system may help surgeons to perform safe surgical procedures and confirm
their decisions. The results of this study suggested that this technology was useful
in clinical neurosurgical procedures, particularly for brain tumors close to the brain
surface.