
Computerized 3D craniofacial landmark identification and analysis.


Craniofacial surgeons rely on landmarks to perform analyses of craniofacial data, such as measuring distances and angles between anatomical landmarks; for forensic experts, landmarks provide the starting points to reconstruct a face from a skull, for instance that of a murder victim. Yet landmark identification and placement is more of an art than a science: the tasks are tedious and time consuming, and the outcome very often depends on one's previous experience, i.e. it is error-prone. Advances in scanning technology have brought a multitude of non-invasive devices, such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), each capable of capturing fine details of internal organs as volumetric (3D) data. Practitioners now deal with huge and complex sets of digital data, and the challenge is how to use the digital processor (the computer) to replace, and hopefully improve on, the manual tasks of landmark identification and placement.

Despite the proliferation of computer-assisted methods, practitioners still face difficulties in either accessing or using them effectively. Accessibility is normally limited to those who can afford them: many of these systems were developed for commercial purposes, and so come as packages that require a high investment in both the software and the high-end hardware needed to run them. The proprietary nature of such systems also hinders further improvement, as users must abide by licensing restrictions. Furthermore, effective usage of such monolithic systems is dampened by the variety of unimportant features and functionalities (feature bloat) that users rarely touch. This has motivated us to develop a computer-assisted 3D landmark identification and analysis system that is widely accessible (based on an open-source platform) and pragmatic for the common tasks performed by craniofacial experts.

The aim of this study is thus to describe the development of an efficient and convenient method for identifying craniofacial landmarks on CT-generated data using the Visualization Toolkit (VTK) by Kitware, Inc.


Craniofacial landmark identification research is intensively conducted by experts from both computer science and the related medical science fields. Here we briefly review two computer-assisted systems for landmark identification and analysis, namely MIMICS and CASSOS. A group of researchers previously conducted landmark placement analysis based on anatomical regions; their approach uses 3D CAD files for visualization and landmark placement [3]. The emphasis was on setting up a sizeable craniofacial database, whose data are then fed to a medical imaging application such as MIMICS for landmark identification and measurement. The MIMICS system translates the scanned data into CAD formats, after which the data can be processed for purposes such as visualization, segmentation, model construction and landmark manipulation. This approach relies heavily on the MIMICS software environment, and thus inherits some of the limitations of commercial systems highlighted above. One of the major comments noted by the researchers was the lack of flexibility for further development.

The Computer Assisted Simulation System for Orthognathic Surgery (CASSOS) is cephalometric analysis and surgical planning software. It can be used for cephalometric measurement on X-ray images with a model-based approach (Figure 1). Subject data are loaded into the system, and landmark placement is performed with reference to a general model. There are around 71 landmarks, on both soft tissue and hard tissue. The user pinpoints the landmarks in the order indicated by the sequence numbers in the model image; both the position and the sequence of each landmark are taken into account in the cephalometric analysis. A report is then produced using the Eastman Analysis [4] on the landmarks.


The CASSOS system operates on 2D data, and landmark identification is performed manually based on a mental mapping between the reference model image and the actual X-ray of a patient. Perhaps its main drawback compared to MIMICS is that it operates only on 2D X-ray images.

None of the computer-assisted systems we have surveyed so far (including MIMICS and CASSOS) detects landmarks automatically and directly from the raw data. Some offer a semi-automatic approach, first extracting features such as crest lines that then form the basis for identifying and placing landmarks manually. This suggests that developing a fully automated landmark identification system remains a challenging research problem. In this part of the paper, we survey various techniques explored by researchers on cephalometric analysis with craniofacial landmarks. Grau et al. [5] proposed a method for landmark identification on 2D X-ray images. This 2D implementation consists of two phases: a line detection model and a point detection model. Line detection uses zero-crossing detection of the Laplacian of Gaussian, while point detection uses mathematical morphology techniques. The method provides very convincing landmark detection, but suffers from high computational cost. A neural network approach using machine learning to locate 25 commonly used landmarks is reported by El-Feghi et al. [8]. It is pattern-based and requires a predefined contour map; such an approach needs high-quality images and is hard to extend when additional landmarks are required. Another work reports automated 2D cephalometric analysis on X-rays with reference landmark identification [6]. The process is divided into two stages: a training stage, in which image processing and pattern matching techniques identify the reference landmarks, and a recognition stage, in which landmarks are located in the target image with an active shape model (ASM). More recent work addresses automatic localization of cephalometric landmarks on digitized 2D X-ray images [7]. This approach also uses reference cephalometric images: the target images are decomposed into several regions, each with three main control landmarks, and the points in the target image are mapped to the reference image by an affine transform matrix. Landmark locations are then corrected using edge detection, image histograms and curve fitting. The study claims more than 90% accuracy.
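The region-wise mapping step just described can be sketched in code: three control landmark correspondences determine a unique 2D affine transform, which can then be applied to every point in the region. This is an illustrative sketch only (the point values in the usage below are invented, and the cited work's actual implementation may differ); the transform is recovered here with Cramer's rule.

```python
# Sketch: recover the 2D affine transform determined by three control
# landmark correspondences (target region -> reference image), then
# apply it to other points. Illustrative only; not the code of [7].

def affine_from_points(src, dst):
    """Solve for ((a, b, tx), (c, d, ty)) so that
    x' = a*x + b*y + tx and y' = c*x + d*y + ty
    map each src control landmark onto the corresponding dst landmark."""
    (x1, y1), (x2, y2), (x3, y3) = src
    # Determinant of the system matrix [[x, y, 1], ...]
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    if det == 0:
        raise ValueError("control landmarks are collinear")

    def solve(v1, v2, v3):
        # Cramer's rule for one output coordinate (v = a*x + b*y + t)
        a = v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)
        b = x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)
        t = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2))
        return a / det, b / det, t / det

    xs = [p[0] for p in dst]
    ys = [p[1] for p in dst]
    return solve(*xs), solve(*ys)

def apply_affine(rows, p):
    (a, b, tx), (c, d, ty) = rows
    x, y = p
    return (a * x + b * y + tx, c * x + d * y + ty)
```

For example, mapping the unit triangle (0,0), (1,0), (0,1) onto (2,3), (4,3), (2,6) recovers a transform that scales x by 2 and y by 3 and translates by (2, 3).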

This short survey of computer-assisted systems and research on craniofacial landmark identification highlights several points. First, the data on which these systems and methods operate can be divided into 2D and 3D, with recent interest focusing on the latter. Second, automatic landmark identification is still an open research question, and an open invitation to focus more research effort on the area before it is mature enough to be incorporated into a fully automated system. In this paper we present a computerized landmark identification system that addresses the first point (3D data), while its 3D interactive environment provides a convenient (and to some extent efficient) working space for detecting and placing landmarks on a target skull.


Figure 2 shows the flowchart of our 3D landmark identification and analysis system. It consists of five related phases: data acquisition, visualization, data preprocessing, reference model construction, and landmark placement.


The two boxes at the bottom of Figure 2 (with broken-line boundaries) are not implemented in the current prototype system; they are included to show the overall view of the system once it is fully completed.

A. Data Acquisition

The sample data sets used in this research were captured with a GE LightSpeed Plus CT scanner at the Department of Radiology, Hospital Universiti Sains Malaysia. The scans were conducted axially, with typically 160 to 210 slices for a human head. The data produced by the CT scanner are sent to a GE Advantage workstation.


The GE Advantage workstation is a CT data analysis and repository system based on Sun's Solaris operating system. The CT data can be transferred over the network and saved in DICOM (Digital Imaging and Communications in Medicine) format for local use.

B. Visualization

There are many visualization tools available. One of them is the Visualization Toolkit (VTK) from Kitware: a powerful, open-source library built in C++. Besides C++, it offers wrappings for several programming languages, such as Java, Python and Tcl/Tk [9], so programmers can choose the language that best suits their needs. VTK supports multiple platforms, which makes development more flexible and the resulting applications more portable.

For visualizing volumetric data there are generally two methods, iso-surface rendering and volume rendering, each with its own advantages and concerns. Iso-surface extraction in this study uses the conventional Marching Cubes algorithm. The visualization pipeline is illustrated in Figure 4.
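A core step of Marching Cubes is easy to illustrate in isolation: given the scalar values at the two corners of a voxel edge, the iso-surface crosses the edge at the point where the (linearly interpolated) value equals the chosen iso-value. The sketch below shows just this edge-interpolation step, not the full case-table algorithm (which VTK implements for us).

```python
# Sketch of the linear edge interpolation at the heart of Marching
# Cubes: find where the iso-value crosses a voxel edge p1 -> p2 whose
# endpoints carry scalar values v1 and v2.

def interpolate_edge(p1, p2, v1, v2, iso):
    """Return the 3D point on segment p1->p2 where the scalar field,
    assumed linear along the edge, equals `iso`; None if no crossing."""
    if (v1 - iso) * (v2 - iso) > 0:
        return None                     # both corners on the same side
    if v1 == v2:
        return p1                       # degenerate edge: any point qualifies
    t = (iso - v1) / (v2 - v1)          # fraction of the way from p1 to p2
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

In VTK this is done internally by the Marching Cubes filter for every edge of every voxel that straddles the iso-value; the resulting intersection points become the vertices of the extracted triangles.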

C. Data preprocessing

The actual DICOM data is fairly large, which makes data loading and manipulation inefficient. Preprocessing reduces the data size by extracting the iso-surface of interest and sub-sampling. The result is desirably small, but aliased.

We apply Gaussian smoothing to eliminate the jagged artifacts. The final data is saved in VTK's native .vtk data format.
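The idea behind the smoothing step can be shown with a minimal one-dimensional sketch: convolve the data with a normalized Gaussian kernel so that sharp jumps are averaged away. This is illustrative only; the prototype uses VTK's own (3D) Gaussian filtering, not this code.

```python
import math

# Illustrative 1D sketch of Gaussian smoothing (not the VTK filter used
# in the paper): build a normalized Gaussian kernel and convolve it with
# a scalar profile, clamping indices at the boundaries.

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]           # normalize so weights sum to 1

def smooth(signal, sigma=1.0, radius=2):
    kernel = gaussian_kernel(sigma, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)   # clamp at edges
            acc += w * signal[idx]
        out.append(acc)
    return out
```

Applied to a jagged profile such as [0, 1, 0, 1, 0], the output values are pulled toward their neighborhood average, which is exactly the visual effect of removing "jaggies" from the extracted surface.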


D. Reference model construction

Model construction refers to the process of building some kind of ideal "average" human skull. There are two ways of approaching this. One is to apply cephalometric measurements together with computer graphics tools to derive the reference model; the other is based on a database of human skull images. Our collaboration with Hospital Universiti Sains Malaysia (HUSM) provides a sufficiently large data set for reference model construction.

Due to the nature of the scans, not all skulls are in the same orientation. Osteometric scaling is applied to construct a 3D Cartesian coordinate system that fits the sample data. Coordinate system integration is performed by fitting the digitized craniofacial data into a standard position defined by the Frankfort Horizontal Plane (Figure 5) [9, 10].
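One way to realize such an orientation step is to rotate the data so that the plane defined by three landmarks becomes horizontal, i.e. its normal points along +z. The sketch below (our illustration, not necessarily the method of [9, 10]) builds that rotation with the Rodrigues formula; the landmark values in the usage are invented.

```python
import math

# Hypothetical sketch of plane alignment: given three landmarks that
# define a reference plane, compute the rotation (Rodrigues formula)
# that takes the plane's normal onto (0, 0, 1), making the plane flat.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(v):
    l = math.sqrt(sum(x * x for x in v))
    return tuple(x / l for x in v)

def rotation_to_z(normal):
    """Rotation matrix taking `normal` onto the +z axis."""
    n = norm(normal)
    v = cross(n, (0.0, 0.0, 1.0))        # rotation axis (unnormalized)
    c = n[2]                             # cos(angle) = n . z
    s = math.sqrt(sum(x * x for x in v)) # sin(angle)
    if s < 1e-12:                        # already aligned, or anti-parallel
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]] if c > 0 else \
               [[1, 0, 0], [0, -1, 0], [0, 0, -1]]
    k = norm(v)
    K = [[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]]
    # R = I + s*K + (1 - c)*K^2
    return [[(1 if i == j else 0) + s * K[i][j]
             + (1 - c) * sum(K[i][m] * K[m][j] for m in range(3))
             for j in range(3)] for i in range(3)]

def rotate(R, p):
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))
```

After rotating all points by the matrix returned for the plane's normal, the three defining landmarks share the same z value, so the reference plane is horizontal.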


E. Landmark placement

A test application was created using Tcl/Tk, with functions to create a landmark, delete a landmark, save a landmark set, load a craniofacial data set, and so on.

Here we briefly describe how to create a new landmark:

* A point is created under the mouse pointer, with coordinates (px, py, pz).

* From the 2D view, px and py are set to the current mouse location.

* Check whether a data cell is selected at the current location on the 2D screen (i.e., is the pointer on the surface?).

* If not, quit; otherwise update the point coordinates to 3D coordinate values in the visual 3D environment.

* Add the point to the landmark set list and run the pipeline to render an actor (a small sphere) on the skull surface.

Another function is called to delete a landmark: the point is selected in the landmark set list and the corresponding small sphere actor is removed.

The landmark set is then saved in .vtk format as a saved-landmark cloud. The next time a user wishes to view the skull with its landmark placements, the user can simply load the landmarks from the previously saved cloud.
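The bookkeeping behind create/delete/save/load can be sketched as follows. This is an illustration of the workflow only: the class name is hypothetical, the save format here is a plain-text stand-in rather than the actual .vtk polydata format the prototype writes, and rendering of the sphere actors is only noted in comments.

```python
# Illustrative sketch of the landmark-set bookkeeping described above.
# Hypothetical names; the real application stores landmark clouds in
# VTK's .vtk format and renders a sphere actor per landmark.

class LandmarkSet:
    def __init__(self):
        self.points = {}                 # landmark name -> (x, y, z)

    def create(self, name, x, y, z):
        self.points[name] = (x, y, z)    # app also renders a sphere actor

    def delete(self, name):
        self.points.pop(name, None)      # app also removes the sphere actor

    def save(self, path):
        with open(path, "w") as f:
            for name, (x, y, z) in sorted(self.points.items()):
                f.write(f"{name} {x} {y} {z}\n")

    @classmethod
    def load(cls, path):
        ls = cls()
        with open(path) as f:
            for line in f:
                name, x, y, z = line.split()
                ls.points[name] = (float(x), float(y), float(z))
        return ls
```

A saved cloud can later be re-loaded and plotted onto any skull that shares the integrated coordinate system, which is exactly the reuse described above.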

A landmark cloud placed on a reference model is here called a landmark model. The landmark model can serve as a reference for surgeons when a sample skull is loaded into the application with the integrated coordinate system. Since the landmark model may not exactly fit the subject's head, a manual adjustment is then required.


Following El-Feghi et al.'s paper [8], this study chose a set of landmarks for the reference model. These landmarks are selected and organized into a table, based on the landmark clouds saved with our application, with their respective x-y-z coordinates.

Figure 6 shows a snapshot of the landmark coordinates table for three subjects from the database.

Figure 7 shows a statistical analysis of the landmark coordinates integrated with the patients' landmarks. This snapshot covers a portion of the full patient set.
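The kind of per-landmark statistics shown in such a table can be sketched as a small aggregation over subjects: for each landmark name, compute the mean and standard deviation of its x, y and z coordinates. The landmark names and values in the usage below are invented for illustration.

```python
import math

# Sketch of per-landmark statistics across subjects: mean and
# (population) standard deviation of each coordinate. Illustrative
# only; names and values are hypothetical.

def landmark_stats(subjects):
    """subjects: list of dicts mapping landmark name -> (x, y, z).
    Returns name -> (mean_xyz, std_xyz) over subjects that have it."""
    stats = {}
    names = set().union(*(s.keys() for s in subjects))
    for name in sorted(names):
        coords = [s[name] for s in subjects if name in s]
        n = len(coords)
        mean = tuple(sum(c[i] for c in coords) / n for i in range(3))
        std = tuple(math.sqrt(sum((c[i] - mean[i]) ** 2 for c in coords) / n)
                    for i in range(3))
        stats[name] = (mean, std)
    return stats
```

Aggregates like these are what make a landmark cloud usable as a reference model: the means give the "average" position of each landmark, and the standard deviations indicate how much it varies across patients.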



For the research trial, several sets of DICOM data were loaded into our application.

Figure 8 shows the visualization result for a digitized skull dataset before landmark identification is performed. The results are then shown in Figure 9, where landmarks are plotted based on the table model (shown in green). Note that in this example the Gaussian smoothing technique was not applied.



For a second sample dataset, the Gaussian smoothing technique was applied. Figure 10 shows the visualization result before landmark identification was performed; the results are then shown in Figure 11.



The identified landmarks can be seen clearly in both Figure 9 and Figure 11. Since we wanted results for both rough and smoothed surfaces, the two renderings can easily be compared; although both derive from the same patient's dataset, the differences between them are obvious.

Interactivity improves if the coordinates of a landmark plotted on the skull are shown promptly after placement. The VTK programming environment provides four types of coordinate system: model, world, view and display. Here the coordinates of each landmark are shown in world coordinates, and only the coordinates of the selected landmark are displayed. Figure 12 shows the selected landmark in a different colour (red) from the unselected landmarks (green).

Distances between landmarks are also features of interest in cephalometric analysis. The distance between two landmarks can be calculated with the 3D Euclidean distance formula,

Dist = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2)

where (x1, y1, z1) are the coordinates of one point and (x2, y2, z2) of the other. The Euclidean distance between any two landmarks can also be displayed in our system (Figure 13).
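The measurement above is a direct implementation of the 3D Euclidean distance formula; a minimal sketch:

```python
import math

# 3D Euclidean distance between two landmarks, as in the formula above.
def landmark_distance(p1, p2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))
```

For example, landmarks at (0, 0, 0) and (3, 4, 12) are sqrt(9 + 16 + 144) = 13 units apart.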



In this study, we examined how craniofacial visualization and landmark identification can be achieved using the Visualization Toolkit (VTK) and the wrapper language Tcl/Tk. We also created a program to identify and manipulate landmarks on the hard tissue of digitized craniofacial data. The results are as expected.

In future work, landmark identification could be automated. We have looked at surface construction techniques such as Jules Bloomenthal's polygonization of implicit surfaces [12]. By studying the surface-forming algorithm, we aim to extract feature lines on the iso-surface of the craniofacial data; landmarks could then be identified easily along these feature lines.


We thank Universiti Sains Malaysia for providing financial assistance in the form of a USM Fellowship to one of the researchers.

Manuscript received 16 Feb 2009

Manuscript revised 14 May 2009


[1] M. Meadows, "Computer-Assisted Surgery: An Update," FDA Consumer Magazine, U.S. Food and Drug Administration, Maryland, vol. 39, no. 4, July-August 2005.

[2] L. G. Brown, "A survey of image registration techniques," ACM Computing Surveys (CSUR), vol. 24, no. 4, 1992.

[3] W. A. R. Wan Harun, Z. A. Rajion, et al., "3D CT Imaging for Craniofacial Analysis Based on Anatomical Regions," Proc. 27th Annual Conf. IEEE Engineering in Medicine and Biology, Shanghai, China, September 1-4, 2005.

[4] A. M. Cohen, "Uncertainty in cephalometrics," British Journal of Orthodontics, vol. 11, no. 1, pp. 44-48, January 1984.

[5] V. Grau, M. Alcaniz, M. C. Juan, et al., "Automatic Localization of Cephalometric Landmarks," Journal of Biomedical Informatics, vol. 34, no. 3, pp. 146-156, 2001.

[6] W. Yue, D. Yin, C. Li, G. Wang, T. Xu, "Automated 2-D Cephalometric Analysis on X-ray Images by a Model-Based Approach," IEEE Trans. Biomedical Engineering, vol. 53, no. 8, pp. 1615-1623, August 2006.

[7] H. Mohseni, S. Kasaei, "Automatic Localization of Cephalometric Landmarks," IEEE International Symposium on Signal Processing and Information Technology, pp. 396-401, 2007.

[8] I. El-Feghi, M. A. Sid-Ahmed, M. Ahmadi, "Automatic localization of craniofacial landmarks for assisted cephalometry," Pattern Recognition, vol. 37, no. 3, pp. 609-621, 2004.

[9] W. J. Schroeder, "VTK: an Open-Source Visualization Toolkit," Kitware, Inc., 2001.

[10] Z. Rajion, D. Suwardhi, et al., "Coordinate Systems Integration for Development of Malaysian Craniofacial Database," Proc. 27th Annual Conf. IEEE Engineering in Medicine and Biology, Shanghai, China, September 1-4, 2005.

[11] L. G. Farkas, ed., Anthropometry of the Head and Face, 2nd ed., New York: Raven Press, 1994.

[12] H. Goto, Y. Hasegawa, M. Tanaka, "Efficient Scheduling Focusing on the Duality of MPL Representatives," Proc. IEEE Symp. Computational Intelligence in Scheduling (SCIS 07), pp. 57-64, Dec. 2007, doi:10.1109/SCIS.2007.357670.

Pan Zheng, Bahari Belaton, Rozniza Zaharudin, Arash Irani

School of Computer Sciences

Universiti Sains Malaysia

Penang Malaysia


Zainul Ahmad Rajion

School of Dental Sciences

Universiti Sains Malaysia

Penang Malaysia

COPYRIGHT 2009 College of Information Technology, Universiti Tenaga Nasional
No portion of this article can be reproduced without the express written permission from the copyright holder.
Publication: Electronic Journal of Computer Science and Information Technology (eJCSIT)
Article Type: Report
Date: May 1, 2009