Friday, August 6, 2010
In this tutorial we introduce basic concepts behind the Visualization Toolkit (VTK). An overview of the system, plus some detailed examples, will assist you in learning this system. The tutorial targets researchers of any discipline who have 2D or 3D data and want more control over the visualization process than a turn-key system can provide. It also assists developers who would like to incorporate VTK into an application as a visualization or data processing engine. Although this tutorial can only provide an introduction to this extensive toolkit, we’ve provided references to additional material.
What is VTK?
VTK is an open-source (see the sidebar “Open Source Breakout”), portable (WinTel/Unix), object-oriented software system for 3D computer graphics, visualization,
and image processing. Implemented in C++, VTK also supports Tcl, Python, and Java language bindings,
permitting complex applications, rapid application prototyping, and simple scripts. Although VTK
doesn’t provide any user interface components, it can be integrated with existing widget sets such as Tk or Motif.
VTK provides a variety of data representations including unorganized point sets, polygonal data, images, volumes, and structured, rectilinear, and unstructured grids. VTK comes with readers/importers and writers/
exporters to exchange data with other applications. Hundreds of data processing filters are available to operate on these data, ranging from image convolution to Delaunay triangulation. VTK’s rendering model supports 2D, polygonal, volumetric, and texture-based approaches that can be used in any combination.
VTK is one of several visualization systems available today. AVS was one of the first commercial systems available. IBM’s Data Explorer (DX), originally a commercial product, is now open source and known as OpenDX. NAG Explorer and Template Graphics Amira (see http://www.tgs.com/Amira/index.html) are other well-known commercial systems.
VTK is a general-purpose system used in a variety of applications, as seen in Figure 1. Because VTK is open source, faculty at many universities (including Rensselaer Polytechnic Institute, State University of New York at Stony Brook, the Ohio State University, Stanford, and Brigham and Women’s Hospital) use VTK to teach courses and as a research tool. National labs such as Los Alamos are adapting VTK to large-scale parallel processing.
Commercial firms are building proprietary applications on top of the open-source foundation, including medical visualization, volume visualization, oil exploration, acoustics, fluid mechanics, finite element analysis, and surface reconstruction from laser-digitized, unorganized point-clouds.
VTK began in December 1993 as companion software to the text The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics by Will Schroeder, Ken Martin, and Bill Lorensen (Prentice Hall). In 1998 the second edition of the text appeared, with additional authors Lisa Avila, Rick Avila, and Charles Law. Since that time a sizable community has grown up around the software, including dozens of developers who contribute everything from bug fixes to full-blown class implementations.
These community efforts have helped the software evolve. For example, David Gobbi in the Imaging
Research Laboratories at the John P. Robarts Research Institute, University of Western Ontario, has reworked
VTK’s transformation classes and is now an active developer.
VTK consists of two major pieces: a compiled core (implemented in C++) and an automatically generated interpreted layer. The interpreted layer currently supports Tcl, Java, and Python.

C++ core
Data structures, algorithms, and time-critical system functions are implemented in the C++ core. Common design patterns such as object factories and virtual functions ensure portability and extensibility. Since VTK is independent of any graphical user interface (GUI), it doesn’t depend on the windowing system. Hooks into the window ID and event loop let developers plug VTK into their own applications. An abstract graphics model (described in the next section) achieves graphics portability.
Open Source Breakout
A model of software development called open source is gaining acceptance in the software world. Although the exact definition of open source remains debatable, the basic premise is that the source code is freely available to anyone who wants it. This differs greatly from commercial software, freeware, and shareware, all of which are normally distributed in binary format only. The availability of source code to a wide audience creates many opportunities and advantages in the software development process.

Recently, several high-profile projects have brought this model to the attention of the media and general public. Those projects include the Linux operating system, the Apache Web server (running 50 percent of the World Wide Web), and sendmail (the backbone for much of the e-mail sent today). Although people have shared source code since the beginning of computers, new business models, software development tools, and the Internet have allowed the practice to expand greatly in the past five years.

Open-source software has many benefits. Eric Raymond in The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary (O’Reilly) argues that open-source software development (the bazaar model) is inherently more scalable than closed-team development (the cathedral model). With more eyes looking at source code, bugs can be discovered and fixed faster. In addition, new developers join the development team at no extra cost. This has created more reliable and portable software with faster development cycles than many closed commercial offerings. With many developers in diverse geographical areas, testing becomes even more important.
In the past 10 years business models have emerged to support open-source development. It may seem impossible for a company to survive by giving away software. However, companies can thrive around an open-source project. Some common ways of generating revenue include consulting, training, adding features, selling technical support, building proprietary end-user applications on top of open-source libraries, and selling development tools.