Frontiers Commentary Article
Front. Neurosci., 15 September 2009 | https://doi.org/10.3389/neuro.01.021.2009
A primer of visual stimulus presentation software
Section Neurophysiology and Neuroinformatics, Centre for Neuroscience, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
A commentary on
Vision Egg: an open-source library for realtime visual stimulus generation
by Andrew D. Straw
The visual system has been the most widely studied sensory system in neuroscience over recent decades. A reliable and flexible visual stimulus presentation tool is one of the most important prerequisites for a thorough analysis of its sensory processing characteristics. While almost all sensory-systems labs have created home-grown solutions, these are not easily transferable from one lab to another, or from one presentation platform to another. In addition, some stimuli are hard to generate with the desired accuracy in timing, color and luminance, 3D rendering, or stereopsis.
Vision Egg (Straw, 2008) is a widely used software library, originally designed to probe the visual system of the fly. It is an open-source, platform-independent software package built on top of Python (as the programming language) and OpenGL (for graphics instructions). For a well-versed programmer, Vision Egg achieves its goals very well, providing a powerful and highly optimized system for visual stimulus presentation and interaction with hardware – including the ability to run experiments remotely across a network (via TCP/IP). Historically, the Vision Egg software strongly adheres to an object-oriented model of programming, which can be hard to understand for relatively inexperienced programmers. For instance, temporal control of experiments in Vision Egg is achieved predominantly through presentation loops, whereby the user sets an object to run for a given length of time, attaches stimuli to it, assigns it to a screen, and then tells the object to “go”. This “mainloop-and-callback” mechanism of flow control has advantages where stimuli continue to run between trials. The alternative of an explicit sequence of control statements can, however, also be implemented (see Figure 2 of Straw, 2008).
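To make the mainloop-and-callback idea concrete, the following is a minimal, self-contained sketch of that flow-control pattern in plain Python. It deliberately does not use the actual Vision Egg API – the class and parameter names here are illustrative assumptions, and the frame clock is simulated rather than synchronized to the display's vertical retrace:

```python
class Stimulus:
    """A drawable object; a real stimulus would issue OpenGL calls."""
    def __init__(self, name):
        self.name = name
        self.frames_drawn = 0

    def draw(self, t):
        # A real implementation would render to the screen at time t.
        self.frames_drawn += 1


class Presentation:
    """Runs a loop for a fixed duration, calling back into each
    attached stimulus once per (simulated) frame."""
    def __init__(self, go_duration, stimuli, frame_rate=60.0):
        self.go_duration = go_duration  # seconds
        self.stimuli = stimuli
        self.frame_rate = frame_rate

    def go(self):
        # Simulated frame clock; a real loop would block on the
        # buffer swap to lock onto the monitor's refresh.
        n_frames = int(self.go_duration * self.frame_rate)
        for frame in range(n_frames):
            t = frame / self.frame_rate
            for stim in self.stimuli:
                stim.draw(t)


grating = Stimulus("grating")
p = Presentation(go_duration=0.5, stimuli=[grating])
p.go()
print(grating.frames_drawn)  # 30 callbacks: 0.5 s at 60 Hz
```

The point of the pattern is that the experimenter configures objects up front and then cedes control to `go()`, which calls back into each stimulus every frame – in contrast to the alternative of an explicit sequence of control statements that draws and swaps each frame by hand.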
Table 1 (adapted from Peirce, 2007) compares various features of four well-known stimulus presentation programs. Two of these (Vision Egg and PsychoPy) have very similar philosophies, are both implemented in Python, and originally differed in their low-latency real-time capabilities. The most substantive differences between them today are that Vision Egg offers relatively simple perspective-corrected stimuli exploiting the 3D nature of OpenGL, while PsychoPy has an automated luminance calibration utility and interfaces more easily with certain types of hardware. Furthermore, the primary development platform of Vision Egg is GNU/Linux, while for PsychoPy it appears to be Windows.
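As an aside on luminance calibration: the core of such a utility is gamma linearization. A display maps a requested intensity v roughly to luminance L = v^γ, so pre-distorting values with the inverse exponent 1/γ makes the emitted luminance linear in the requested value. The sketch below is a plain-Python illustration under that power-law assumption – it is not PsychoPy code, and the gamma value is assumed rather than measured:

```python
# Illustrative power-law display model; 2.2 is a common textbook
# value, assumed here for the sketch rather than measured.
MEASURED_GAMMA = 2.2

def linearize(requested, gamma=MEASURED_GAMMA):
    """Return the value to send to the display so that emitted
    luminance is (approximately) proportional to `requested` (0..1)."""
    return requested ** (1.0 / gamma)

# With the correction applied, the display's power-law response
# yields back a luminance proportional to the requested value:
for v in (0.0, 0.25, 0.5, 1.0):
    luminance = linearize(v) ** MEASURED_GAMMA  # display response model
    assert abs(luminance - v) < 1e-9
```

A real calibration utility would instead fit γ (or build a full lookup table) from photometer measurements taken at several intensity levels.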
Another interesting issue, discussed briefly in Straw’s paper, is the feasibility of setting up a stimulus library in the form of a database that could be downloaded and used with different presentation environments. As anyone who has developed databases knows, there is more involved in such a project than simply storing bitmaps (or sequences thereof) at a standard pixel resolution. For example, issues of frame rate, display luminance and position calibration, and synchronization with data acquisition and other hardware would all need to be addressed. Beyond that, the creation of a universal language for specifying sensory stimuli would be of great interest.
Altogether, this paper by Straw on the Vision Egg gives a fairly technical account of many relevant hardware and software considerations, yet remains a highly readable primer on what to weigh when choosing or extending visual stimulus software.
This work was supported by the Initiative and Networking Fund of the Helmholtz Association within the Helmholtz Alliance on Systems Biology.