

Summary

A detailed protocol to analyze object selectivity of parieto-frontal neurons involved in visuomotor transformations is presented.

Abstract

Previous studies have shown that neurons in parieto-frontal areas of the macaque brain can be highly selective for real-world objects, disparity-defined curved surfaces, and images of real-world objects (with and without disparity), in a manner similar to that described in the ventral visual stream. In addition, parieto-frontal areas are believed to convert visual object information into appropriate motor outputs, such as the pre-shaping of the hand during grasping. To better characterize object selectivity in the cortical network involved in visuomotor transformations, we provide a battery of tests intended to analyze the visual object selectivity of neurons in parieto-frontal regions.

Introduction

Human and non-human primates share the capacity to perform complex motor actions, including object grasping. To perform these tasks successfully, the brain must transform intrinsic object properties into motor commands. This transformation relies on a sophisticated network of dorsal cortical areas located in the parietal and ventral premotor cortex1,2,3 (Figure 1).

From lesion studies in monkeys and humans4,5, we know that the dorsal visual stream - originating in primary visual cortex and directed towards posterior parietal cortex - is involved in both spatial vision and the planning of motor actions. However, most dorsal stream areas are not devoted to a single type of processing. For instance, the anterior intraparietal area (AIP), one of the end-stage areas of the dorsal visual stream, contains a variety of neurons that fire not only during grasping6,7,8, but also during visual inspection of the object7,8,9,10.

Similar to AIP, neurons in area F5, located in the ventral premotor cortex (PMv), also respond during visual fixation and object grasping, which is likely to be important for the transformation of visual information into motor actions11. The anterior portion of this region (subsector F5a) contains neurons responding selectively to three-dimensional (3D, disparity-defined) images12,13, while the subsector located on the convexity (F5c) contains neurons characterized by mirror properties1,3, firing both when an animal performs an action and when it observes one. Finally, the posterior F5 region (F5p) is a hand-related field with a high proportion of visuomotor neurons responsive to both the observation and the grasping of 3D objects14,15. Adjacent to F5, area 45B, located in the inferior ramus of the arcuate sulcus, may also be involved in both shape processing16,17 and grasping18.

Testing object selectivity in parietal and frontal cortex is challenging because it is difficult to determine which features these neurons respond to and where their receptive fields lie. For example, if a neuron responds to a plate but not to a cone, which feature of these objects drives this selectivity: the 2D contour, the 3D structure, the orientation in depth, or a combination of many different features? To determine the critical object features for neurons that respond during object fixation and grasping, it is necessary to employ various visual tests using images of objects and reduced versions of the same images.

A sizeable fraction of the neurons in AIP and F5 responds not only to the visual presentation of an object, but also when the animal grasps the object in the dark (i.e., in the absence of visual information). Such neurons may not respond to an image of an object that cannot be grasped. Hence, the visual and motor components of the response are intimately connected, which makes it difficult to investigate the neuronal object representation in these regions. Because such visuomotor neurons can only be tested with real-world objects, determining which features drive them requires a flexible system for presenting different objects at different positions in the visual field and at different orientations. This can only be achieved by means of a robot capable of presenting different objects at different locations in visual space.

This article provides an experimental guide for researchers interested in the study of parieto-frontal neurons. In the following sections, we describe the general protocol used in our laboratory for the analysis of grasping and visual object responses in awake macaque monkeys (Macaca mulatta).

Protocol

All technical procedures were performed in accordance with the National Institutes of Health's Guide for the Care and Use of Laboratory Animals and EU Directive 2010/63/EU, and were approved by the Ethical Committee of KU Leuven.

1. General Methods for Extracellular Recordings in Awake Behaving Monkeys

  1. Train the animals to perform the visual and motor tasks required to address your specific research question. Ensure that the animal is able to flexibly switch between tasks during the same recording session in order to test the neuron extensively and obtain a better understanding of the features driving the neural response (Figures 2 and 3). 
    1. Train the animal in Visually-Guided Grasping (VGG; grasping ‘in the light’) to evaluate the visuomotor components of the response. Note: Independent of the task chosen, gradually restrict fluid intake starting at least three days before the training phase. 
      1. Restrain the monkey’s head for the whole duration of the experimental session.
      2. In the first sessions, hold the hand contralateral to the recording chamber at the resting position and help the animal to reach and grasp the object, giving manual reward after each attempt.
      3. Place the monkey’s hand back at the resting position at the end of each trial.
      4. Every few trials, release the hand of the monkey, and wait a few seconds to observe if the animal initiates the movement spontaneously.
      5. Apply manual reward whenever the monkey reaches towards the object.
      6. When the reaching phase is acquired correctly, help the animal to lift (or pull) the object and reward manually.
      7. As in 1.1.1.4 and 1.1.1.5, release the monkey’s hand, and wait a few seconds to observe if the animal initiates the movement spontaneously. Give reward whenever the movement is performed correctly.
      8. Correct the reaching, hand position, and wrist orientation as many times as necessary during the procedure.
      9. Repeat the steps above until the animal performs the sequence automatically.
      10. Load the automatic task, in which the animal is rewarded automatically when it performs the reach and grasp movements for a predetermined time.
      11. Gradually increase the holding time of the object.
      12. Introduce the laser that projects the fixation point at the base of the object. Then add the eye tracker to monitor the eye position around the object to be grasped. 
    2. Train the animal in Memory-Guided Grasping (MGG) to investigate the motor component of the response, isolated from the visual component of the stimulus.
      1. Restrain the monkey’s head.
      2. Follow the same steps described for the VGG, making sure that the animal maintains fixation on the laser within an electronically defined window throughout the task. In this version of the task, the light goes off at the end of the fixation period.
    3. Train the monkey in Passive Fixation to address visual responsiveness and shape selectivity. 
      1. Restrain the monkey’s head.
      2. Present the visual stimuli to the monkey using either a CRT (Passive fixation of 3D stimuli) or an LCD monitor (Passive Fixation of 2D stimuli).
      3. Present a fixation spot at the center of the screen, superimposed on the visual stimuli.
      4. Reward the animal after each presentation of the stimulus and gradually increase the fixation period until it reaches the duration required by the task.
  2. Perform surgery using sterile tools, drapes, and gowns.
    1. Anesthetize the animal with ketamine (15 mg/kg, intramuscularly) and medetomidine hydrochloride (0.01-0.04 mL/kg intramuscularly) and confirm the anesthesia regularly by checking the animal’s response to stimuli, heart rate, respiration rate and blood pressure. 
    2. Maintain general anesthesia (propofol, 10 mg/kg/h intravenously) and administer oxygen with a tracheal tube. Use a lanolin-based ointment to prevent eye dryness while under anesthesia.
    3. Provide analgesia using 0.5 cc of buprenorphine (0.3 mg/ml, intravenously). If the heart rate increases during the surgery, administer an extra dose. 
    4. Implant an MRI-compatible head post with ceramic screws and dental acrylic. Perform all survival surgeries under strict aseptic conditions. To adequately maintain the sterile field, use disposable sterile gloves, masks, and sterile instruments. 
    5. Guided by anatomical magnetic resonance imaging (MRI; Horsley-Clark coordinates), make a craniotomy above the area of interest and implant the recording chamber on the monkey’s skull. Use a standard recording chamber for single-unit extracellular recordings, or a multielectrode microdrive for the simultaneous recording of multiple neurons.
    6. After the surgery, discontinue the intravenous administration of propofol and wait until spontaneous breathing resumes. Do not leave the animal unattended until it has regained consciousness, and reintroduce the animal into its social group only after complete recovery.
    7. Provide post-operative analgesia as recommended by the institutional veterinarian; use, for example, meloxicam (5 mg/ml, intramuscularly).
    8. Wait 6 weeks after the surgery before starting the experiment. This allows a better anchorage of the head post to the skull and guarantees that the animal has fully recovered from the intervention.
  3. Localize the recording area using MRI (for single unit extracellular recordings) and computed tomography (CT; for multielectrode recordings).
    1. Fill glass capillaries with a 2% copper sulfate solution and insert them into a recording grid.
    2. Perform structural MRI (slice thickness: 0.6 mm). 
  4. Monitor the neural activity.
    1. Use tungsten microelectrodes with an impedance of 0.8 – 1 MΩ.
    2. Insert the electrode through the dura using a 23G stainless steel guide tube and a hydraulic microdrive. 
    3. For spike discrimination, amplify and filter the neural activity between 300 and 5,000 Hz. 
    4. For local field potential (LFP) recordings, amplify and filter the signal between 1 and 170 Hz.
  5. Monitor the eye signal.
    1. Adjust an infrared camera in front of the animal’s eyes to obtain an adequate image of the pupil and of the corneal reflex. 
    2. Sample the pupil position at 500 Hz with the infrared camera system.
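The fixation control used throughout these tasks (a square, electronically defined window around the fixation point, with eye position sampled at 500 Hz) can be sketched as a simple online check. This is a minimal sketch under stated assumptions: the function names and the (x, y) sample layout are illustrative, not part of the actual acquisition software.

```python
def in_fixation_window(eye_x, eye_y, fix_x, fix_y, half_width_deg=2.5):
    """Return True if the eye position (in degrees) falls inside a square
    fixation window of +/- half_width_deg centered on the fixation point."""
    return (abs(eye_x - fix_x) <= half_width_deg and
            abs(eye_y - fix_y) <= half_width_deg)

def fixation_held(samples, fix_x, fix_y, required_ms=500, sample_rate_hz=500):
    """Check whether the most recent eye samples (tuples of x, y in degrees,
    sampled at 500 Hz) stayed inside the window for the required duration,
    i.e., whether the monkey held fixation long enough to trigger the next
    task event (object illumination, stimulus onset, ...)."""
    n_required = int(required_ms * sample_rate_hz / 1000)
    if len(samples) < n_required:
        return False  # not enough samples yet to cover the interval
    recent = samples[-n_required:]
    return all(in_fixation_window(x, y, fix_x, fix_y) for x, y in recent)
```

In a real task loop this check would run on each new camera sample; a single sample outside the window aborts the trial, which matches the behavior described in the protocol.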

2. Investigating Object Selectivity in Dorsal Areas 

  1. Perform Visually-Guided Grasping (VGG).
    1. Choose the appropriate grasping setup depending on the research goal: the carousel setup or the robot setup (Figure 3). 
    2. For the carousel setup, run the VGG task:
      1. Let the monkey place the hand contralateral to the recorded hemisphere in the resting position in complete darkness to initiate the sequence. 
      2. After a variable time (intertrial interval: 2,000-3,000 ms), project a red laser (fixation point) at the base of the object (distance: 28 cm from the monkey’s eyes). If the animal maintains its gaze inside an electronically defined fixation window (±2.5°) for 500 ms, illuminate the object from above with a light source. 
      3. After a variable delay (300-1,500 ms), dim the laser (visual GO cue) to instruct the monkey to lift the hand from the resting position and reach, grasp, and hold the object for a variable interval (holding time: 300-900 ms). 
      4. Whenever the animal performs the whole sequence correctly, reward it with a drop of juice.
    3. Use a similar task sequence for the robot setup.
      1. As for the carousel setup, let the monkey place the hand contralateral to the recorded hemisphere in the resting position in complete darkness to initiate the sequence. 
      2. After a variable time (intertrial interval: 2,000-3,000 ms), illuminate the LED (fixation point) on the object (from within; distance: 28 cm from the monkey’s eyes). Again, if the animal maintains its gaze inside an electronically defined fixation window (±2.5°) for 500 ms, illuminate the object from within with a white light source.
      3. After a variable delay (300-1,500 ms), switch off the LED (visual GO cue) to instruct the monkey to lift the hand from the resting position and reach, grasp, and hold the object for a variable interval (holding time: 300-900 ms). 
      4. Whenever the animal performs the whole sequence correctly, reward it with a drop of juice.
    4. During the task, quantify the performance of the monkey, paying special attention to the timing. Measure both the time elapsed between the GO signal and the onset of the hand movement (reaction time) and the time between the start of the movement and the lift of the object (grasping time).
  2. Perform Memory-Guided Grasping (MGG; ‘Grasping in the dark’). Use the MGG task to determine if neurons are visuomotor or motor-dominant. 
    Note: The sequence is similar to that described for the VGG, but the object is grasped in total darkness.
    1. As in the VGG task, let the monkey place the hand contralateral to the recorded hemisphere in the resting position in complete darkness to initiate the sequence. 
    2. After a variable time (intertrial interval: 2,000-3,000 ms), use a red laser/LED to indicate the fixation point (at the base of the object for the carousel setup, at the center of the object for the robot setup; distance: 28 cm from the monkey’s eyes). If the animal maintains its gaze inside an electronically defined fixation window (±2.5°) for 500 ms, illuminate the object.
    3. After a fixed time (400 ms), switch off the light. 
    4. After a variable delay period (300-1,500 ms) following light offset, dim or switch off the fixation point (GO cue) to instruct the monkey to lift the hand and reach, grasp, and hold the object (holding time: 300-900 ms). 
    5. Whenever the animal performs the whole sequence correctly, give a drop of juice as a reward. 
  3. Perform Passive fixation. As for the VGG task, choose the most appropriate setup (carousel or robot setup) depending on the goal of the research. 
    Note: Two different passive fixation tasks can be performed: passive fixation of real-world objects (using the objects-to-be-grasped in the carousel and robot setups) and passive fixation of 3D/2D images of objects.
    1. Perform passive fixation of real-world objects. 
      1. Present the fixation point (red laser for the carousel setup projected at the base of the object and red LED in the robot setup).
      2. If the animal maintains its gaze inside an electronically defined fixation window (±2.5°) for 500 ms, illuminate the object for 2,000 ms. 
      3. If the animal maintains its gaze within the window for 1,000 ms, reward it with a drop of juice. 
    2. Perform passive fixation of 3D/2D images of objects.
      1. Present all visual stimuli on a black background (luminance: 8 cd/m²) using a monitor (resolution: 1,280 × 1,024 pixels) equipped with a fast-decay P46 phosphor and operated at 120 Hz (viewing distance: 86 cm).
      2. In the 3D tests, present the stimuli stereoscopically by alternating the left- and right-eye images on a display (CRT monitor), in combination with two ferroelectric liquid crystal shutters. Place these shutters in front of the monkey’s eyes, operate them at 60 Hz, and synchronize them to the vertical retrace of the monitor. 
      3. Start the trial by presenting a small square in the center of the screen (fixation point; 0.2° × 0.2°). If the eye position remains within an electronically defined 1° square window (much smaller than for real-world objects) for at least 500 ms, present the visual stimulus on the screen, for a total time of 500 ms. 
      4. When the monkey maintains a stable fixation until the stimulus offset, reward it with a drop of juice. 
      5. For an adequate study of shape selectivity, run a comprehensive battery of tests with 2D images during passive fixation task, in the following sequence.
      6. Run a Search test. Test the visual selectivity of the cell using a wide set of images (surface images; Figure 4A), including pictures of the object that is grasped in the VGG. For this and all subsequent visual tasks, compare the image evoking the strongest response (termed the ‘preferred image’) to a second image to which the neuron responds weakly (termed the ‘nonpreferred image’). If the neuron under study also responds to the images of objects, search for the specific stimulus components driving the cell’s responsiveness (Contour test, Receptive Field test, and Reduction test).
      7. Run a Contour test. From the original surface images of real objects (2D or 3D images containing texture, shading, and perspective), obtain progressively simplified versions of the same stimulus shape (silhouettes and outlines; Figure 4B). Collect at least 10 trials per condition in order to determine whether the neuron prefers the original surface, the silhouette, or the outline of the original shape.
      8. Run a Receptive Field (RF) test. To map the RF of a neuron, present the images of objects at different positions on a display (in this experiment, 35 positions; stimulus size of 3°), covering the central visual field19,20. To collect enough stimulus repetitions at all possible positions in a reasonable time, reduce the stimulus duration (flashed stimuli; stimulus duration: 300 ms, intertrial interval: 300 ms).
      9. Run a Reduction test with contour fragments presented at the center of the RF to identify the Minimum Effective Shape Feature (MESF). Generate the set of stimuli in Photoshop by cropping the contour of each of the original contour shapes along the main axes (Figure 3B). Define the MESF as the smallest shape fragment evoking a response that is at least 70% of the response to the intact outline and not significantly smaller than that response8.
      10. For a better estimate of position dependency (the effect of stimulus position on fragment selectivity), run two additional tests: a Reduction test with the fragments located at the position they occupied in the original outline shape, and a Reduction test with the fragments at the center of mass of the shape. 
      11. At this stage, run a new RF mapping using the MESF.
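The MESF criterion in the Reduction test (the smallest fragment evoking at least 70% of the intact-outline response and not significantly smaller than it) can be sketched as a small selection routine over per-trial firing rates. This is an illustrative sketch: the permutation test, the data layout (fragment size mapped to a list of trial rates), and the function names are assumptions, since the original analysis software is not specified in the protocol.

```python
import random
import statistics

def perm_test_one_sided(a, b, n_perm=5000, seed=0):
    """One-sided permutation test for H1: mean(a) < mean(b).
    Returns the fraction of label shuffles whose mean difference
    (b minus a) is at least as large as the observed one."""
    rng = random.Random(seed)              # seeded for reproducibility
    observed = statistics.mean(b) - statistics.mean(a)
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if statistics.mean(pb) - statistics.mean(pa) >= observed:
            count += 1
    return count / n_perm

def find_mesf(fragments, outline_trials, threshold=0.7, alpha=0.05):
    """Return the smallest fragment size whose mean response is at least
    `threshold` of the intact-outline response AND not significantly
    smaller than it; None if no fragment qualifies.
    `fragments` maps fragment size -> list of per-trial firing rates."""
    outline_mean = statistics.mean(outline_trials)
    for size in sorted(fragments):         # test smallest fragments first
        trials = fragments[size]
        if statistics.mean(trials) < threshold * outline_mean:
            continue                       # fails the 70% criterion
        p = perm_test_one_sided(trials, outline_trials)
        if p >= alpha:                     # not significantly smaller
            return size
    return None
```

For example, with an outline response near 30 spikes/s, a fragment averaging 10 spikes/s fails the 70% criterion, while a fragment matching the outline response qualifies as the MESF.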

Results

Figure 5 plots the responses of an example neuron recorded from area F5p, tested with four objects: two different shapes (a sphere and a plate) presented in two different sizes (6 and 3 cm). This particular neuron responded not only to the large sphere (the optimal stimulus; upper left panel), but also to the large plate (lower left panel). In comparison, the responses to the smaller objects were weaker (upper and lower right panels).

Discussion

A comprehensive approach to the study of the dorsal stream requires a careful selection of behavioral tasks and visual tests: visual and grasping paradigms can be employed either in combination or separately, depending on the specific properties of the region.

In this article, we provide examples of the neural activity recorded in both AIP and F5p in response to a subset of visual and motor tasks, but very similar responses can be observed in other frontal areas, such as areas 45B and F5a.

Disclosures

The authors have nothing to disclose.

Acknowledgements

We thank Inez Puttemans, Marc De Paep, Sara De Pril, Wouter Depuydt, Astrid Hermans, Piet Kayenbergh, Gerrit Meulemans, Christophe Ulens, and Stijn Verstraeten for technical and administrative assistance.

Materials

Name | Company | Catalog Number | Comments
Grasping robot | GIBAS Universal Robots | UR-6-85-5-A | Robot arm equipped with a gripper
Carousel motor | Siboni | RD066/†20 MV6, 35x23 F02 | Motor to be implemented in a custom-made vertical carousel; allows the rotation of the carousel
Eye tracker | SR Research | EyeLink II | Infrared camera system sampling at 500 Hz
Filter | Wavetek Rockland | 852 | Electronic filters perform a variety of signal-processing functions, removing a signal's unwanted frequency components
Preamplifier | BAK ELECTRONICS, INC. | A-1 | The Model A-1 reduces input capacity and noise pickup and allows impedance testing of metal microelectrodes
Electrodes | FHC | UEWLEESE*N4G | Metal microelectrodes (* = impedance, to be chosen by the researcher)
CRT monitor | Vision Research Graphics | M21L-67S01 | Equipped with a fast-decay P46 phosphor operating at 120 Hz
Ferroelectric liquid crystal shutters | Display Tech | FLC Shutter Panel; LV2500P-OEM | Operate at 60 Hz in front of the monkey's eyes, synchronized to the vertical retrace of the monitor

References

  1. Gallese, V., Fadiga, L., Fogassi, L., Rizzolatti, G. Action recognition in the premotor cortex. Brain. 119 (2), 593-609 (1996).
  2. Fogassi, L., Gallese, V., Buccino, G., Craighero, L., Fadiga, L., Rizzolatti, G. Cortical mechanism for the visual guidance of hand grasping movements in the monkey: a reversible inactivation study. Brain. 124 (3), 571-586 (2001).
  3. Rizzolatti, G., Camarda, R., Fogassi, L., Gentilucci, M., Luppino, G., Matelli, M. Functional organization of inferior area 6 in the macaque monkey. II. Area F5 and the control of distal movements. Exp. Brain Res. 71 (3), 491-507 (1988).
  4. Mishkin, M., Ungerleider, L. G. Contribution of striate inputs to the visuospatial functions of parieto-preoccipital cortex in monkeys. Behav. Brain Res. 6 (1), 57-77 (1982).
  5. Goodale, M. A., Milner, A. D. Separate visual pathways for perception and action. Trends Neurosci. 15 (1), 20-25 (1992).
  6. Baumann, M. A., Fluet, M. C., Scherberger, H. Context-specific grasp movement representation in the macaque anterior intraparietal area. J. Neurosci. 29 (20), 6436-6438 (2009).
  7. Murata, A., Gallese, V., Luppino, G., Kaseda, M., Sakata, H. Selectivity for the shape, size, and orientation of objects for grasping neurons of monkey parietal area AIP. J. Neurophysiol. 83 (5), 2580-2601 (2000).
  8. Romero, M. C., Pani, P., Janssen, P. Coding of shape features in the macaque anterior intraparietal area. J. Neurosci. 34 (11), 4006-4021 (2014).
  9. Sakata, H., Taira, M., Kusonoki, M., Murata, A., Tanaka, Y., Tsutsui, K. Neural coding of 3D features of objects for hand action in the parietal cortex of the monkey. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 353 (1373), 1363-1373 (1998).
  10. Taira, M., Mine, S., Georgopoulos, A. P., Murata, A., Sakata, H. Parietal cortex neurons of the monkey related to the visual guidance of the hand movement. Exp Brain Res. 83 (1), 29-36 (1990).
  11. Janssen, P., Scherberger, H. Visual guidance in control of grasping. Annu. Rev. Neurosci. 8 (38), 69-86 (2015).
  12. Theys, T., Pani, P., van Loon, J., Goffin, J., Janssen, P. Selectivity for three-dimensional contours and surfaces in the anterior intraparietal area. J. Neurophysiol. 107 (3), 995-1008 (2012).
  13. Goffin, J., Janssen, P. Three-dimensional shape coding in grasping circuits: a comparison between the anterior intraparietal area and ventral premotor area F5a. J. Cogn. Neurosci. 25 (3), 352-364 (2013).
  14. Raos, V., Umiltá, M. A., Murata, A., Fogassi, L., Gallese, V. Functional properties of grasping-related neurons in the ventral premotor area F5 of the macaque monkey. J. Neurophysiol. 95 (2), 709-729 (2006).
  15. Umilta, M. A., Brochier, T., Spinks, R. L., Lemon, R. N. Simultaneous recording of macaque premotor and primary motor cortex neuronal populations reveals different functional contributions to visuomotor grasp. J. Neurophysiol. 98 (1), 488-501 (2007).
  16. Denys, K., et al. The processing of visual shape in the cerebral cortex of human and nonhuman primates: a functional magnetic resonance imaging study. J. Neurosci. 24 (10), 2551-2565 (2004).
  17. Theys, T., Pani, P., van Loon, J., Goffin, J., Janssen, P. Selectivity for three-dimensional shape and grasping-related activity in the macaque ventral premotor cortex. J. Neurosci. 32 (35), 12038-12050 (2012).
  18. Nelissen, K., Luppino, G., Vanduffel, W., Rizzolatti, G., Orban, G. A. Observing others: multiple action representation in the frontal lobe. Science. 310 (5746), 332-336 (2005).
  19. Janssen, P., Srivastava, S., Ombelet, S., Orban, G. A. Coding of shape and position in macaque lateral intraparietal area. J. Neurosci. 28 (26), 6679-6690 (2008).
  20. Romero, M. C., Janssen, P. Receptive field properties of neurons in the macaque anterior intraparietal area. J. Neurophysiol. 115 (3), 1542-1555 (2016).
  21. Decramer, T., Premereur, E., Theys, T., Janssen, P. Multi-electrode recordings in the macaque frontal cortex reveal common processing of eye-, arm- and hand movements. Program No. 495.15/GG14. Neuroscience Meeting Planner. Washington, DC: Society for Neuroscience. Online (2017).
  22. Pani, P., Theys, T., Romero, M. C., Janssen, P. Grasping execution and grasping observation activity of single neurons in macaque anterior intraparietal area. J. Cogn. Neurosci. 26 (10), 2342-2355 (2014).
  23. Turriziani, P., Smirni, D., Oliveri, M., Semenza, C., Cipolotti, L. The role of the prefrontal cortex in familiarity and recollection processes during verbal and non-verbal recognition memory. Neuroimage. 52 (1), 469-480 (2008).
  24. Tsao, D. Y., Schweers, N., Moeller, S., Freiwald, W. A. Patches of face-selective cortex in the macaque frontal lobe. Nat. Neurosci. 11 (8), 877-879 (2008).
