
Summary

Here, we present a simplified open-source hardware and software setup for investigating mouse spatial learning using virtual reality (VR). This system displays a virtual linear track to a head-restrained mouse running on a wheel by utilizing a network of microcontrollers and a single-board computer running an easy-to-use Python graphical software package.

Abstract

Head-restrained behavioral experiments in mice allow neuroscientists to observe neural circuit activity with high-resolution electrophysiological and optical imaging tools while delivering precise sensory stimuli to a behaving animal. Recently, human and rodent studies using virtual reality (VR) environments have shown VR to be an important tool for uncovering the neural mechanisms underlying spatial learning in the hippocampus and cortex, due to the extremely precise control over parameters such as spatial and contextual cues. Setting up virtual environments for rodent spatial behaviors can, however, be costly and require an extensive background in engineering and computer programming. Here, we present a simple yet powerful system based upon inexpensive, modular, open-source hardware and software that enables researchers to study spatial learning in head-restrained mice using a VR environment. This system uses coupled microcontrollers to measure locomotion and deliver behavioral stimuli while head-restrained mice run on a wheel in concert with a virtual linear track environment rendered by a graphical software package running on a single-board computer. The emphasis on distributed processing allows researchers to design flexible, modular systems to elicit and measure complex spatial behaviors in mice in order to determine the connection between neural circuit activity and spatial learning in the mammalian brain.

Introduction

Spatial navigation is an ethologically important behavior by which animals encode the features of new locations into a cognitive map, which is used for finding areas of possible reward and avoiding areas of potential danger. Inextricably linked with memory, the cognitive processes underlying spatial navigation share a neural substrate in the hippocampus1 and cortex, where neural circuits in these areas integrate incoming information and form cognitive maps of environments and events for later recall2. While the discovery of place cells in the hippocampus3,4 and grid cells in the entorhinal cortex5 has shed light on how the cognitive map within the hippocampus is formed, many questions remain about how specific neural subtypes, microcircuits, and individual subregions of the hippocampus (the dentate gyrus and cornu ammonis areas CA3-1) interact and participate in spatial memory formation and recall.

In vivo two-photon imaging has been a useful tool in uncovering cellular and population dynamics in sensory neurophysiology6,7; however, the typical necessity for head restraint limits the utility of this method for examining mammalian spatial behavior. The advent of virtual reality (VR)8 has addressed this shortcoming by presenting immersive and realistic visuospatial environments while head-restrained mice run on a ball or treadmill to study spatial and contextual encoding in the hippocampus8,9,10 and cortex11. Furthermore, the use of VR environments with behaving mice has allowed neuroscience researchers to dissect the components of spatial behavior by precisely controlling the elements of the VR environment12 (e.g., visual flow, contextual modulation) in ways not possible in real-world experiments of spatial learning, such as the Morris water maze, Barnes maze, or hole board tasks.

Visual VR environments are typically rendered on the graphical processing unit (GPU) of a computer, which handles the load of rapidly computing the thousands of polygons necessary to model a moving 3D environment on a screen in real time. The large processing requirements generally require the use of a separate PC with a GPU that renders the visual environment to a monitor, multiple screens13, or a projector14 as the movement is recorded from a treadmill, wheel, or foam ball under the animal. The resulting apparatus for controlling, rendering, and projecting the VR environment is, therefore, relatively expensive, bulky, and cumbersome. Furthermore, many such environments in the literature have been implemented using proprietary software that is both costly and can only be run on a dedicated PC.

For these reasons, we have designed an open-source VR system to study spatial learning behaviors in head-restrained mice using a Raspberry Pi single-board computer. This Linux computer is both small and inexpensive yet contains a GPU chip for 3D rendering, allowing the integration of VR environments with the display or behavioral apparatus in varied individual setups. Furthermore, we have developed a graphical software package written in Python, "HallPassVR", which utilizes the single-board computer to render a simple visuospatial environment, a virtual linear track or hallway, by recombining custom visual features selected using a graphical user interface (GUI). This is combined with microcontroller subsystems (e.g., ESP32 or Arduino) to measure locomotion and coordinate behavior, such as by the delivery of other modalities of sensory stimuli or rewards to facilitate reinforcement learning. This system provides an inexpensive, flexible, and easy-to-use alternative method for delivering visuospatial VR environments to head-restrained mice during two-photon imaging (or other techniques requiring head fixation) for studying the neural circuits underlying spatial learning behavior.

Protocol

All procedures in this protocol were approved by the Institutional Animal Care and Use Committee of the New York State Psychiatric Institute.

NOTE: A single-board computer is used to display a VR visual environment coordinated with the running of a head-restrained mouse on a wheel. Movement information is received as serial input from an ESP32 microcontroller reading a rotary encoder coupled to the wheel axle. The VR environment is rendered using OpenGL hardware acceleration on the Raspberry Pi GPU, which utilizes the pi3d Python 3D package for Raspberry Pi. The rendered environment is then output via a projector onto a compact wraparound parabolic screen centered on the head-restrained mouse's visual field15,16, while the behavior (e.g., licking in response to spatial rewards) is measured by a second behavior ESP32 microcontroller. The graphical software package enables the creation of virtual linear track environments consisting of repeated patterns of visual stimuli along a virtual corridor or hallway with a graphical user interface (GUI). This design is easily parameterized, thus allowing the creation of complex experiments aimed at understanding how the brain encodes places and visual cues during spatial learning (see section 4). Designs for the custom hardware components necessary for this system (i.e., the running wheel, projection screen, and head-restraint apparatus) are deposited in a public GitHub repository (https://github.com/GergelyTuri/HallPassVR). It is recommended to read the documentation of that repository along with this protocol, as the site will be updated with future enhancements of the system.
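To make this data path concrete, the following minimal Python sketch (not part of the HallPassVR package) shows how a single-board computer can consume movement updates from the rotary microcontroller over a serial port and fold them into a position on a looping virtual track. The port name, baud rate, and the assumption that the firmware sends newline-terminated signed count deltas are illustrative only; the actual message format is defined by the RotaryEncoder_Esp32_VR.ino firmware.

```python
# Minimal sketch of the movement data path: encoder counts arrive over a
# serial port and are mapped to a position on a looping 2 m virtual track.
# ASSUMPTION: the firmware sends newline-terminated signed count deltas;
# the real format is defined by RotaryEncoder_Esp32_VR.ino.
import serial  # pyserial: sudo pip3 install pyserial

TRACK_LENGTH_MM = 2000.0   # virtual linear track length (2 m)
MM_PER_COUNT = 0.468       # depends on wheel diameter and encoder (see step 3.5)

port = serial.Serial("/dev/ttyS0", 115200, timeout=1)  # GPIO serial (or /dev/ttyUSB0)
position_mm = 0.0

while True:
    line = port.readline().decode(errors="ignore").strip()
    if not line:
        continue                   # timeout or empty line
    try:
        delta_counts = int(line)   # signed counts since the last message
    except ValueError:
        continue                   # skip malformed lines
    position_mm = (position_mm + delta_counts * MM_PER_COUNT) % TRACK_LENGTH_MM
    print(f"track position: {position_mm:7.1f} mm")  # would drive the pi3d camera
```

In the actual system, this position value is what translates the camera along the rendered corridor, and the wrap-around (modulo) is what produces the lap reset described in step 3.5.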

1. Hardware setup: Construction of the running wheel, projection screen, and head-fixation apparatus

NOTE: The custom components for these setups can be easily manufactured if the user has access to 3D-printing and laser-cutting equipment or may be outsourced to professional manufacturing or 3D prototyping services (e.g., eMachineShop). All the design files are provided as .STL 3D files or .DXF AutoCAD files.

  1. Running wheel and behavioral setup (Figure 1)
    NOTE: The wheel consists of a clear acrylic cylinder (6 in diameter, 3 in width, 1/8 in thickness) centered on an axle suspended from laser-cut acrylic mounts via ball bearings. The wheel assembly is then mounted to a lightweight aluminum frame (t-slotted) and securely fastened to an optical breadboard (Figure 1C-E).
    1. Laser-cut the sides of the wheel and axle mounts from a 1/4 in acrylic sheet, and attach the wheel sides to the acrylic cylinder with acrylic cement. Screw the axle flange into the center of the wheel side piece.
    2. Insert the axle into the wheel center flange, snap the ball bearings into the axle mounts, and attach them to the vertical aluminum support bar.
    3. Insert the wheel axle into the mounted ball bearings, leaving 0.5-1 inch of the axle past the bearings for the attachment of the rotary encoder.
    4. Attach the rotary encoder mount to the end of the axle opposite the wheel, and insert the rotary encoder; then, use the shaft coupler to couple the wheel axle to the rotary encoder shaft.
    5. Attach the lick port to the flex arm, and then affix it to the aluminum wheel frame with t-slot nuts. Use 1/16 in tubing to connect the lick port to the solenoid valve and the valve to the water reservoir.
      NOTE: The lick port must be made of metal with a wire soldered to attach it to the capacitive touch sensing pins of the behavior ESP32.
  2. Projection screen
    NOTE: The VR screen is a small parabolic rear-projection screen (canvas size: 54 cm x 21.5 cm) based on a design developed in Christopher Harvey's laboratory15,16. The projection angle (keystone) of the LED projector used is different from that of the laser projector used previously; thus, the original design is slightly modified by mounting the unit under the screen and simplifying the mirror system (Figure 1A, B). Reading the Harvey lab's documentation along with ours is highly recommended to tailor the VR environment to the user's needs15.
    1. Laser-cut the projection screen sides from 1/4 in black matte acrylic sheets. Laser-cut the back projection mirror from 1/4 in mirrored acrylic.
    2. Assemble the projection screen frame with the aluminum bars and laser-cut acrylic panels.
    3. Insert the translucent projection screen material into the parabolic slot in the frame. Insert the rear projection mirror into the slot in the back of the projection screen frame.
    4. Place an LED projector on the bottom mounting plate inside the projection screen frame. Align the projector with mounting bolts to optimize the positioning of the projected image on the parabolic rear projection screen.
    5. Seal the projector box unit to prevent light contamination of the optical sensors if necessary.
  3. Head-restraint apparatus
    NOTE: This head-restraint apparatus design consists of two interlocking 3D-printed manifolds for securing a metal head post (Figure 1E, F).
    1. Using a high-resolution SLA (resin) 3D printer, print the head post holding arms.
      NOTE: Resin-printed plastic is able to provide stable head fixation for behavior experiments; however, to achieve maximum stability for sensitive applications like single-cell recording or two-photon imaging, it is recommended to use machined metal parts (e.g., eMachineShop).
    2. Install the 3D-printed head post holder onto a dual-axis goniometer with optical mounting posts so that the animal's head can be tilted to level the preparation.
      NOTE: This feature is indispensable for long-term in vivo imaging experiments when finding the same cell population in subsequent imaging sessions is required. Otherwise this feature can be omitted to reduce the cost of the setup.
    3. Fabricate the head posts.
      NOTE: Two types of head posts with different complexity (and price) are deposited in the link provided in the Table of Materials along with these instructions.
      1. Depending on the experiment type, decide which head post to implement. The head bars are made of stainless steel, and their manufacture can generally be outsourced to a local machine shop or online service (e.g., eMachineShop).

2. Setup of the electronics hardware/software (single-board computer, ESP32 microcontrollers, Figure 2)

  1. Configure the single-board computer.
    NOTE: The single-board computer included in the Table of Materials (Raspberry Pi 4B) is optimal for this setup because it has an onboard GPU to facilitate VR environment rendering and two HDMI ports for experiment control/monitoring and VR projection. Other single-board computers with these characteristics may potentially be substituted, but some of the following instructions may be specific to Raspberry Pi.
    1. Download the single-board computer imager application to the PC, and install the OS (currently Raspberry Pi OS r.2021-05-07) on the microSD card (16+ GB). Insert the card, and boot the single-board computer.
    2. Configure the single-board computer for the pi3d Python 3D library: (menu bar) Preferences > Raspberry Pi Configuration.
      1. Click on Display > Screen Blanking > Disable.
      2. Click on Interfaces > Serial Port > Enable.
      3. Click on Performance > GPU Memory > 256 (MB).
    3. Upgrade the Python image library package for pi3d: (terminal)> sudo pip3 install pillow --upgrade.
    4. Install the pi3d Python 3D package for the single board computer: (terminal)> sudo pip3 install pi3d.
    5. Increase the HDMI output level for the projector: (terminal)> sudo nano /boot/config.txt, uncomment config_hdmi_boost=4, save, and reboot.
    6. Download and install the integrated development environment (IDE) from arduino.cc/en/software (e.g., arduino-1.8.19-linuxarm.tar.gz), which is needed to load the code onto the rotary encoder and the behavior ESP32 microcontrollers.
    7. Install ESP32 microcontroller support on the IDE:
      1. Click on File > Preferences > Additional Board Manager URLs = https://raw.githubusercontent.com/espressif/arduino-esp32/gh-pages/package_esp32_index.json
      2. Click on Tools > Boards > Boards Manager > ESP32 (by Espressif). Install v.2.0.0 (upload currently fails on v2.0.4).
    8. Download and install the Processing IDE from https://github.com/processing/processing4/releases (e.g., processing-4.0.1-linux-arm32.tgz), which is necessary for the recording and online plotting of the mouse behavior during VR.
      NOTE: The Arduino and Processing environments may be run on a separate PC from the VR single-board computer if desired.
  2. Set up the rotary encoder ESP32 connections.
    NOTE: The rotary encoder coupled to the wheel axle measures wheel rotation as the mouse locomotes, and the rotation is counted by an ESP32 microcontroller. The position changes are then sent to the single-board computer GPIO serial port to control movement through the virtual environment using the graphical software package, as well as to the behavior ESP32 to control the reward zones (Figure 2). An illustration of the quadrature decoding logic is given after this subsection.
    1. Connect the wires between the rotary encoder component and the rotary ESP32. Rotary encoders generally have four wires: +, GND, A, and B (two digital lines for quadrature encoders). Connect these via jumper wires to the ESP32 3.3 V, GND, and pins 25 and 26, respectively (in the case of the attached code).
    2. Connect the serial RX/TX wires between the rotary ESP32 and the behavior ESP32. Make a simple two-wire connection between the rotary ESP32 Serial0 RX/TX (receive/transmit) and the Serial2 port of the behavior ESP32 (TX/RX, pins 17, 16; see Serial2 port on the right of OMwSmall PCB). This will carry movement information from the rotary encoder to the behavior setup for spatial zones such as reward zones.
    3. Connect the serial RX/TX wires between the rotary ESP32 and the single-board computer GPIO (or direct USB connection). Make a two-wire connection between the single-board computer GPIO pins 14, 15 (RX/TX) and the rotary ESP32 Serial2 (TX/RX, pins 17, 16). This will carry movement information from the rotary encoder to the graphical software package running on the single-board computer.
      NOTE: This step is only necessary if the rotary ESP32 is not connected via USB (i.e., it uses a GPIO serial connection at "/dev/ttyS0"); if a USB connection is used instead, the HallPassVR_wired.py code must be modified to use "/dev/ttyUSB0". This hardwired connection will be replaced with a wireless Bluetooth connection in future versions.
    4. Plug the rotary ESP32 USB into the single-board computer USB (or other PC running the IDE) to upload the initial rotary encoder code.
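For reference, a quadrature encoder reports movement as a 2-bit Gray code on the A and B lines, and direction is recovered from the transition between successive states. This decoding runs on the rotary ESP32 in the provided RotaryEncoder_Esp32_VR.ino firmware; the Python sketch below is only an illustration of the standard transition-table logic, not the firmware itself, and the sign convention depends on how A and B are wired.

```python
# Illustration of 4x quadrature decoding. Each (previous, current) pair of
# 2-bit A/B states maps to -1, 0, or +1 counts; invalid (skipped) transitions
# count as 0. The real decoding runs on the rotary ESP32.
QUAD_TABLE = [ 0, +1, -1,  0,   # prev state 0 (A=0, B=0)
              -1,  0,  0, +1,   # prev state 1 (A=0, B=1)
              +1,  0,  0, -1,   # prev state 2 (A=1, B=0)
               0, -1, +1,  0]   # prev state 3 (A=1, B=1)

class QuadratureDecoder:
    def __init__(self):
        self.prev_state = 0
        self.count = 0

    def update(self, a: int, b: int) -> int:
        """Feed the current A/B logic levels; return the running count."""
        state = (a << 1) | b
        self.count += QUAD_TABLE[(self.prev_state << 2) | state]
        self.prev_state = state
        return self.count

# Example: one full state cycle (00 -> 01 -> 11 -> 10 -> 00) = +4 counts,
# i.e., a 256-position encoder yields 1,024 counts per revolution at 4x.
dec = QuadratureDecoder()
for a, b in [(0, 1), (1, 1), (1, 0), (0, 0)]:
    pos = dec.update(a, b)
print(pos)  # 4
```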
  3. Set up the behavior ESP32 connections with the behavioral hardware (via OpenMaze PCB).
    NOTE: The behavior ESP32 microcontroller will control all the non-VR animal interactions (delivering non-VR stimuli and rewards, detecting mouse licks), which are connected through a general PCB "breakout board" for the ESP32, "OMwSmall", designs of which are available through the OpenMaze website (www.openmaze.org). The PCB contains the electronic components necessary for driving the electromechanical components, such as the solenoid valves used to deliver liquid rewards.
    1. Connect the 12 V liquid solenoid valve to the ULN2803 IC output on the far left of the OMwSmall PCB (pin 12 in the example setup and code). This IC gates 12 V power to the reward solenoid valve, controlled by a GPIO output on the behavior ESP32 microcontroller.
    2. Connect the lick port to the ESP32 touch input (e.g., T0, GPIO4 in the example code). The ESP32 has built-in capacitive touch sensing on specific pins, which the behavior ESP32 code uses to detect the mouse's licking of the attached metal lick port during the VR behavior.
    3. Connect the serial RX/TX wires between the behavior ESP32 Serial2 (pins 16, 17) and rotary encoder ESP32 Serial0 (see step 2.2.2).
    4. Plug the USB into the single-board computer's USB port (or other PC) to upload new programs to the behavior ESP32 for different experimental paradigms (e.g., number/location of reward zones) and to capture behavior data using the included Processing sketch.
    5. Plug the 12 V DC wall adapter into the 2.1 mm barrel jack connector on the behavior ESP32 OMwSmall PCB to provide the power for the reward solenoid valve.
    6. Plug the single-board computer's HDMI #2 output into the projector HDMI port; this will carry the VR environment rendered by the single-board computer GPU to the projection screen.
    7. (optional) Connect the synchronization wire (pin 26) to a neural imaging or electrophysiological recording setup. A 3.3 V transistor-transistor-logic (TTL) signal will be sent every 5 s to align the systems with near-millisecond precision.
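Offline, these 5 s pulses allow the behavior log and the imaging or electrophysiology record to be placed on a common clock. The provided software does not include such an alignment script; as a sketch of one common approach (assuming both systems logged the same pulse train; the timestamps below are hypothetical), a linear fit between the two sets of pulse times corrects for both offset and clock drift.

```python
# Offline clock alignment using the shared 5 s sync pulses (illustrative only;
# not part of the HallPassVR package). ASSUMPTION: both systems recorded the
# same train of pulses; the timestamps below are hypothetical.
import numpy as np

behavior_pulses = np.array([0.0, 5.0, 10.0, 15.0, 20.0])           # s, behavior clock
imaging_pulses = np.array([2.013, 7.012, 12.014, 17.011, 22.013])  # s, imaging clock

# Fit imaging_time ~= slope * behavior_time + offset
slope, offset = np.polyfit(behavior_pulses, imaging_pulses, 1)

def behavior_to_imaging(t_behavior):
    """Map a behavioral event time (s) onto the imaging clock."""
    return slope * t_behavior + offset

print(behavior_to_imaging(12.5))  # e.g., a lick at 12.5 s on the behavior clock
```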
  4. Set up the software: Load the firmware/software onto the rotary encoder ESP32 (Figure 2B) and behavior ESP32 (Figure 2E) using the IDE, and download the VR Python software onto the single-board computer. See https://github.com/GergelyTuri/HallPassVR/software.
    1. Plug the rotary encoder ESP32 into the single-board computer's USB port first; this will automatically be named "/dev/ttyUSB0" by the OS.
    2. Load the rotary encoder code: Open the file RotaryEncoder_Esp32_VR.ino in the IDE, and then select the ESP32 under Tools > Boards > ESP32 Dev Module. Select the ESP32 port by clicking Tools > Port > /dev/ttyUSB0, and then click on Upload.
    3. Plug the behavior ESP32 into the single-board computer's USB port next; this will be named "/dev/ttyUSB1" by the OS.
    4. Load the behavior code wheel_VR_behavior.ino onto the behavior ESP32 (in the IDE, with ESP32 Dev Module already selected): click on Tools > Port > /dev/ttyUSB1, and then click on Upload.
    5. Test the serial connections by selecting the serial port for each ESP32 in the IDE (Tools > Port > /dev/ttyUSB0, or /dev/ttyUSB1) and then clicking on Tools > Serial Monitor (baud rate: 115,200) to observe the serial output from the rotary board (USB0) or the behavior board (USB1). Rotate the wheel to see a raw movement output from the rotary ESP32 on USB0 or formatted movement output from the behavior ESP32 on USB1.
    6. Download the graphical software package Python code from https://github.com/GergelyTuri/HallPassVR/tree/master/software/HallPassVR (to /home/pi/Documents). This folder contains all the files necessary for running the graphical software package if the pi3d Python3 package was installed correctly earlier (step 2.1).

3. Running and testing the graphical software package

NOTE: Run the graphical software package GUI to initiate a VR linear track environment, calibrate the distances on the VR software and behavior ESP32 code, and test the acquisition and online plotting of the mouse's running and licking behavior with the included Processing language sketch.

  1. Open the terminal window on the single-board computer, and navigate to the HallPassVR folder: (terminal)> cd /home/pi/Documents/HallPassVR/HallPassVR_Wired
  2. Run the VR GUI: (terminal)> python3 HallPassVR_GUI.py (the GUI window will open, Figure 3A).
  3. Graphical software GUI
    1. Select and add four elements (images) from the listbox (or select the pre-stored pattern below, and then click on Upload) for each of the three patterns along the track, and then click on Generate.
      NOTE: New image files (.jpeg) can be placed in the folder HallPassVR/HallPassVR_wired/images/ELEMENTS before the GUI is run.
    2. Select floor and ceiling images from the dropdown menus, and set the length of the track to 2 m for this example code (it must equal the trackLength in millimeters [mm] in the behavior ESP32 code and Processing code).
    3. Name this pattern if desired (it will be stored in HallPassVR_wired/images/PATH_HIST).
    4. Click the Start button (wait until the VR window starts before clicking elsewhere). The VR environment will appear on Screen #2 (projection screen, Figure 3B, C).
  4. Run the Processing sketch to acquire and plot the behavioral data/movement.
    1. Open VRwheel_RecGraphSerialTxt.pde in the Processing IDE.
    2. Change the animal = "yourMouseNumber"; variable, and set sessionMinutes equal to the length of the behavioral session in minutes.
    3. Click on the Run button on the Processing IDE.
    4. Check the Processing plot window, which should show the current mouse position on the virtual linear track as the wheel rotates, along with the reward zones and running histograms of the licks, laps, and rewards updated every 30 s (Figure 3D). Advance the running wheel by hand to simulate the mouse running for testing, or use a test mouse for the initial setup.
    5. Click on the plot window, and press the q key on the keyboard to stop acquiring behavioral data. A text file of the behavioral events and times (usually <2 MB in size per session) and an image of the final plot window (.png) are saved when sessionMinutes has elapsed or the user presses the q key to quit.
      NOTE: Due to the small size of the output .txt files, it is estimated that at least several thousand behavior recordings can be stored on the single-board computer's SD card. Data files can be saved to a thumb drive for subsequent analysis, or if connected to a local network, the data can be managed remotely.
  5. Calibrate the behavior track length with the VR track length.
    1. Advance the wheel by hand while observing the VR corridor and mouse position (on the Processing plot). If the VR corridor ends before/after the mouse reaches the end of the behavior plot, increase/decrease the VR track length incrementally (HallPassVR_wired.py, corridor_length_default, in centimeters [cm]) until the track resets simultaneously in the two systems.
      NOTE: The code is currently calibrated for a 6 inch diameter running wheel using a 256-position quadrature rotary encoder, so the user may have to alter the VR (HallPassVR_wired.py, corridor_length_default, in centimeters [cm]) and behavior code (wheel_VR_behavior.ino, trackLength, in millimeters [mm]) to account for other configurations. The behavioral position is, however, reset on each VR lap to maintain correspondence between the systems.
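As a worked example of this calibration, the expected counts per lap can be computed directly from the wheel geometry and encoder resolution. Whether the firmware counts one or all four quadrature edges per encoder position is an assumption here; verify the actual factor against wheel_VR_behavior.ino before relying on these numbers.

```python
# Worked calibration example for a 6 in (152.4 mm) diameter wheel and a
# 256-position quadrature encoder. ASSUMPTION: 4x edge counting
# (1,024 counts/rev); check the firmware for the actual factor.
import math

wheel_diameter_mm = 6 * 25.4                      # 152.4 mm
circumference_mm = math.pi * wheel_diameter_mm    # ~478.8 mm per revolution
counts_per_rev = 256 * 4                          # 4x quadrature decoding (assumed)

mm_per_count = circumference_mm / counts_per_rev
counts_per_lap = 2000 / mm_per_count              # 2 m virtual track

print(f"{mm_per_count:.3f} mm per count")          # ~0.468
print(f"{counts_per_lap:.0f} counts per 2 m lap")  # ~4277
```

If the VR corridor and the behavior plot reset at different points, this per-count scale (or the corridor_length_default and trackLength values) is the first thing to recheck.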

4. Mouse training and spatial learning behavior

NOTE: The mice are implanted for head fixation, habituated to head restraint, and then progressively trained to run on the wheel and lick consistently for liquid rewards ("random foraging"). Mice that achieve consistent running and licking are then trained on a spatial hidden reward task using the VR environment, in which a single reward zone is presented following a visual cue on the virtual linear track. Spatial learning is then measured as increased licking selectivity for positions immediately prior to the reward zone.

  1. Head post implantation surgery: This procedure is described in detail elsewhere in this journal and in others, so refer to this literature for specific instructions7,17,18,19,20,21.
  2. Water schedule
    1. Perform water restriction 24 hours prior to first handling (see below), and allow ad libitum water consumption following each session of habituation or head-restrained behavior. Decrease the time of water availability gradually over three days during habituation to around 5 minutes, and adjust the amount for individual mice such that their body weight does not fall below 80% of their pre-restriction weight. Monitor the weight of each animal daily and also observe the condition of each mouse for signs of dehydration22. Mice that are not able to maintain 80% of their pre-restriction body weight or appear dehydrated should be removed from the study and given free water availability.
      NOTE: Water restriction is necessary to motivate the mice to run on the wheel using liquid rewards, as well as to use spatial licking as an indication of learned locations along the track. Institutional guidelines may differ on specific instructions for this procedure, so the user must consult their individual institutional animal care committees to assure animal health and welfare during water restriction.
  3. Handling: Handle the implanted mice daily to habituate them to human contact, following which limited ad libitum water may be administered as a reinforcement (1-5 min/day, 2 days to 1 week).
  4. Habituation to the head restraint
    1. Habituate the mice to the head restraint for increasing amounts of time by placing them in the head restraint apparatus while rewarding them with occasional drops of water to reduce the stress of head fixation.
    2. Start with 5 min of head fixation, and increase the duration by 5 min increments daily until the mice are able to tolerate fixation for up to 30 min. Remove the mice from the fixation apparatus if they appear to be struggling or moving very little. However, mice generally begin running on the wheel spontaneously within several sessions, which means they are ready for the next stage of training.
      NOTE: Mice that repeatedly struggle under head restraint or do not run and lick for rewards should be regressed to earlier stages of training and removed from the study if they fail to progress for three such remedial cycles (see Table 1).
  5. Run/lick training (random foraging)
    NOTE: To perform the spatial learning task in the VR environment, the mice must first learn to run on the wheel and lick consistently for occasional rewards. The progression in the operant behavioral parameters is controlled via the behavior ESP32 microcontroller.
    1. Random foraging with non-operant rewards
      1. Run the graphical software GUI program with a path of arbitrary visual elements (user choice, see step 3.3).
      2. Upload the behavior program to the behavior ESP32 with multiple non-operant rewards (code variables: isOperant=0, numRew=4, isRandRew=1) to condition the mice to run and lick. Run the mice in 20-30 min sessions until the mice run for at least 20 laps per session and lick for rewards presented in random locations (one to four sessions).
    2. Random foraging with operant rewards on alternate laps
      1. Upload the behavior program with altOpt=1 (alternating operant/non-operant laps), and train the mice until they lick for both non-operant and operant reward zones (one to four sessions).
    3. Fully operant random foraging
      1. Upload the behavior program with four operant random reward zones (behavior ESP32 code variables: isOperant=1, numRew=4, isRandRew=1). By the end of this training step, the mice should be running consistently and performing test licks over the entire track length (one to four sessions; Figure 4A).
  6. Spatial learning
    NOTE: Perform a spatial learning experiment with a single hidden reward zone some distance away from a single visual cue by selecting a 2 m long hallway with dark panels along the track and a single high-contrast visual stimulus panel in the middle as a visual cue (0.9-1.1 m position), analogous to recent experiments with spatial olfactory cues20. Mice are required to lick at a reward zone (at a 1.5-1.8 m position) located a distance away from the visual cue in the virtual linear track environment.
    1. Run the graphical software program with a path of a dark hallway with a single visual cue in the center (e.g., chessboard, see step 3.3, Figure 3A).
    2. Upload the behavior program with a single hidden reward zone to the behavior ESP32 (behavior ESP32 code variables: isOperant=1, isRandRew=0, numRew=1, rewPosArr[]= {1500}).
    3. Gently place the mouse in the head-fixation apparatus, adjust the lick spout to a location just anterior to the mouse's mouth, and position the mouse wheel into the center of the projection screen zone. Ensure that the head of the mouse is ~12-15 cm away from the screen after the final adjustments.
    4. Set the animal's name in the Processing sketch, and then press run in the Processing IDE to start acquiring and plotting the behavioral data (see step 3.4).
    5. Run the mouse for 30 min sessions with a single hidden reward zone and single visual cue VR hallway.
    6. Offline: Download the .txt data file from the Processing sketch folder, and analyze the spatial licking behavior (e.g., in MATLAB with the included files procVRbehav.m and vrLickByLap.m; an illustrative Python sketch follows this section).
      NOTE: The mice should initially perform test licks over the entire virtual track ("random foraging") and then begin to lick selectively only near the reward location following the VR visual cue (Figure 4).
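The included MATLAB files (procVRbehav.m and vrLickByLap.m) perform the protocol's spatial licking analysis; the Python sketch below only illustrates the same idea of quantifying spatial lick selectivity: bin the licks by track position and measure the fraction that falls in an anticipatory window just before the reward zone. The one-position-per-lick input format used here is hypothetical; the parsing must be adapted to the actual .txt layout written by the Processing sketch.

```python
# Illustrative spatial lick-selectivity analysis (the protocol's own analysis
# uses the included MATLAB files). ASSUMPTION: the input is a list of lick
# positions in mm; adapt the parsing to the actual Processing .txt output.
import numpy as np

def lick_selectivity(lick_positions_mm, track_mm=2000, reward_start_mm=1500,
                     anticip_mm=250, n_bins=40):
    """Histogram licks by position; return the histogram, bin edges, and the
    fraction of licks in the anticipatory window before the reward zone."""
    licks = np.asarray(lick_positions_mm) % track_mm
    hist, edges = np.histogram(licks, bins=n_bins, range=(0, track_mm))
    in_window = (licks >= reward_start_mm - anticip_mm) & (licks < reward_start_mm)
    frac = in_window.sum() / max(len(licks), 1)
    return hist, edges, frac

# Synthetic example: a trained mouse licks mostly just before the 1,500 mm
# reward zone, with a few exploratory licks elsewhere.
rng = np.random.default_rng(0)
licks = np.concatenate([rng.uniform(0, 2000, 20),       # exploratory licks
                        rng.uniform(1300, 1500, 80)])   # anticipatory licks
_, _, frac = lick_selectivity(licks)
print(f"fraction of licks in pre-reward window: {frac:.2f}")  # ~0.8
```

An increase in this fraction across sessions provides a simple scalar read-out of the spatial learning shown in Figure 4.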

Results

This open-source virtual reality behavioral setup allowed us to quantify licking behavior as a read-out of spatial learning as head-restrained mice navigated a virtual linear track environment. Seven C57BL/6 mice of both sexes at 4 months of age were placed on a restricted water schedule and first trained to lick continuously at low levels while running on the wheel for random spatial rewards ("random foraging") without VR. Although their performance was initially affected when moved to the VR projection screen s...

Discussion

This open-source VR system for mice will only function if the serial connections are made properly between the rotary and behavior ESP32 microcontrollers and the single-board computer (step 2), which can be confirmed using the IDE serial monitor (step 2.4.5). For successful behavioral results from this protocol (step 4), the mice must be habituated to the apparatus and be comfortable running on the wheel for liquid rewards (steps 4.3-4.5). This requires sufficient (but not excessive) water restriction, as mice given ...

Disclosures

Clay Lacefield is the founder and maintainer of OpenMaze.org, which provides designs for the OMwSmall PCB used in this protocol free for download.

Acknowledgements

We would like to thank Noah Pettit from the Harvey lab for the discussion and suggestions while developing the protocol in this manuscript. This work was supported by a BBRF Young Investigator Award and NIMH 1R21MH122965 (G.F.T.), in addition to NINDS R56NS128177 (R.H., C.L.) and NIMH R01MH068542 (R.H.).

Materials

| Name | Company | Catalog Number | Comments |
|---|---|---|---|
| 1/4" diam aluminum rod | McMaster-Carr | 9062K26 | 3" in length for wheel axle |
| 1/4"-20 cap screws, 3/4" long (x2) | Amazon.com | B09ZNMR41V | for affixing head post holders to optical posts |
| 2"x7" T-slotted aluminum bar (x2) | 8020.net | 1020 | wheel/animal mounting frame |
| 6" diam, 3" wide acrylic cylinder (1/8" thick) | Canal Plastics | 33210090702 | running wheel (custom width cut at canalplastics.com) |
| 8-32 x 1/2" socket head screws | McMaster-Carr | 92196A194 | fastening head post holder to optical post |
| Adjustable arm (14") | Amazon.com | B087BZGKSL | to hold/adjust lick spout |
| Analysis code (MATLAB) | custom written | | file at github.com/GergelyTuri/HallPassVR/software/Analysis code |
| Axle mounting flange, 1/4" ID | Pololu | 1993 | for mounting wheel to axle |
| Ball bearing (5/8" OD, 1/4" ID, x2) | McMaster-Carr | 57155K324 | for mounting wheel axle to frame |
| Behavior ESP32 code | custom written | | file at github.com/GergelyTuri/HallPassVR/software/Arduino code/Behavior board |
| Black opaque matte acrylic sheets (1/4" thick) | Canal Plastics | 32918353422 | laser cut file at github.com/GergelyTuri/HallPassVR/hardware/VR screen assembly |
| Clear acrylic sheet (1/4" thick) | Canal Plastics | 32920770574 | laser cut file at github.com/GergelyTuri/HallPassVR/hardware/VR wheel assembly |
| ESP32 devKitC v4 (x2) | Amazon.com | B086YS4Z3F | microcontroller for behavior and rotary encoder |
| ESP32 shield | OpenMaze.org | OMwSmall | description at www.openmaze.org (https://claylacefield.wixsite.com/openmazehome/copy-of-om2shield); ZIP gerber files at https://github.com/claylacefield/OpenMaze/tree/master/OM_PCBs |
| Fasteners and brackets | 8020.net | 4138, 3382, 3280 | for wheel frame mounts |
| Goniometers | Edmund Optics | 66-526, 66-527 | optional for behavior; fine tuning of head angle for imaging |
| HallPassVR python code | custom written | | file at github.com/GergelyTuri/HallPassVR/software/HallPassVR |
| Head post holder | custom design | | 3D design file at github.com/GergelyTuri/HallPassVR/hardware/VR head mount/Headpost Clamp |
| LED projector | Texas Instruments | DLPDLCR230NPEVM | or other small LED projector |
| Lick spout | VWR | 20068-638 | (or ~16 G metal hypodermic tubing) |
| M2.5 x 6 set screws | McMaster-Carr | 92015A097 | securing head post |
| Matte white diffusion paper | Amazon.com | | screen material |
| Metal headposts | custom design | | 3D design file at github.com/GergelyTuri/HallPassVR/hardware/VR head mount/head post designs |
| Miscellaneous tubing and tubing adapters (1/16" ID) | | | for constructing the water line |
| Optical breadboard | Thorlabs | | as per user's requirements |
| Optical posts, 1/2" diam (x2) | Thorlabs | TR4 | for head fixation setup |
| Processing code | custom written | | file at github.com/GergelyTuri/HallPassVR/software/Processing code |
| Raspberry Pi 4B | raspberry.com, adafruit.com | | single-board computer for rendering of HallPassVR environment |
| Right angle clamp | Thorlabs | RA90 | for head fixation setup |
| Rotary encoder (quadrature, 256 step) | DigiKey | ENS1J-B28-L00256L | to measure wheel rotation |
| Rotary encoder ESP32 code | custom written | | file at github.com/GergelyTuri/HallPassVR/software/Arduino code/Rotary encoder |
| SCIGRIP 10315 acrylic cement | Amazon.com | | |
| Shaft coupler | McMaster-Carr | 9861T426 | to couple rotary encoder shaft with axle |
| Silver mirror acrylic sheets | Canal Plastics | 32913817934 | laser cut file at github.com/GergelyTuri/HallPassVR/hardware/VR screen assembly |
| Solenoid valve | Parker | 003-0137-900 | to administer water rewards |

References

  1. Lisman, J., et al. Viewpoints: How the hippocampus contributes to memory, navigation and cognition. Nature Neuroscience. 20 (11), 1434-1447 (2017).
  2. Buzsaki, G., Moser, E. I. Memory, navigation and theta rhythm in the hippocampal-entorhinal system. Nature Neuroscience. 16 (2), 130-138 (2013).
  3. O'Keefe, J., Dostrovsky, J. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Research. 34 (1), 171-175 (1971).
  4. O'Keefe, J. Place units in the hippocampus of the freely moving rat. Experimental Neurology. 51 (1), 78-109 (1976).
  5. Fyhn, M., Molden, S., Witter, M. P., Moser, E. I., Moser, M. B. Spatial representation in the entorhinal cortex. Science. 305 (5688), 1258-1264 (2004).
  6. Letzkus, J. J., et al. A disinhibitory microcircuit for associative fear learning in the auditory cortex. Nature. 480 (7377), 331-335 (2011).
  7. Lacefield, C. O., Pnevmatikakis, E. A., Paninski, L., Bruno, R. M. Reinforcement learning recruits somata and apical dendrites across layers of primary sensory cortex. Cell Reports. 26 (8), 2000-2008 (2019).
  8. Dombeck, D. A., Harvey, C. D., Tian, L., Looger, L. L., Tank, D. W. Functional imaging of hippocampal place cells at cellular resolution during virtual navigation. Nature Neuroscience. 13 (11), 1433-1440 (2010).
  9. Gauthier, J. L., Tank, D. W. A dedicated population for reward coding in the hippocampus. Neuron. 99 (1), 179-193 (2018).
  10. Rickgauer, J. P., Deisseroth, K., Tank, D. W. Simultaneous cellular-resolution optical perturbation and imaging of place cell firing fields. Nature Neuroscience. 17 (12), 1816-1824 (2014).
  11. Yadav, N., et al. Prefrontal feature representations drive memory recall. Nature. 608 (7921), 153-160 (2022).
  12. Priestley, J. B., Bowler, J. C., Rolotti, S. V., Fusi, S., Losonczy, A. Signatures of rapid plasticity in hippocampal CA1 representations during novel experiences. Neuron. 110 (12), 1978-1992 (2022).
  13. Heys, J. G., Rangarajan, K. V., Dombeck, D. A. The functional micro-organization of grid cells revealed by cellular-resolution imaging. Neuron. 84 (5), 1079-1090 (2014).
  14. Harvey, C. D., Collman, F., Dombeck, D. A., Tank, D. W. Intracellular dynamics of hippocampal place cells during virtual navigation. Nature. 461 (7266), 941-946 (2009).
  15. Harvey Lab Mouse VR. Available from: https://github.com/Harvey/Lab/mouseVR (2021).
  16. Pettit, N. L., Yap, E. L., Greenberg, M. E., Harvey, C. D. Fos ensembles encode and shape stable spatial maps in the hippocampus. Nature. 609 (7926), 327-334 (2022).
  17. Turi, G. F., et al. Vasoactive intestinal polypeptide-expressing interneurons in the hippocampus support goal-oriented spatial learning. Neuron. 101 (6), 1150-1165 (2019).
  18. Ulivi, A. F., et al. Longitudinal two-photon imaging of dorsal hippocampal CA1 in live mice. Journal of Visual Experiments. (148), e59598 (2019).
  19. Wang, Y., Zhu, D., Liu, B., Piatkevich, K. D. Craniotomy procedure for visualizing neuronal activities in hippocampus of behaving mice. Journal of Visual Experiments. (173), e62266 (2021).
  20. Tuncdemir, S. N., et al. Parallel processing of sensory cue and spatial information in the dentate gyrus. Cell Reports. 38 (3), 110257 (2022).
  21. Dombeck, D. A., Khabbaz, A. N., Collman, F., Adelman, T. L., Tank, D. W. Imaging large-scale neural activity with cellular resolution in awake, mobile mice. Neuron. 56 (1), 43-57 (2007).
  22. Guo, Z. V., et al. Procedures for behavioral experiments in head-fixed mice. PLoS One. 9 (2), 88678 (2014).
  23. Jordan, J. T., Gonçalves, J. T. Silencing of hippocampal synaptic transmission impairs spatial reward search on a head-fixed tactile treadmill task. bioRxiv (2021).
  24. Urai, A. E., et al. Citric acid water as an alternative to water restriction for high-yield mouse behavior. eNeuro. 8 (1) (2021).
  25. Saleem, A. B., Diamanti, E. M., Fournier, J., Harris, K. D., Carandini, M. Coherent encoding of subjective spatial position in visual cortex and hippocampus. Nature. 562 (7725), 124-127 (2018).
  26. Ravassard, P., et al. Multisensory control of hippocampal spatiotemporal selectivity. Science. 340 (6138), 1342-1346 (2013).
  27. Aghajan, Z. M., et al. Impaired spatial selectivity and intact phase precession in two-dimensional virtual reality. Nature Neuroscience. 18 (1), 121-128 (2015).

Reprints and Permissions

Request permission to reuse the text or figures of this JoVE article.


Keywords: Open Source, Virtual Reality, Spatial Learning, Head-restrained Mice, Modular Electronic Setup, Behavioral Setups, ESP32, Rotary Encoder, Psychophysics, Neuroimaging, Graphical Software Environment, Experimental Paradigms, Behavior Data
