

Summary

Here, we present a protocol for single particle tracking image analysis that allows quantitative evaluation of diffusion coefficients, types of motion and cluster sizes of single particles detected by fluorescence microscopy.

Abstract

Particle tracking on a video sequence and the subsequent analysis of the trajectories is now a common operation in many biological studies. Using the analysis of cell membrane receptor clusters as a model, we present a detailed protocol for this image analysis task using Fiji (ImageJ) and Matlab routines to: 1) define regions of interest and design masks adapted to these regions; 2) track the particles in fluorescence microscopy videos; 3) analyze the diffusion and intensity characteristics of selected tracks. The quantitative analysis of the diffusion coefficients, types of motion, and cluster sizes obtained by fluorescence microscopy and image processing provides a valuable tool to objectively determine particle dynamics and the consequences of modifying environmental conditions. In this article, we present detailed protocols for the analysis of these features. The method described here not only allows single-molecule tracking, but also automates the estimation of lateral diffusion parameters at the cell membrane, classifies the type of trajectory, and enables a complete analysis, thus overcoming the difficulty of quantifying spot size over an entire trajectory at the cell membrane.

Introduction

Membrane proteins embedded in the lipid bilayer are in continuous movement due to thermal diffusion. Their dynamics are essential to regulate cell responses, as intermolecular interactions allow formation of complexes that vary in size from monomers to oligomers and influence the stability of signaling complexes. Elucidating the mechanisms controlling protein dynamics is thus a new challenge in cell biology, necessary to understand signal transduction pathways and to identify unanticipated cell functions.

Some optical methods have been developed to study these interactions in living cells1. Among these, total internal reflection fluorescence (TIRF) microscopy, developed in the early 1980s, allows the study of molecular interactions at or very near the cell membrane2. To study the dynamic parameters of membrane protein trajectories obtained from TIRF data in living cells, a single particle tracking (SPT) method is required. Although several algorithms are available for this, we currently use those published by Jaqaman et al.3, which address particle motion heterogeneity in a dense particle field by linking particles between consecutive frames and connecting the resulting track segments into complete trajectories, accounting for temporary particle disappearance. The software captures the particle merging and splitting that result from aggregation and dissociation events3. One of the outputs of this software is the detection of the particles along the entire trajectory, defining their X and Y positions in each frame.

Once particles are detected, we apply different algorithms to determine the short time-lag diffusion coefficient (D1-4)4,5. By applying the Moment Scaling Spectrum (MSS)6,7,8 analysis, or by fitting the anomalous exponent 'alpha' from the adjustment of the Mean Square Displacement (MSD) curve9, we also classify the particles according to the type of trajectory.
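The short time-lag estimate can be sketched as follows. This is an illustrative Python reimplementation of the idea, not the Matlab routine used in the protocol; the function names and the toy trajectory are ours:

```python
# Illustrative sketch (not the protocol's Matlab code): estimate the short
# time-lag diffusion coefficient D1-4 from a 2D trajectory. For 2D diffusion,
# MSD(n*dt) ~ 4*D*n*dt, so D1-4 is the slope of MSD over lags 1-4 divided by 4.

def msd(track, lag):
    """Mean square displacement at a given frame lag; track is a list of (x, y)."""
    disps = [(track[i + lag][0] - track[i][0]) ** 2 +
             (track[i + lag][1] - track[i][1]) ** 2
             for i in range(len(track) - lag)]
    return sum(disps) / len(disps)

def d1_4(track, dt):
    """Least-squares slope through the origin of MSD vs. time over lags 1-4,
    divided by 4 (2D)."""
    ts = [n * dt for n in (1, 2, 3, 4)]
    ms = [msd(track, n) for n in (1, 2, 3, 4)]
    slope = sum(t * m for t, m in zip(ts, ms)) / sum(t * t for t in ts)
    return slope / 4.0

# Toy example: a particle stepping 0.1 um per 0.1 s frame along x
track = [(0.1 * i, 0.0) for i in range(20)]
print(d1_4(track, dt=0.1))
```

For a real track, `dt` would be the frame interval (0.098 s in the example acquisition) and the coordinates would come from the tracking output converted to microns.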

Analysis of spot intensity in fluorescence images is a shared objective for scientists in the field10,11. The most common algorithm used is the so-called Number and Brightness. This method, nonetheless, does not allow correct frame-by-frame intensity detection of particles in the mobile fraction. We have thus generated a new algorithm to evaluate these particle intensities frame-by-frame and to determine their aggregation state. Once the coordinates of each particle are detected using the U-Track2 software3, we define its intensity in each frame over the complete trajectory, also taking into account the cell background in each frame. This software offers different possibilities to determine the spot intensity and the cell background and, using known monomeric and dimeric proteins as references, calculates the approximate number of proteins in the detected particle (cluster size).

In this article, we describe a careful guide to perform these three steps: 1) detecting and tracking single particles in a fluorescence microscopy video using U-track; 2) analyzing the instantaneous diffusion coefficient (D1-4) of those particles and the type of movement (confined, free, or directed) of particles with long trajectories by MSS; 3) measuring the spot intensity along the video, corrected by the estimated background fluorescence for each spot. This allows cluster size estimation and identification of photobleaching steps.

The use of this protocol does not require specialized skills and can be performed in any laboratory with cell culture, flow cytometry and microscopy facilities. The protocol uses ImageJ or Fiji (a distribution of ImageJ12), U-track3, and some ad hoc routines (http://i2pc.es/coss/Programs/protocolScripts.zip). U-track and the ad hoc routines run in Matlab, which can be installed on any compatible computer.

Protocol

1. Preparation of Biological Samples

  1. Grow Jurkat cells in RPMI 1640 medium supplemented with 10% FCS, NaPyr and L-glutamine (complete RPMI). Electroporate Jurkat cells (20 x 10⁶ cells/400 µL of RPMI 1640 with 10% FCS) with a monomeric GFP-labelled chemokine receptor vector (CXCR4-AcGFP, 20 μg) to allow its detection using fluorescence microscopy.
    NOTE: It is possible to use other monomeric fluorescent proteins such as mCherry, mScarlet, etc.
  2. 24 h after transfection, analyze the cells in a flow cytometer to determine both cell viability and CXCR4-AcGFP expression.
  3. Select cells expressing low CXCR4-AcGFP levels by cell sorting of GFP-low positive cells (Figure 1), as low expression of the transfected receptor is required in TIRFM experiments to ensure single particle tracking and the tracing of individual trajectories9.
  4. Quantify the number of receptors in the cell surface.
    NOTE: As an example13, ~8,500 - 22,000 AcGFP-labeled receptors/cell correspond to ~2 - 4.5 particles/μm².
  5. Resuspend sorted cells in complete RPMI and incubate for at least 2 h at 37 °C, 5% CO2. Centrifuge cells (300 x g, 5 min), and resuspend them in TIRF buffer (HBSS, 25 mM HEPES, 2% FCS, pH 7.3).
    1. Plate on 35 mm glass-bottomed microwell dishes (2-3 x 10⁵ cells/dish) coated with fibronectin (20 μg/mL, 1 h, 37 °C) in the presence or absence of the appropriate ligand (i.e., CXCL12, 100 nM, 1 h, 37 °C). Incubate cells (20 min at 37 °C, 5% CO2) prior to image acquisition.
  6. Perform experiments using a TIRF microscope, equipped with an EM-CCD camera, a 100x oil-immersion objective (HCX PL APO 100x/1.46 NA) and a 488 nm diode laser. The microscope allows temperature control and incubation with CO2. Locate and focus cells with the coarse and fine focus knobs, using bright field to minimize photobleaching effects. For fine focus adjustment in TIRF mode use a low laser intensity, insufficient for single-particle detection or to induce photobleaching effects (5% laser power, 28 μW).
  7. Acquire movies (image sequences) of approximately 50 s, minimizing the time interval between frames. Penetration of the evanescent field should be 70-90 nm in depth. Save the acquired movies for each experimental condition as ".lif" (video.lif).
    NOTE: Movies in the described example were acquired at 49% laser power (2 mW) with an exposure time of 90 ms and a time interval of 98 ms, for 49 s (500 frames). Penetration of the selected evanescent wave was 90 nm.

2. Selection of Images and Creation of Masks

  1. For each experimental condition (video.lif), create a new folder (VideoName) that must contain different folders for every series. Each folder will contain a "videoSeq" folder for the video images and a "results" folder for the results of the analysis. Make sure that the file structure at this moment is the following:
    VideoName/video.lif
    VideoName/Series1/videoSeq
    VideoName/Series1/results
    NOTE: From the microscope, different .lif files are obtained with several videos for every treatment condition (i.e., FN, FN+SDF). "video.lif" corresponds to the input .lif video file with all the TIRF movie acquisitions (Series) performed at the microscope. The "videoSeq" folder will contain the 500 frames of the movie being analyzed. The "results" folder will contain all files resulting from the analysis performed. Accurate nomenclature and localization of the different folders are essential for the correct functioning of the algorithms. Names in bold in the list above are fixed (i.e., they have to be called in this way because these are the names sought by the scripts). Names not in bold can change to reflect the experiment performed.
  2. Open the TIRFM video (.lif file) with Fiji or ImageJ by dragging and dropping the file on the Fiji menu bar and clicking OK to import the .lif file using BioFormats (Supplemental Figure 1).
  3. Select the series to process and click OK (Supplemental Figure 2A). To design a mask for the analysis of this video, import also a multichannel image with the different chromophores (in the example, Series 1 is the multichannel image and Series 2 the corresponding video). The video (and the multichannel image) should open as an ImageJ stack. In the example (Supplemental Figure 2B), the image is on the left and the video is on the right.
    NOTE: If creation of a mask for the video is not needed, go to Step 2.5.
  4. Create a mask. Create a single image with the channels useful for the design of the mask. In this case, the interesting channels are the red, green and gray ones.
    1. Split the channels from the multichannel image (Supplemental Figure 3A): select Image in the bar menu and click Color | Split channels. The different channels will show as separate images (Supplemental Figure 3B).
    2. Merge again the three channels in a single image (Supplemental Figure 4A): select Image in the bar menu and select Color | Merge channels. Select the appropriate channels and press OK (Supplemental Figure 4B). A new non-stacked image will be generated (Supplemental Figure 4C).
    3. Synchronize the two windows by using the Synchronize Windows tool (Supplemental Figure 5A): select Analyze in the bar menu | Tools | Sync Windows. A new window with the synchronization options will be shown (Supplemental Figure 5B).
    4. With the two windows synchronized (only the video if there is no multichannel image associated), the same region in both windows can be cropped. Draw the region of interest with the rectangular selection tool of ImageJ floating menu. Select Image in the bar menu and select Crop (Supplemental Figure 6A). The two cropped images will show individually (Supplemental Figure 6B).
    5. Unsynchronize both windows (Supplemental Figure 6B) by pressing the Unsynchronize All button in the Sync Windows manager.
  5. If a mask has not been created as in step 2.4, draw the region of interest with the rectangular selection tool of ImageJ floating menu.
  6. Save the video as an Image sequence in the directory videoSeq under the corresponding video directory (Supplemental Figure 7A): select File in the bar menu and click Save as | Image Sequence…. Rename the labels for the video sequence as video0000.tif, video0001.tif, …, video0499.tif (Supplemental Figure 7B): in the Name box, rename as video and click OK. The sequence must be alone in its directory to be successfully used by U-track.
    NOTE: If not designing a mask for the video, go to Step 2.8.
  7. Design a Mask. Select the multichannel image and open the Segmentation Editor plugin (Supplemental Figure 8A): select Plugins in the bar menu and select Segmentation | Segmentation editor. Add and rename the labels of the segmentation as necessary by right clicking on the labels of the Segmentation editor (Supplemental Figure 8B,C).
    1. Choose the appropriate selection tool in ImageJ floating menu (here, use freehand), select a label (Green) and design first the outermost mask (Supplemental Figure 9A). Once designed, press the + button in Selection option of Composite window, and the selected mask will be displayed on the viewer (Supplemental Figure 9A). Repeat this step with next labels (Interior, in red) (Supplemental Figure 9B).
      NOTE: After designing the mask for the Green and Interior labels, the Exterior mask will occupy the rest of the image.
    2. Masks are coded in the image as regions 0, 1, 2, … according to the order of the labels in the RGB labels window. When all masks for the different labels are designed, save the mask with the same filename as the video, with the name mask.tif (Supplemental Figure 9C): select File in the bar menu and select Save as | Tiff….
      NOTE: The selected masks will be employed in the calculation of the diffusion coefficients and classification of the trajectories (see step 4.2).
  8. Check that the file structure at this moment is the following:
    VideoName/video.lif
    VideoName/Series1/ mask.tif
    VideoName/Series1/ videoSeq/ video0000.tif
    VideoName/Series1/ videoSeq/ video0001.tif
    ...
    VideoName/Series1/ videoSeq/ video0499.tif
    VideoName/Series1/ results
    NOTE: The mask.tif is an image with the mask as designed in Steps 2.4 and 2.7. The video*.tiff is the video as saved in Step 2.6. As above, the names in bold in the list above are fixed, i.e., they have to be called in this way because these are the names sought by the scripts. Names not in bold can change to reflect the experiment performed.
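The folder layout checked above can also be created programmatically before exporting the frames. A minimal Python sketch, assuming the fixed names from the protocol ("videoSeq", "results") and placeholder experiment names ("VideoName", "Series1"/"Series2"):

```python
# Sketch: build the folder layout the analysis scripts expect.
# "videoSeq" and "results" are fixed names required by the routines;
# "VideoName" and "Series1"/"Series2" are placeholders for the experiment.
from pathlib import Path

base = Path("VideoName")
for series in ("Series1", "Series2"):
    for sub in ("videoSeq", "results"):
        (base / series / sub).mkdir(parents=True, exist_ok=True)

# List what was created, relative to the base folder
print([str(p.relative_to(base)) for p in sorted(base.rglob("*"))])
```

The image sequence exported in Step 2.6 (video0000.tif, video0001.tif, ...) then goes into each "videoSeq" folder, which must contain nothing else for U-track to read it correctly.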

3. Tracking the Particles

  1. Track all the particles seen in the selected videos using U-track.
  2. Open Matlab and add the U-track directory to the path by using the Set Path | Add with Subfolders option in the menu. Save the path so that U-track remains on the path in future Matlab sessions. This path setting needs to be done only once.
  3. Change the working directory to the directory containing the Series to be analyzed. Invoke U-track by typing in the console (Supplemental Figure 10) movieSelectorGUI and press enter. The Movie selection window will be opened (Supplemental Figure 11A).
  4. Press the New movie button and the Movie edition window will appear (Supplemental Figure 11B).
  5. Press Add channel to choose the directory with the video (VideoName/Series1/videoSeq) and fill in the movie information parameters. Set the output directory for the results of U-track to results (VideoName/Series1/results).
    NOTE: The movie information parameters can be obtained from the microscope and the acquisition conditions.
  6. Press Advanced channel settings and fill in the parameters related to the acquisition. Check the values of the parameters in the example (Supplemental Figure 11C).
  7. Press Save on the Advanced channel settings window and Save on the Movie edition window. The program will ask for confirmation of writing the file called movieData.mat on the results directory. Confirm.
  8. After creating the movie, press Continue in the movie selection window. U-track will ask about the type of object to be analyzed. Choose Single-particles (Supplemental Figure 12). The Control panel window will appear (Supplemental Figure 13A).
    1. Select the first step, Step 1: Detection, and press Settings. The Settings Gaussian Mixture-Model Fitting window will appear (Supplemental Figure 13B). In the example, "Alpha value for comparison with local background" is set to 0.001 and "Rolling-Window time averaging" to 3 (Supplemental Figure 13B).
    2. Press Apply in the Settings Gaussian Mixture-Model Fitting window and Run in the Control panel. With the configuration in Supplemental Figure 13, only the Detection step runs. This step takes a few (2-5) minutes. Check the results by pressing the Result button of Step 1 (Detection, Supplemental Figure 14).
      NOTE: As shown above, the movie shows red circles on the detected particles. If no red circle is shown, then this step has not worked correctly.
  9. Perform the identification of tracks, that is, linking the particles detected in the previous step into tracks that span multiple frames. This is Step 2: Tracking of U-track, whose settings have to be defined as shown in Supplemental Figure 15A-C. The Step 2 cost function settings for frame-to-frame linking and Gap closing, merging and splitting in the example are shown in Supplemental Figure 15B and C, respectively.
  10. After setting the parameters for Step 2, press Run in the Control panel and only Step 2 runs (Supplemental Figure 16).
  11. Perform track analysis, Step 3. Define the settings as shown in Supplemental Figure 17 (right panel). Then, press Apply in the panel of Setting-Motion Analysis, and Run in the Control Panel-U-Track. This step takes a few seconds.
  12. Verify with the Result button of Step 3 that the process has correctly identified all the tracks. To do so, click on Show track number in the Movie options window, and check frame by frame that each track has been correctly identified (Supplemental Figure 18). Manually annotate those particles that are not true particles.
    NOTE: If this manual selection is not done, a weaker automatic selection can be performed later when the diffusion coefficient is calculated (see Step 4).

4. Calculation of the Diffusion Coefficients and Classification of Trajectories

  1. Be sure that all the scripts are invoked from the directory of the video being analyzed (in the example, VideoName/Series1).
  2. Read all the trajectories to compute the diffusion coefficients by issuing in the Matlab console the command: trajectories=readTrajectories(0.1), where 0.1 is the time in seconds between two consecutive frames (time interval, shown in panel Movie Information, Supplemental Figure 11B).
  3. Exclude trajectories corresponding to incorrectly identified spots/trajectories. Give a list of the spots to exclude. For instance, to exclude spots 4, 5 and 28, type: trajectories=readTrajectories(0.1, [4, 5, 28]).
  4. Calculate the instantaneous diffusion coefficients for each one of the tracks of this cell. In this case, calculate the diffusion coefficient for a time lag = 4, called D1-4. To do so, run in the Matlab console the command: D=calculateDiffusion(trajectories, 113.88e-3, 0.0015, 'alpha') where trajectories are the trajectories obtained in Step 3, 113.88e-3 is the pixel size of the acquired images in microns, 0.0015 is an upper bound for the diffusion coefficients of immobile particles measured in µm²/s, and 'alpha' is the fitted model as explained below.
    NOTE: When using a faster camera and more frames are needed to calculate the diffusion parameter, increase it, e.g., to 20, with D=calculateDiffusion(trajectories, 113.88e-3, 0.0015, 'alpha', '', 20). The string parameter before 20 ('' in the example above) is the suffix added to the output files. This suffix may be used to differentiate different analyses.
  5. Fit the MSD with a different function by calling the calculateDiffusion function again with a different fitting mode ('confined', 'free', or 'directed'). In this example, 'confined': D=calculateDiffusion(trajectories, 113.88e-3, 0.0015, 'confined').
  6. Obtain the fitting results for the directed model, as shown in Supplemental Figure 20.
  7. Decompose the trajectories into short and long trajectories. Use the command: [shortTrajectories, longTrajectories]=separateTrajectoriesByLength(trajectories,50) where 50 is the minimum length in frames of a trajectory to be considered long (in the example shown).
  8. Study short trajectories using the same fitting procedure described in Step 4.4: D=calculateDiffusion(shortTrajectories, 113.88e-3, 0.0015, 'directed', 'Short'). Analyze short and anomalous trajectories with the command: D=calculateDiffusion(shortTrajectories, 113.88e-3, 0.0015, 'alpha', 'Short').
  9. Analyze long trajectories to classify the type of motion through their Moment Scaling Spectrum (MSS)7. The command: trajectoriesClassification=classifyLongTrajectories(longTrajectories,113.88e-3,0.0015,'Long') shows the analysis in screen and generates a file called trajectoryClassification<Suffix>.txt in the directory results\TrackingPackage\tracks.
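For intuition, the MSS idea used by classifyLongTrajectories can be sketched as follows. This is a simplified Python illustration of the Ferrari/Sbalzarini approach6,7, not the protocol's Matlab routine; the lag range and moment orders are our assumptions:

```python
# Sketch of Moment Scaling Spectrum (MSS) classification: for moment order p,
# mu_p(n) = <|r(i+n)-r(i)|^p> scales as t^(gamma_p); the slope S_MSS of
# gamma_p vs. p is ~0.5 for free diffusion, <0.5 for confined motion and
# >0.5 for directed motion.
import math

def moment(track, lag, p):
    """p-th moment of displacement at a given frame lag; track is a list of (x, y)."""
    d = [math.hypot(track[i + lag][0] - track[i][0],
                    track[i + lag][1] - track[i][1]) ** p
         for i in range(len(track) - lag)]
    return sum(d) / len(d)

def slope(xs, ys):
    """Ordinary least-squares slope of ys vs. xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
            sum((x - mx) ** 2 for x in xs))

def mss_slope(track, dt, lags=range(1, 6), orders=range(1, 7)):
    gammas = []
    for p in orders:
        logt = [math.log(n * dt) for n in lags]
        logm = [math.log(moment(track, n, p)) for n in lags]
        gammas.append(slope(logt, logm))   # scaling exponent gamma_p
    return slope(list(orders), gammas)     # S_MSS

# Purely directed motion (constant velocity) gives S_MSS = 1
track = [(0.05 * i, 0.0) for i in range(100)]
print(round(mss_slope(track, 0.1), 2))  # prints 1.0
```

The actual routine additionally applies thresholds on S_MSS (and the diffusion bound from step 4.4) to assign the confined/free/directed labels written to trajectoryClassification<Suffix>.txt.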

5. Calculation of Cluster Size Through the Particle Density

NOTE: Be sure that all the scripts are invoked from the directory of the video being analyzed (in the example shown, VideoName/Series1).

  1. Analyze the intensity of each particle along its trajectory. To do so, invoke the script by typing in the Matlab console: analyzeSpotIntensities, which takes as input the trajectories calculated by U-track in the first section. In its most basic form, simply call the script without any argument from the directory of the video being analyzed (in the example shown, VideoName/Series1): analyzeSpotIntensities(). Configure this basic behavior in many different ways by providing arguments to the script, as in: analyzeSpotIntensities('Arg1', Value1, 'Arg2', Value2, ...). Valid arguments with their corresponding values ('ArgN', ValueN) are listed below.
    1. ('spotRadius', 1)
      Analyze the fluorescence intensity using the spotRadius of 1 pixel (by default) that corresponds to a patch of size 3x3 centered at the spot ((2*spotRadius+1)x(2*spotRadius+1)).
      NOTE: For a patch of 5x5 centered at the spot, choose a spotRadius of 2, etc.
    2. ('onlyInitialTrajectories', 1)
      If this argument is given (true, value of 1), analyze only the trajectories that start in the first frame of the video. This is useful to analyze control images (by default 0, false).
    3. ('trackTrajectory', 0)
      If this argument is set to 0 (false), then keep the coordinate of the spot in its first frame for all frames (this is useful for immobile spots). If the argument is set to 1 (by default 1, true), then the spot is tracked along the video following the coordinates calculated by U-track.
    4. ('excludeTrajectories', [4,5,28])
      Include the trajectory number of those trajectories excluded in Step 4.3 (in the example 4, 5, 28).
    5. ('extendTrajectory', 1)
      If this argument is set to 1 (true), then analyze the intensity in the patch to the end of the video (even if the trajectory stops earlier). The coordinate of the spot is either the last coordinate in the trajectory (if trackTrajectory is true) or the first coordinate in the trajectory (if trackTrajectory is false). This argument is false (0) by default.
    6. ('subtractBackground', 1)
      If this parameter is set, then correct the raw fluorescence measured at each spot by the estimate of the background fluorescence for that spot (see below). This argument is true (1), by default.
    7. ('meanLength', frame number)
      If this parameter is set, then the mean spot intensity is measured over the indicated length. Set 'meanLength', 20 to measure the mean spot intensity over the first 20 frames. If the argument is not set, then the spot intensity is calculated over the whole trajectory (by default, full length).
    8. ('showIntensityProfiles', 1)
      Set this parameter as 1 (by default 0, false), to plot the intensity profile along the different frames as well as their background.
      NOTE: These plots are very useful to identify photobleaching, as shown in Supplemental Figure 21. For every path, the routine automatically analyzes whether photobleaching may have occurred. This is done by comparing the intensity values in the first and last N frames along the path with a Student's t-test. By default, N is 10, but this value can be modified through the argument 'Nbleach'.
    9. ('backgroundMethod', value)
      Set this parameter to determine the background of each spot. This can be done in several ways; which one to use is selected by changing the "value":
      1. ('backgroundMethod', 0)
        Use this value to manually identify the background for the whole video. The routine lets you select 8 points in the first frame of the video. A patch around these points is analyzed along the whole video, and the 95% quantile of all these intensities is chosen as the background intensity for all spots.
      2. ('backgroundMethod', 1)
        Use this value to manually identify the background for each frame. Choose 8 points for every spot and every frame. This is a time-consuming task, but it gives the user a lot of control. The 95% quantile of the intensities in these patches is chosen as the background intensity for this spot in this frame.
      3. ('backgroundMethod', 2)
        Use this value to calculate the background of each spot estimated from 8 points located in a circle around the spot, with a radius controlled by the argument 'backgroundRadius' (by default, 4*spotRadius).
      4. ('backgroundMethod', 3)
        Use this value to calculate the background for each frame by first locating the cell in the video and then analyzing the intensities of the cell in each frame (Supplemental Figure 22).
        NOTE: The background is chosen as the gray value at a given quantile of this distribution (by default 0.5 (=50%), although this parameter can be controlled through the argument 'backgroundPercentile'); this value can be set higher, for instance, 0.9 (=90%), if most of the cell should be considered as background. To help in the identification of the cell, indicate the maximum background value expected along the frames using the argument 'maxBackground' (for instance, in all the analyzed videos, the background value normally never goes beyond 6000)13. By default, this option is set to 0, meaning that this help is not used. See the cell detection and the area selected for the background estimation by setting the argument 'showImages' to 1 (stop the execution at any time by pressing CTRL-C).
  2. Gather the diffusion and intensity information for all the trajectories calculated in Steps 4 and 5.1, respectively, using gatherDiffusionAndIntensity(). To gather only the diffusion and intensity information for short trajectories, use the suffix introduced in Step 4.8 and type: gatherDiffusionAndIntensity('Short') where 'Short' is that suffix.
  3. Gather the Moment Scaling Spectrum and the intensity information by typing: gatherTrajectoryClassificationAndIntensity('Long') where 'Long' is the suffix used in Step 4.9. A summary of all the files generated using this protocol is shown in Figure 2.
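The photobleaching check mentioned in step 5.1.8 can be sketched like this. This is an assumed, simplified Python version using a Welch t statistic; the actual routine's test details and threshold may differ:

```python
# Sketch: flag likely photobleaching by comparing spot intensities in the
# first and last N frames of a track with a two-sample (Welch) t statistic.
import statistics

def welch_t(a, b):
    """Welch t statistic for two samples (positive when mean(a) > mean(b))."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / \
           ((va / len(a) + vb / len(b)) ** 0.5)

def bleaching_suspected(intensities, n_bleach=10, threshold=3.0):
    """True if the last N frames are significantly dimmer than the first N."""
    first, last = intensities[:n_bleach], intensities[-n_bleach:]
    return welch_t(first, last) > threshold  # positive t: intensity dropped

# A track whose intensity halves midway (e.g., one fluorophore bleached)
track = [1000 + (i % 3) for i in range(20)]
track[10:] = [500 + (i % 3) for i in range(10)]
print(bleaching_suspected(track))  # prints True
```

A stepwise intensity drop of roughly one fluorophore's worth, as detected here, is what makes the intensity profiles of step 5.1.8 useful for counting subunits in a cluster.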

Results

The use of this protocol allows the automated tracking of particles detected in fluorescence microscopy movies and the analysis of their dynamic characteristics. Initially, cells are transfected with the fluorescently-coupled protein to be tracked. The appropriate level of receptors present on the cell surface that allows SPT is obtained by cell sorting (Figure 1). Selected cells are analyzed by TIRF microscopy, which generates videos in a format that can be s...

Discussion

The described method is easy to perform even without any previous experience working with Matlab. However, the Matlab routines require extreme accuracy in the nomenclature of the different commands and the localization of the different folders employed by the program. In the tracking analysis routine (step 3), multiple parameters can be modified. The "Setting Gaussian-Mixture Model Fitting" window (step 3.8) controls how U-track will detect single particles on the video. This is done by fitting a Gaussian ...

Disclosures

The authors have nothing to disclose.

Acknowledgements

We thank Carlo Manzo and Maria García Parajo for their help and for the source code of the diffusion coefficient analysis. This work was supported in part by grants from the Spanish Ministry of Science, Innovation and Universities (SAF 2017-82940-R) and the RETICS Program of the Instituto de Salud Carlos III (RD12/0009/009 and RD16/0012/0006; RIER). LMM and JV are supported by the COMFUTURO program of the Fundación General CSIC.

Materials

Name | Company | Catalog Number | Comments
Human Jurkat cells | ATCC | CRL-10915 | Human T cell line. Any other cell type can be analyzed with this software
pAcGFPm-N1 (PT3719-5) DNA3GFP | Clontech | 632469 | Different fluorescent proteins can be followed and analyzed with this routine
Gene Pulse X Cell electroporator | BioRad | | We use 280 V, 975 mF, for Jurkat cells. Use the transfection method best working in your hands.
Cytomics FC 500 flow cytometer | Beckman Coulter | |
MoFlo Astrios Cell Sorter | Beckman Coulter | | Depending on the level of transfection, cell sorting may not be required. You can also employ cells with stable expression of adequate levels of the receptor of interest.
Dako Qifikit | DakoCytomation | K0078 | Used for quantification of the number of receptors on the cell surface.
Glass bottom microwell dishes | MatTek corporation | P35G-1.5-10-C |
Human Fibronectin from plasma | Sigma-Aldrich | F0895 |
Recombinant human CXCL12 | PeproTech | 300928A |
Inverted Leica AM TIRF | Leica | |
EM-CCD camera | Andor | DU 885-CSO-#10-VP |
MATLAB | The MathWorks, Natick, MA | |
U-Track2 software | Danuser Laboratory | |
ImageJ | NIH | https://imagej.nih.gov/ij/ |
FiJi | FiJI | https://imagej.net/Fiji |
u-Track2 software | | | Matlab tool. For installing, download the .zip file from the web page (http://lccb.hms.harvard.edu/software.html) and uncompress it in a directory of your choice
GraphPad Prism | GraphPad software | |

References

  1. Yu, J. Single-molecule studies in live cells. Annual Review of Physical Chemistry. 67, 565-585 (2016).
  2. Mattheyses, A. L., Simon, S. M., Rappoport, J. Z. Imaging with total internal reflection fluorescence microscopy for the cell biologist. Journal of Cell Science. 123 (Pt 21), 3621-3628 (2010).
  3. Jaqaman, K., et al. Robust single-particle tracking in live-cell time-lapse sequences. Nature Methods. 5 (8), 695-702 (2008).
  4. Bakker, G. J., et al. Lateral mobility of individual integrin nanoclusters orchestrates the onset for leukocyte adhesion. Proceedings of the National Academy of Sciences U S A. 109 (13), 4869-4874 (2012).
  5. Kusumi, A., Sako, Y., Yamamoto, M. Confined lateral diffusion of membrane receptors as studied by single particle tracking (nanovid microscopy). Effects of calcium-induced differentiation in cultured epithelial cells. Biophysical Journal. 65 (5), 2021-2040 (1993).
  6. Ferrari, R. M., Manfroi, A. J., Young, W. R. Strongly and weakly self-similar diffusion. Physica D. 154, 111-137 (2001).
  7. Sbalzarini, I. F., Koumoutsakos, P. Feature point tracking and trajectory analysis for video imaging in cell biology. Journal of Structural Biology. 151 (2), 182-195 (2005).
  8. Ewers, H., et al. Single-particle tracking of murine polyoma virus-like particles on live cells and artificial membranes. Proceedings of the National Academy of Sciences U S A. 102 (42), 15110-15115 (2005).
  9. Manzo, C., Garcia-Parajo, M. F. A review of progress in single particle tracking: from methods to biophysical insights. Reports on Progress in Physics. 78 (12), 124601 (2015).
  10. Calebiro, D., et al. Single-molecule analysis of fluorescently labeled G-protein-coupled receptors reveals complexes with distinct dynamics and organization. Proceedings of the National Academy of Sciences U S A. 110 (2), 743-748 (2013).
  11. Digman, M. A., Dalal, R., Horwitz, A. F., Gratton, E. Mapping the number of molecules and brightness in the laser scanning microscope. Biophysical Journal. 94 (6), 2320-2332 (2008).
  12. Schindelin, J., et al. Fiji: an open-source platform for biological-image analysis. Nature Methods. 9 (7), 676-682 (2012).
  13. Martinez-Munoz, L., et al. Separating Actin-Dependent Chemokine Receptor Nanoclustering from Dimerization Indicates a Role for Clustering in CXCR4 Signaling and Function. Molecular Cell. 70 (1), 106-119 (2018).
  14. Destainville, N., Salome, L. Quantification and correction of systematic errors due to detector time-averaging in single-molecule tracking experiments. Biophysical Journal. 90 (2), L17-L19 (2006).

