
Summary

The article shows how to use the program SpikeSorter to detect and sort spikes in extracellular recordings made with multi-electrode arrays.

Abstract

Few stand-alone software applications are available for sorting spikes from recordings made with multi-electrode arrays. Ideally, an application should be user friendly, with a graphical user interface, able to read data files in a variety of formats, and should provide a flexible set of tools for detecting and sorting extracellular voltage waveforms from different units with some degree of reliability. Previously published spike sorting methods are now available in a software program, SpikeSorter, intended to provide electrophysiologists with a complete set of tools for sorting, starting from the raw recorded data file and ending with the export of sorted spike times. Procedures are automated to the extent currently possible. This article explains and illustrates the use of the program. A representative data file is opened, extracellular traces are filtered, and events are detected and then clustered. A number of problems that commonly occur during sorting are illustrated, including the artefactual over-splitting of units caused by the tendency of some units to fire spikes in pairs in which the second spike is significantly smaller than the first, and over-splitting caused by the slow variation in spike height over time encountered in some units. The accuracy of SpikeSorter's performance has been tested with surrogate ground truth data and found to be comparable to that of other algorithms in current development.

Introduction

Anyone who records extracellular signals from the brain using methods more sophisticated than simple on-line thresholding and windowing faces the task of identifying and separating the signals of different neurons from the noisy voltage trace recorded by the electrode. This task is commonly known as spike sorting. The difficulty of spike sorting is compounded by various factors. Neurons can be very close together, so that the signals recorded from them by a nearby electrode are likely to be similar and hard to distinguish. The signals produced by a single neuron may vary over time, perhaps because of movements of the electrode, variable sodium channel kinetics during periods of high firing rate, variable degrees of activation of voltage-gated conductances in dendrites that are close to the electrode, or possibly as a result of changes in brain state. These problems can be mitigated by using multi-electrode arrays (MEAs) with many closely spaced (20 - 100 µm) recording channels, which allow better spatial definition of the signals from single neurons, since these are typically spread out over several channels1,2. However, this, combined with the fact that signals from neurons distributed along the entire length of the electrode overlap in space, results in a potentially very high-dimensional space within which the clusters corresponding to unique neurons need to be identified. The problem becomes computationally intractable for more than a small number of electrode channels. To date, there is no generally agreed-upon best method for spike sorting, though many solutions have been proposed3,4,5,6,7,8 and recordings from MEAs are becoming increasingly common9,10. Because spike sorting is not an end in itself, but simply a necessary preliminary step before further data analysis, there is a need for an easily usable package that will read in raw recording data files and convert them to sorted spike trains with as little user input, and as quickly and reliably, as possible.

This paper provides a tutorial for the use of SpikeSorter, a program developed with the aim of meeting these needs. The program is based on algorithms described in previously published papers11,12,13. The goals in designing the program were that a) it should have a user-friendly interface requiring little or no prior knowledge of computer programming or of spike sorting methodology; b) few or no other specialized software components beyond standard Windows or Linux operating systems should be needed; c) a wide range of recording data formats for data import and export should be supported; d) the need for user input during sorting should be minimized, and e) sorting times should scale in a reasonable way, ideally linearly, with recording duration and the number of channels on the electrode. The algorithms implemented in the program include a) a flexible set of pre-processing and event detection strategies; b) an automated divide-and-conquer strategy of dimension reduction which clusters voltage waveforms based on the principal component (PC) distributions obtained from subsets of channels assigned to specific clusters; c) automated clustering of PC distributions with a fast clustering procedure based on the mean-shift algorithm3,14, and d) partially automated pairwise merging and splitting of clusters to ensure that each is as distinct as possible from all others. To this, a set of procedures has been added that allows manual splitting or merging of clusters based on inspection of PC distributions, cross- and auto-correlograms of spike trains, and time-amplitude plots of spike waveforms. Recordings from tetrodes, tetrode arrays, and Utah arrays, as well as single- and multi-shank MEAs, can be read and sorted. The current limit on the number of channels is 256, but this may be increased in the future.

Another cross-platform open-source implementation, "spyke" (http://spyke.github.io), is also available. Written by one of us (MS) in Python and Cython, spyke uses the same overall approach as SpikeSorter, with some differences: to reduce memory demands, raw data is loaded in small blocks, and only when absolutely necessary; clusters are exclusively displayed, manipulated, and sorted in 3D; and principal component and independent component analysis are both used as complementary dimension reduction methods. Spyke requires more user interaction, but relies heavily on keyboard and mouse shortcuts and an undo/redo queue to rapidly explore the effects of various factors on the clustering of any given subset of spikes. These factors include spike channel and time range selection, spike alignment, clustering dimensions and spatial bandwidth (sigma)11.

The following is a brief description of the algorithms and strategies used for sorting. More complete descriptions can be found in previous publications11,12,13 and in annotations that can be accessed via help buttons (identified with a '?') within SpikeSorter. After loading a raw extracellular voltage file and filtering out the lower frequency components, an initial stage of event detection results in a set of events, each of which consists of a brief voltage snapshot before and after the event time. If the electrode sites are sufficiently closely spaced (< 100 µm), single unit signals will generally appear on several neighboring channels. A central channel is automatically chosen for each event, corresponding to the channel on which the peak-to-peak voltage of the event is largest. Automated sorting starts by forming a single initial cluster for each electrode channel, consisting of all the events that were localized to that channel. A unit located midway between channels may give rise to spikes that are localized (perhaps randomly) to different channels: the clusters from these two sets of spikes will be identified as similar and merged at a later stage. The average waveform of the events in each initial cluster is then calculated. This is referred to as the cluster template. Subsidiary channels are assigned to each cluster based on the amplitudes and standard deviation of the template waveforms on each channel. Principal component values are then calculated for each cluster based on the waveforms on the assigned set of channels. The user can choose the number of principal component dimensions to use: usually 2 is sufficient. Each cluster is then split into a further set of clusters, and this is repeated until none can be further split by automated clustering.
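The central-channel rule described above (assign each event to the channel with the largest peak-to-peak voltage) is simple enough to sketch in a few lines of Python. This is a toy illustration with made-up waveform values, not SpikeSorter's actual code:

```python
# Sketch of central-channel assignment: for each detected event, pick the
# channel on which the peak-to-peak voltage of the event's waveform snapshot
# is largest. Illustrative only; the waveform values are invented.

def central_channel(event_waveforms):
    """event_waveforms: dict mapping channel number -> list of voltage samples.
    Returns the channel with the largest peak-to-peak amplitude."""
    return max(event_waveforms,
               key=lambda ch: max(event_waveforms[ch]) - min(event_waveforms[ch]))

event = {
    1: [0.0, -5.0, 3.0, 0.0],    # peak-to-peak 8
    2: [0.0, -20.0, 12.0, 0.0],  # peak-to-peak 32 -> central channel
    3: [0.0, -8.0, 4.0, 0.0],    # peak-to-peak 12
}
print(central_channel(event))  # -> 2
```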

At this point, an initial set of, say, 64 clusters from a 64-channel electrode may have been split into two or three times that number, depending on the number of units present in the recording. But because of the variable assignment of events from single units to different channels, the number of clusters found at this stage is almost certainly larger than it should be. The next stage of sorting corrects this over-splitting by comparing pairs of clusters and merging similar pairs or reassigning events from one to another. This stage of sorting is referred to as 'merge and split'.

Merging and Splitting

For N clusters, there are N(N-1)/2 pairs, so the number of comparisons grows as N², which is undesirable. However, many pairs can be excluded from the comparison because the two members of the pair are physically far apart. This reduces the dependence to something closer to linear in the number of channels. In spite of this shortcut, the merge and split stage can still be quite time consuming. It works in the following way. Each cluster pair that is to be compared (those that are physically close together, as judged by the overlap in the channel sets assigned to each) is temporarily merged, while keeping the identities of the spikes in the two member clusters known. The principal components of the merged pair are then calculated. A measure of the overlap between the points in the two clusters is calculated based on the distribution of the first two principal components.
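The pruning step can be sketched as follows. This is a minimal illustration with invented cluster data, not SpikeSorter's code; as in the text, 'physically close' is judged by whether the channel sets assigned to the two clusters overlap:

```python
# Of the N*(N-1)/2 possible cluster pairs, keep only those whose assigned
# channel sets intersect; distant pairs need never be compared.
# Hypothetical cluster/channel data for illustration.
from itertools import combinations

def candidate_pairs(channel_sets):
    """channel_sets: dict cluster_id -> set of assigned channel numbers."""
    return [(a, b) for a, b in combinations(sorted(channel_sets), 2)
            if channel_sets[a] & channel_sets[b]]

clusters = {1: {1, 2, 3}, 2: {3, 4, 5}, 3: {10, 11}, 4: {11, 12}}
print(candidate_pairs(clusters))  # -> [(1, 2), (3, 4)]
```

Of the six possible pairs here, only two survive the spatial exclusion, which is what keeps the comparison cost manageable for large N.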

The way the overlap measure is calculated is described in more detail elsewhere11. Its value is zero if the clusters do not overlap at all, i.e. the nearest neighbor of each point is in the same cluster. Its value is close to 1 if the clusters overlap completely, i.e. the probability of the nearest neighbor being in the same cluster is the same as that predicted from a uniform mixing of points.
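In the same spirit, a toy version of a nearest-neighbor overlap measure might look like the following. This is a rough sketch only: the published measure11 differs in detail, and the normalization by the expected crossing fraction for uniformly mixed equal-sized clusters (about 0.5) is an assumption made here for illustration:

```python
# Toy nearest-neighbor overlap measure between two 2-D point clouds:
# the fraction of points whose nearest neighbor lies in the *other* cluster,
# scaled so that ~0 means disjoint clusters and values near 1 mean the two
# clusters are completely mixed. Not the published measure; illustrative only.

def overlap(cluster_a, cluster_b):
    pts = [(p, 0) for p in cluster_a] + [(p, 1) for p in cluster_b]
    crossings = 0
    for i, (p, label) in enumerate(pts):
        # nearest neighbor by squared Euclidean distance
        nn = min((q for j, q in enumerate(pts) if j != i),
                 key=lambda q: (q[0][0] - p[0])**2 + (q[0][1] - p[1])**2)
        if nn[1] != label:
            crossings += 1
    # ~0.5 is the expected crossing fraction for uniform mixing (assumption)
    return (crossings / len(pts)) / 0.5

a = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]  # tight cluster near the origin
b = [(5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]  # tight cluster far away
print(overlap(a, b))  # -> 0.0 (clusters do not overlap at all)
```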

Various decisions are made which take the overlap measure into account. If the overlap is greater than a certain value, clusters may be merged. If the overlap is very small, the cluster pair may be defined as distinct and left alone. Intermediate values, indicating incomplete separation of the cluster pair, may signal that the pair should be merged and then re-split, the desired outcome being a pair of clusters with less overlap. These procedures are run first in an automated stage and then in a manually guided stage.

In the automated stage, cluster pairs with a high overlap value are merged; then cluster pairs with intermediate to low overlap values are merged and re-split. In the second, user-guided stage, the user is presented with all the remaining ambiguous cluster pairs (i.e. those with overlap values in a defined intermediate range) in sequence and is asked to choose whether a) to merge the pair, b) merge and resplit the pair, c) to declare the pair to be distinct (which will override the significance of the overlap measure), or d) to define the relation between the pair as 'ambiguous' indicating that the spikes in the pair are unlikely to be well sorted. Various tools are provided to help with these decisions, including auto- and cross-correlograms and time series plots of spike height and PC values.

Ideally, at the end of the merging and splitting stages, every cluster should be distinct from all others, either because it has few or no channels in common with other clusters, or because the overlap index is less than a defined value. This value is user-selectable but is typically 0.1. Clusters (units) that pass this test are defined as 'stable', those that do not (because the overlap with one or more other clusters is greater than the threshold) are defined as 'unstable'. In practice, the great majority of units end up being defined as 'stable' at the finish of sorting, leaving the remainder to either be discarded or treated as potentially multi-unit.

Software Requirements

SpikeSorter is compatible with 64-bit versions of Windows 7 and Windows 10, and has also been run successfully under Linux using the Wine compatibility layer. Data files are loaded completely into memory (for speed), hence available RAM needs to scale with the size of the recording (allow about 2 GB for the program itself). Electrophysiological data files larger than 130 GB have been successfully sorted in both Windows and Linux environments. Options are accessed through standard Windows menus, a toolbar and dialogs. The layout of items on the menu roughly matches the order of operations in sorting, beginning with the 'File' menu on the left for data input and ending with the 'Export' menu on the right for export of sorted data. Toolbar buttons provide shortcuts to commonly used menu items.

The Channel Configuration File

Many recording data formats do not store channel locations. However, knowing these is essential for spike sorting. Channels may also be numbered in various ways by acquisition software: SpikeSorter requires that channels be numbered in sequence, beginning with channel 1. An ancillary electrode configuration file therefore has to be created that remaps channel numbers to follow the sequential rule and stores the channel locations. The channel configuration file is a plain text file. The first line stores a text name, up to 16 characters long, that identifies the electrode. Each subsequent line describes one channel and contains four numbers, separated by tabs, a single comma, or spaces, giving (in order): the channel number in the file, the channel number to which it is to be mapped (i.e. the number that will be used by SpikeSorter), and the x and y coordinates of the channel, in microns. The x coordinate would normally be taken as perpendicular to the direction of electrode insertion, and the y coordinate accordingly would be depth into the tissue. The configuration file has to be placed in the same directory as the recording file. There is some flexibility in how it can be named: the program will first search for a file that has the same name as the raw data file but with a .cfg extension; if that file is not found, it will search for the file 'electrode.cfg'; if that file in turn is not found, an error message is generated to indicate the lack of channel layout information.
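A configuration file following this format might be parsed as sketched below. The electrode name and channel coordinates are hypothetical examples, and the parser is an illustration of the stated format, not code shipped with SpikeSorter:

```python
# Illustrative parser for the channel configuration file format described
# above: a name line, then one line per channel with four numbers
# (file channel, remapped channel, x in µm, y in µm), separated by tabs,
# a single comma, or spaces. Example contents are hypothetical.
import re

example_cfg = """poly2_64ch
1 1 0 0
2 2 22 25
3 3 0 50
"""

def parse_cfg(text):
    lines = text.strip().splitlines()
    name = lines[0].strip()          # electrode name, up to 16 characters
    channels = []
    for line in lines[1:]:
        file_ch, mapped_ch, x, y = re.split(r'[,\t ]+', line.strip())
        channels.append((int(file_ch), int(mapped_ch), float(x), float(y)))
    return name, channels

name, chans = parse_cfg(example_cfg)
print(name, chans[1])  # -> poly2_64ch (2, 2, 22.0, 25.0)
```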

Protocol

1. Program Setup

  1. Go to http://www.swindale.ecc.ubc.ca/SpikeSorter to download the program. Copy the supplied executable file to the directory of your choice. Read the accompanying documentation.
    NOTE: No formal installation or compilation is required.
  2. Before opening any file to be sorted, ensure that there is enough free RAM to contain the entire duration of the recording. Also make sure a valid channel configuration file, as described in the documentation, is present in the same directory as the data file.
  3. Start the program, then go to 'File - Open' and select the recording file format from the drop-down list at the bottom right of the resulting open file dialog. Select the file to be opened, then click 'Open'.
  4. Once reading is complete, inspect the voltage recording display. Double click on the display (or go to 'View - Voltage Record') to bring up a dialog with controls that allow any part of the recording waveform to be viewed.
    NOTE: Double clicking on other display windows will often bring up associated dialogs.
  5. After the dialog is exited, hover the mouse over the waveforms to display particular voltage values in the top left corner of the display. Use the scroll wheel to zoom in on any part of the display. Hold down the left mouse button to drag the window contents.
    NOTE: This display is frequently updated to reflect the addition of newly detected events, or to indicate, by means of colors and/or numbers, their cluster assignments after clustering.
  6. If the recording is unfiltered and contains the local field potential, remove it by going to 'Pre-Process - Transform/filter' (or click on the filter icon in the toolbar). Select 'High-pass Butterworth Filter', then a suitable cut-off frequency and the number of poles, and then press 'Do-It!'. Once filtering is finished, inspect the new waveform in the voltage waveform window.
    NOTE: Filtering is done in the Fourier domain, is non-causal, and does not introduce phase distortion of the waveforms. For a long recording, filtering may take several minutes.
  7. Next, check for channels that may be faulty and need to be masked. Go to 'Pre-process - Channel check' (or click on the channel check icon) and then inspect the graph that appears. The graph shows the change in signal correlation between channel pairs as a function of their spatial separation5. Channels that violate this relation may not be functioning properly. To see any such outliers, click on 'single channel net deviations'.
    1. To mask an outlying channel either select the channel number, or select it from the problem list. When this dialog is exited, click on 'Yes' at the prompt to save the mask values.
      NOTE: This file will have the same name as the recording data file but with the extension .msk. It will be read automatically whenever the same data file is opened.

2. Event Detection

  1. Go to 'Pre-process - Event Detection' to bring up the event detection dialog (Figure 1). This dialog also offers the option of masking channels based on their noise levels (though these will often be detected by the previous tests). For example, a channel that has been intentionally grounded may have a very low noise level.
  2. Use the slider on the top right to inspect the noise level on particular channels. Careful inspection of the voltage display may also reveal silent or unusually noisy channels that need to be masked.
  3. Choose a thresholding method for event detection. Use the help button in the group box for more information about the options. 'Variable' thresholding, with a threshold of 4.5X - 6X noise7, is recommended. Use the controls on the top left to choose how the noise level is calculated for this purpose.
  4. Choose the detection method from the drop-down list. 'Dynamic multiphasic filter' is the recommended method. This requires specification of a temporal window. Set the window to be roughly half the width of a typical spike. Very narrow values will bias detection to narrower spikes though the effect is not large. Values in the range 0.15 - 0.5 ms are recommended12.
    NOTE: The values displayed are in integer multiples of the sampling interval (reciprocal of the sampling frequency).
  5. Select the alignment method. Choose the option that best identifies a single, temporally localized feature of the spikes that are being sorted, e.g., a 'positive peak' may be a poor choice if many spikes have more than one positive peak. For many recordings, a 'negative trough' will be the best choice. Other options can usually be left at their default values. Press 'Start'.
    NOTE: Event detection may take from several seconds to several minutes, depending on the length of the recording and the number of channels.
  6. Press 'Done' to exit the dialog. Inspect the events, shown in grey, in the voltage waveform window. Check that signals that look like events have been detected.
    1. If not, consider re-running event detection with a lower detection threshold. Beware however that very low amplitude spikes may be difficult to sort and that large numbers of them may impede sorting of larger amplitude spikes. Also check for obvious duplicates or a failure to resolve nearby spikes and adjust spatio-temporal lockout window parameters accordingly.
      NOTE: At this stage events are identified by their times of occurrence and a channel number. Normally this is the channel on which the peak-to-peak amplitude of the spike is largest. The events are unclustered initially, so each has a cluster assignment of zero.
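The idea of a noise-scaled detection threshold can be sketched as follows. The median-absolute-deviation noise estimator used here is a common convention in the spike sorting literature, but it is an assumption for this sketch: the text above does not specify which of SpikeSorter's noise estimators is used.

```python
# Sketch of a per-channel detection threshold set at a multiple of the noise
# level, with the noise SD estimated as median(|v|)/0.6745 (valid for
# Gaussian noise). Illustrative only; SpikeSorter offers several estimators.
import statistics

def detection_threshold(voltages, multiple=4.5):
    noise_sd = statistics.median(abs(v) for v in voltages) / 0.6745
    return multiple * noise_sd

trace = [0.5, -0.4, 0.3, -0.6, 0.5, -50.0, 0.4]  # one large spike in noise
print(round(detection_threshold(trace), 2))  # -> 3.34
```

Note that the median-based estimate is barely affected by the large spike at -50, whereas a plain standard deviation of the trace would be inflated by it, pushing the threshold too high.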

3. Sorting

NOTE: The next step is not normally performed before routine sorting, but it is very useful to do it when sorting for the first time, or when encountering unfamiliar data.

  1. Go to 'Sort - Convert channels to clusters'. This creates a single cluster for each unmasked electrode channel, assuming that each channel has some events assigned to it. Examine these clusters by going to 'Review - View Clean and Split clusters'. This brings up another dialog (Figure 2). Use the spin control (top left) to select the cluster to be viewed.
    NOTE: The solid blue (cyan) line is the average of all the waveforms in the cluster and is referred to as the cluster template in what follows. The principal component (PC) distribution of the events in the cluster is shown in the window below. This will often reveal the presence of two or more subclusters.
  2. Press the 'realign' button to change the time of each event (resulting in small sideways shifts of the waveforms in the display) so as to better match it to the shape of the template. Doing this often makes subclusters more compact and distinct, and sometimes reduces their apparent number (Figure 3).
  3. Select a cluster that has two or more distinct subclusters and press 'Autosplit'. If subclusters are identified in the PC display, they will be colored. As an exercise, use one of the small 'split' buttons to create a new cluster and examine it. Sorting could continue manually this way, but instead go back and use the faster autosort procedure.
  4. Go to 'Sort - Autosort' (or press the autosort button on the toolbar) to begin automatic sorting. The resulting dialog is shown in Figure 4. It presents a variety of options.
    1. Leave the 'skip event detection' option checked, since event detection has already been done. If it is not checked, event detection will be re-run using the parameter values and choices inherited from the event detection dialog.
    2. In the 'clustering' panel below, select a temporal window large enough to contain the entirety of the spike waveform preceding and following the alignment point, but no more. Use this window to block out regions of the spike waveform, e.g. long variable afterpotentials, if they appear to be interfering with (or contributing little to) sorting. Usually values in the range ± 0.5 ms are appropriate. Like other temporal windows, the window is an integral number of sample points, so the temporal values that appear are multiples of the sampling interval.
    3. Next, select a realignment option to be used during clustering. This will make use of the template waveform and works more robustly than in the initial case of event detection where the criterion has to be applied to relatively noisy individual spike waveforms. The recommended option is 'peak-weighted c.o.g.' but 'negative trough' may be better if that is a consistent feature of the spike waveforms.
    4. Choose a minimum cluster size. Clusters with less than that number of spikes will be deleted, preventing the accumulation of large numbers of small, possibly spurious, clusters during sorting.
    5. Decide on the number of dimensions in PC space that will be used for clustering. Two is generally adequate but slightly better results may be obtained with 3, albeit with a longer sorting time.
    6. Leave the other options at their default settings. Use the Help buttons to get more detailed explanations of the various options.
  5. Press 'Start' to begin the autosort. Channel based clusters are first formed as illustrated in step 3.1. These are now processed in turn, forming new clusters by splitting off individual sub-clusters, one at a time. Each time a new cluster is split off, the PC values are recalculated and displayed. This continues until no individual cluster can be further split.
  6. Follow the prompts in the display, where the subcluster that will be split off from the parent cluster is shown in red.
    NOTE: Occasionally the final cluster is red with uncolored outliers that do not form a distinct subcluster. These outliers will usually be deleted. During this process the number of clusters gradually increases. When it is finished, cluster overlap indices are calculated for every eligible cluster pair. Pairs that have large overlap values are automatically merged, while pairs that have intermediate overlap values (the default range is 0.1 to 0.5) are merged and then resplit. Intermediate values suggest that there are two distinct clusters but that some points are misassigned. During this stage the number of clusters typically decreases and the number of stable clusters increases.

4. Customization

  1. If using the program for the first time (or possibly during the next step), customize window sizes and positions. Go to 'File-Preferences'. Choose sizes for the various windows by selecting the window type from the pull-down list and adjusting the size to suit the screen. Exit the dialog and position the windows to make best use of the screen.
  2. From the dialog, choose scaling values that best suit the layout and spacing of the channels on the electrode and the spikes in the recording. There is an autoscaling option but this may not always choose the best values. Turn it off if it does not.
  3. Check the Sticky Parameters option: if the option is selected, changes in sorting parameter values (e.g. as used in event detection) will be saved and inherited next time the program starts. This can be useful but also requires that parameter values be checked to ensure they have not been carelessly changed as various options are explored or as a result of reading in different work files. Options for changing sub-cluster colors are also available.
  4. Exercise care when changing the number of processor threads. The optimal number is usually one less than the number of physical (not virtual) CPU cores. Increasing the number of threads may not speed up processing and can even result in a severe slowdown.

5. Merge and Split

  1. After the autosort is complete, press 'Next' to go to the manually guided merge and split stage. The resulting dialog shows, in the lower left corner, the number of remaining ambiguous cluster pairs that need to be examined as well as the number of stable clusters.
  2. Press 'Begin'. Another dialog appears together with the first of the pairs to be examined.
  3. Choose whether to merge the pair, resplit it (resulting in a lower overlap value), to label the pair as 'distinct', meaning that the value of the overlap index will be ignored, or to label the pair as 'ambiguous', meaning that it is considered uncertain whether the spikes are from the same or different units.
    1. Click on the check boxes to display a graph of spike parameters (peak-to-peak (P-P) height, or the first (PC1) or second (PC2) of the principal components) vs. time, and/or auto and cross-correlation histograms.
      NOTE: The display of P-P height vs. time is often very useful in deciding whether to merge two clusters. If the heights of the spikes in one unit blend smoothly into those of another at the same time that one unit stops firing and the other begins, it is much more likely than not that they are the same unit and should be merged. Cross-correlograms may reveal a strong temporal relationship between the spike times in two clusters. If the cross-correlogram has a strong, asymmetric peak at a very short time interval (e.g. around 5 - 10 ms), and especially if the second spike is smaller than the first, the two units are most likely a single unit that is firing spike pairs in which the second is smaller than the first because of Na+ channel adaptation.
    2. In cases where the decision to merge is not easy, label the pair as 'ambiguous' and treat the clusters accordingly in subsequent analyses.
  4. If the merge and split option is unable to find clearly separable clusters, use the slider in the prompted dialog to manually vary a clustering parameter (a spatial bandwidth, sigma), together with the set of merging buttons, to find a split that looks satisfactory. Use the 'Revert' button to go back to the original state of the two clusters. Press 'Split as shown' to finish. Note that more than two clusters can be produced by this procedure.
  5. Continue with this process until there are no more pairs to inspect. The great majority of clusters should now be listed as 'stable'.
  6. If some cluster pairs have very low overlap indices, so that they are ignored by the guided merge (but there is still evidence for merging them), go to the 'Review - Compare cluster pairs' menu option (or click on the associated icon in the toolbar) and open the dialog shown in Figure 5. Use the spin controls at the top of the dialog to select any pair of clusters for comparison.
    NOTE: As with the guided merge and split, pairs are put into a sorted list, but in this case comparison metrics additional to the cluster overlap index are available.
    1. Select the 'normalized dot product' option from the pull-down list. This calculates the correlation between the template values. It is insensitive to multiplicative scaling variations and is well suited to picking out cluster pairs that are an artefactual result of peak-to-peak height variability.
    2. Press the 'Most similar' button in the middle of the dialog to display the most similar pair. Use the horizontal spin control beneath the button to go forward or backward through the list. Use the correlation display and the P-P height vs. time display to make merging decisions, just as for the user guided merge and split. Note that the list is recalculated after each merging operation. This comparison stage is open ended, and it is up to the user to decide how extensively to search for evidence in favor of merges.
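The cross-correlograms used in these merging decisions can be sketched in a few lines. The spike times and bin settings below are illustrative, and this brute-force version is not SpikeSorter's implementation:

```python
# Sketch of a cross-correlogram between two sorted spike-time lists: a
# histogram of time differences (t_b - t_a) falling within a ±window.
# A strong, asymmetric peak at short positive lags can indicate that two
# "units" are really one unit firing spike pairs. Times in seconds.

def cross_correlogram(times_a, times_b, window=0.02, bin_width=0.005):
    n_bins = round(2 * window / bin_width)
    counts = [0] * n_bins
    for ta in times_a:
        for tb in times_b:
            dt = tb - ta
            if -window <= dt < window:
                counts[int((dt + window) / bin_width)] += 1
    return counts

a = [0.100, 0.500, 0.900]
b = [0.107, 0.507, 0.907]  # each spike in b follows a spike in a by 7 ms
print(cross_correlogram(a, b))  # -> [0, 0, 0, 0, 0, 3, 0, 0]
```

All three coincidences land in the single bin covering +5 to +10 ms, the kind of one-sided peak the NOTE in step 5.3.1 describes.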

6. Review – Post-processing

  1. Now go to 'Review - Post-processing' (or click the appropriate toolbar icon). This dialog (Figure 6) offers options to add or remove events from clusters, as well as the option of deleting entire clusters with signal-to-noise ratios (SNRs) that fall below a threshold. Duplicate events (events occurring at the same time in a cluster) can be created by alignment errors during sorting. Events that are a long way removed from their original location can sometimes be relocated; they can also be removed when relocation does not work.
  2. Use the alignment cleaning button to remove events from clusters that are a bad match to the template. Use the 'Recluster' button to do the reverse, i.e. to reassign unclustered events that are a good match to a particular template. The reclaimed events are marked as a subcluster of each parent cluster and can be inspected using the 'View, clean and split clusters' dialog. These events will remain in the cluster (and be exported as such) unless they are deleted (use the small 'delete' button for the first subcluster). Returning to the post-processing dialog, use the 'delete' button and the spin control next to it to delete clusters with a SNR less than the selected threshold.
  3. Although cluster numbers go continuously from 1 to N, where N is the total number of clusters, the actual numbering of clusters at the end of sorting is close to arbitrary. Use the 'Sort' button to renumber the clusters according to a chosen criterion, e.g. vertical position on the electrode, or channel number. Note that, with the exception of the deletion of duplicate events, there is currently no objective evidence to support particular choices in this dialog as being better than others.
  4. At any stage during the manual procedures of the sort it is possible to save a file which contains current parameter values, sorting options, event times, cluster properties and the message record. Create this file by going to 'File - Save work file'. Give the file a name that is clearly related to that of the data file and press 'Save'. Resume sorting at a later time by first opening the original recording file, followed by high-pass filtering (if done initially). Then, open the saved work file. The program will then be in a state identical to the one it was in when the work file was saved. The work file is also a record of how the sorting was done: the parameters used and the messages issued during sorting.
  5. Finally, export the clustered events. Go to 'Export - Sorted spike files' (or click on the relevant button on the toolbar). Select '.csv file' (comma-separated values) from the dropdown list, then click on 'Save as'. Choose a name for the file that will contain the exported csv data for the sorted units.
    NOTE: This text file will have a single line for each event containing, in order, the time of the event (in seconds to the nearest 10 µs), the cluster number (from 1 upwards) and the number of the channel that was assigned to the event. Note that the assigned channel may not be the same for all the events in a cluster if the events were not consistently larger on one particular channel.
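The exported file's layout as described above (time in seconds, cluster number, assigned channel, one event per line) lends itself to straightforward parsing. A minimal sketch for loading the export into per-cluster spike trains; the function name and the use of Python's csv module are our own, not part of SpikeSorter:

```python
import csv
from collections import defaultdict

def read_sorted_spikes(path):
    """Read a SpikeSorter .csv export into {cluster_id: [spike_times]}.

    Each line holds: event time (seconds), cluster number (from 1
    upwards), and the channel assigned to the event.
    """
    spikes = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue  # skip blank lines
            t, cluster = float(row[0]), int(row[1])
            spikes[cluster].append(t)
    return dict(spikes)
```

Note that because the assigned channel can vary between events in the same cluster, the channel column is best treated as a per-event attribute rather than a cluster property.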

Results

Figure 7 shows the display (obtained by going to 'View - Sorted waveforms') for a typical sorted recording. The default view option is to show only the waveforms on the center channel for each cluster. A common experience is that waveforms for a cluster pair on the same channel look identical, but when the 'Compare pairs' dialog is used to examine the two clusters there are distinct clusters in the PC projection, most often resulting from waveform differences on other channels.
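The kind of pairwise comparison described above can be approximated offline by projecting the pooled waveforms of the two clusters onto shared principal components and looking for separation. A hedged sketch using NumPy's SVD; this is an illustration of the general technique, not SpikeSorter's actual implementation:

```python
import numpy as np

def pc_projection(waveforms_a, waveforms_b, n_components=2):
    """Project two sets of spike waveforms onto shared principal components.

    waveforms_a, waveforms_b: arrays of shape (n_spikes, n_samples).
    Returns the two projected sets; plotting them can reveal whether
    clusters that look identical on one channel are in fact distinct.
    """
    combined = np.vstack([waveforms_a, waveforms_b])
    centered = combined - combined.mean(axis=0)
    # principal axes come from the SVD of the centered waveform matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    proj = centered @ components.T
    return proj[: len(waveforms_a)], proj[len(waveforms_a):]
```

Concatenating the waveform samples from neighboring channels before projecting makes differences that are invisible on the center channel show up in the projection.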

Discussion

File Formats

Currently supported file formats include Neuralynx (.ntt and .ncs), Plexon (.plx), Neuroscope (.xml + .dat), MultiChannel Systems (.mcd), Blackrock (.nev) and Intan (.rhd). For unsupported formats, there are two options. One is to request addition of the file format to an upcoming release (an email link to the developer is provided in the 'Help - About' dialog). The other is to convert the file to a supported format. A simple option is to use the time-spike-format ('.tsf') file type.
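For conversion, one accessible route is the flat binary '.dat' layout read by Neuroscope-style tools: interleaved 16-bit little-endian integers, one frame of all channels per sample. A minimal conversion sketch; the function name and the gain value are illustrative assumptions, and the gain must be matched to your recording system:

```python
import numpy as np

def write_dat(samples_uv, path, gain_uv_per_bit=0.195):
    """Write a (n_samples, n_channels) microvolt array as an interleaved
    little-endian int16 .dat file, the flat binary layout read by
    Neuroscope-style tools.

    gain_uv_per_bit is an assumed scale factor; choose it so that
    amplitudes survive the round trip through int16 without clipping.
    """
    scaled = np.clip(np.round(samples_uv / gain_uv_per_bit), -32768, 32767)
    # row-major write means channels are interleaved within each sample frame
    scaled.astype("<i2").tofile(path)
```

The accompanying .xml session file, which records the channel count and sampling rate, still has to be written separately.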

Disclosures

The authors have nothing to disclose.

Acknowledgements

We thank those individuals and groups who have used SpikeSorter and who have provided requests for file format support and suggestions and feedback on how to improve it. These include Youping Xiao, Felix Fung, Artak Khachatryan, Eric Kuebler, Curtis Baker, Amol Gharat and Dongsheng Xiao. We thank Adrien Peyrache for the false positive and negative figures given in 'Representative Results'.

Materials

Name | Company | Catalog Number | Comments
spikesorter.exe | N/A | | http://www.swindale.ecc.ubc.ca/SpikeSorter

References

  1. Buzsáki, G. Large-scale recording of neuronal ensembles. Nat. Neurosci. 7, 446-451 (2004).
  2. Blanche, T. J., Spacek, M. A., Hetke, J. F., Swindale, N. V. Polytrodes: High Density Silicon Electrode Arrays for Large Scale Multiunit Recording. J. Neurophys. 93, 2987-3000 (2005).
  3. Lewicki, M. S. A review of methods for spike sorting: the detection and classification of neuronal action potentials. Network. 9, R53-R78 (1998).
  4. Letelier, J. C., Weber, P. P. Spike sorting based on discrete wavelet transform coefficients. J. Neurosci. Methods. 101, 93-106 (2000).
  5. Quiroga, R. Q., Nadasdy, Z., Ben-Shaul, Y. Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering. Neural Computation. 16, 1661-1687 (2004).
  6. Franke, F., Natora, M., Boucsein, C., Munk, M., Obermayer, K. An online spike detection and spike classification algorithm capable of instantaneous resolution of overlapping spikes. J. Comput. Neurosci. 29, 127-148 (2010).
  7. Jäckel, D., Frey, U., Fiscella, M., Franke, F., Hierlemann, A. Applicability of independent component analysis on high-density microelectrode array recordings. J. Neurophysiol. 108, 334-348 (2012).
  8. Rossant, C., et al. Spike sorting for large, dense electrode arrays. Nature Neuroscience. 19, 634-641 (2016).
  9. Vandecasteele, M., et al. Large-scale recording of neurons by movable silicon probes in behaving rodents. JoVE. (61), e3568 (2012).
  10. Schjetnan, A. G. P., Luczak, A. Recording large-scale neuronal ensembles with silicon probes in the anesthetized rat. JoVE. (56), e3282 (2011).
  11. Swindale, N. V., Spacek, M. A. Spike sorting for polytrodes: a divide and conquer approach. Frontiers in Systems Neuroscience. 8, 1-21 (2014).
  12. Swindale, N. V., Spacek, M. A. Spike detection methods for polytrodes and high density microelectrode arrays. J. Comput. Neurosci. 38, 249-261 (2015).
  13. Swindale, N. V., Spacek, M. A. Verification of multichannel electrode array integrity by use of cross-channel correlations. J. Neurosci. Meth. 263, 95-102 (2016).
  14. Fukunaga, K., Hostetler, L. D. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Transactions on Information Theory (IEEE). 21, 32-40 (1975).
  15. Mitelut, C., et al. Standardizing spike sorting: an in vitro, in silico and in vivo study to develop quantitative metrics for sorting extracellularly recorded spiking activity. Soc. Neurosci. Abstr. 598 (10), (2015).
  16. Hazan, L., Zugaro, M., Buzsáki, G. Klusters, NeuroScope, NDManager: A free software suite for neurophysiological data processing and visualization. J. Neurosci. Meth. 155, 207-216 (2006).
  17. Harris, K. D., Henze, D. A., Csicsvari, J., Hirase, H., Buzsáki, G. Accuracy of tetrode spike separation as determined by simultaneous intracellular and extracellular measurements. J. Neurophysiol. 84, 401-414 (2000).
  18. Anastassiou, C. A., Perin, R., Buzsáki, G., Markram, H., Koch, C. Cell-type and activity dependent extracellular correlates of intracellular spiking. J. Neurophysiol. 114, 608-623 (2015).
  19. Wohrer, A., Humphries, M. D., Machens, C. K. Population-wide distributions of neural activity during perceptual decision-making. Prog. Neurobiol. 103, 156-193 (2013).
  20. Mizuseki, K., Buzsáki, G. Preconfigured, skewed distribution of firing rates in the hippocampus and entorhinal cortex. Cell Reports. 4, 1010-1021 (2013).
  21. Schmitzer-Torbert, N., Jackson, J., Henze, D., Harris, K., Redish, A. D. Quantitative measures of cluster quality for use in extracellular recordings. Neuroscience. 131, 1-11 (2005).
  22. Barnett, A. H., Magland, J. F., Greengard, L. F. Validation of neural spike sorting algorithms without ground-truth information. J. Neurosci. Meth. 264, 65-77 (2016).

Reprints and Permissions

Request permission to reuse the text or figures of this JoVE article
