A general-purpose tool that generates fast C++ code for integrating neuron models. Models are specified in a YAML document containing the equations of motion in symbolic form along with values for all parameters and state variables. A model may also include a reset rule. Values carry physical units, enabling dimensional analysis and reducing errors from unit mismatches.
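To make the idea concrete, here is a minimal sketch (in plain Python, not the C++ code spyks would generate, and not its actual YAML schema) of fixed-step integration of a leaky integrate-and-fire model with a reset rule. The parameter names and values are hypothetical; in spyks the dimensions (mV, nS, ms, etc.) would be checked symbolically before code generation.

```python
import numpy as np

def integrate_lif(I, dt=0.1, C=1.0, g_l=0.1, E_l=-70.0,
                  V_thresh=-50.0, V_reset=-65.0):
    """Euler-integrate a leaky integrate-and-fire neuron with a reset rule.

    I is the injected current at each time step; returns the voltage
    trace and the indices of the time steps where the reset fired.
    """
    V = np.empty(len(I))
    v = E_l
    spikes = []
    for i, inj in enumerate(I):
        # equation of motion: C dV/dt = -g_l (V - E_l) + I(t)
        v += dt * (-g_l * (v - E_l) + inj) / C
        if v >= V_thresh:   # reset rule: spike, then jump back to V_reset
            spikes.append(i)
            v = V_reset
        V[i] = v
    return V, spikes
```

A tool like spyks compiles this same structure (equations plus reset rule) into optimized C++ rather than interpreting it in a Python loop.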

Hosted on GitHub: melizalab/spyks

Chen AN, Meliza CD (2017). Phasic and Tonic Cell Types in the Zebra Finch Auditory Caudal Mesopallium. J Neurophys, doi:10.1152/jn.00694.2017



Example code for dynamical state and parameter estimation of a biophysical neuron model. Given intracellular voltage measurements and a dynamical model, the code estimates the gating variables, kinetic parameters, and conductances of the model.

Hosted on GitHub: melizalab/dpse-example

Meliza CD, Kostuk M, Huang H, Nogaret A, Margoliash D, Abarbanel HDI (2014). Estimating parameters and predicting membrane voltages with conductance-based neuron models. Biol Cybern, doi:10.1007/s00422-014-0615-5



JILL is a realtime system for auditory behavioral and neuroscience experiments based on the [JACK audio framework](http://jackaudio.org/). It consists of several independent modules that handle stimulus presentation, vocalization detection, and data recording.

Hosted on GitHub: melizalab/jill



Simple threshold-based spike detection, implemented in Cython.
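The core operation is easy to sketch in pure Python/NumPy (quickspikes implements it in Cython for speed; this is an illustration of the technique, not its API). The function and parameter names are hypothetical.

```python
import numpy as np

def detect_spikes(y, thresh, refractory=10):
    """Return indices where y crosses thresh from below, skipping any
    crossing within `refractory` samples of the previous detection."""
    # upward threshold crossings: below at i-1, at-or-above at i
    crossings = np.nonzero((y[1:] >= thresh) & (y[:-1] < thresh))[0] + 1
    idx = []
    last = -refractory
    for i in crossings:
        if i - last >= refractory:
            idx.append(int(i))
            last = i
    return idx
```

The refractory window prevents a single noisy spike waveform from being counted multiple times.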

Hosted on GitHub: melizalab/quickspikes



Tools for measuring pitch and comparing vocalizations. Features include:

  • A graphical interface for examining conventional and time-frequency reassignment spectrograms of recordings, and for labeling temporal and spectrotemporal regions of interest
  • A novel signal processing algorithm for tracking the pitch (fundamental frequency) in noisy bioacoustic recordings
  • Dynamic-time-warping and cross-correlation algorithms for comparing batches of vocalizations against each other using pitch or spectrum.
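As a sketch of the dynamic-time-warping comparison mentioned above (the textbook algorithm, not chirp's actual implementation), here is a minimal DTW distance between two 1-D sequences such as pitch traces:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D sequences,
    using absolute difference as the local cost and unit step weights."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best way to reach (i, j): insertion, deletion, or match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the warping path can stretch or compress time, two renditions of the same note sung at different tempos score as similar even when a sample-by-sample comparison would not.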

Hosted on GitHub: melizalab/chirp (website)

Keen S, Meliza CD, Pilowsky J, Rubenstein DR (2016). Song in a Social and Sexual Context: Vocalizations Signal Identity and Rank in Both Sexes of a Cooperative Breeder. Front Ecol Evol, doi:10.3389/fevo.2016.00046

Keen SC, Meliza CD, Rubenstein DR (2013). Flight calls signal group and individual identity but not kinship in a cooperatively breeding bird. Behav Ecol, doi:10.1093/beheco/art062

Meliza CD, Keen SC, Rubenstein DR (2013). Pitch- and spectral-based time warping methods for comparing field recordings of harmonic avian vocalizations. JASA, doi:10.1121/1.4812269



Time-frequency reassignment is a technique for sharpening spectrographic representations of sounds (see figure). For relatively simple, non-stationary processes, TFR can provide substantial improvements in detecting fine structure. Further improvements can be realized by using multiple windowing functions (similar to the multitaper method for calculating the spectra of stationary processes), but the algorithms are computationally intensive. libtfr is a C library for calculating multitaper TFR spectrograms, built on the highly efficient FFTW library. It also supports calculating standard multitaper spectrograms and spectra, and comes with Python/NumPy and MATLAB interfaces.
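To illustrate the multitaper part of the method (averaging estimates over multiple orthogonal windows), here is a sketch using SciPy's DPSS (Slepian) tapers. This is not libtfr's API, and it omits the reassignment step, which additionally relocates each time-frequency cell's energy to its instantaneous frequency and group delay.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, NW=3.0, K=5):
    """Average periodograms over K DPSS tapers, the core of multitaper
    spectral estimation: each taper gives an independent estimate, and
    averaging reduces variance without widening the main lobe much."""
    N = len(x)
    tapers = dpss(N, NW, Kmax=K)                     # shape (K, N)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return spectra.mean(axis=0)
```

For a spectrogram, the same computation is repeated over short, overlapping frames of the signal.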

Hosted on GitHub: melizalab/libtfr (website)



Vocalizations often consist of discrete, spectrotemporally disjoint elements, and it is often desirable to extract and manipulate these elements separately. However, because the elements frequently overlap in time, they can only be separated in the spectrotemporal domain. znote is a software package for identifying components in bioacoustic signals and extracting the sound pressure waveforms associated with them.

Hosted on GitHub: melizalab/znote

Meliza CD, Chi Z, Margoliash D (2010). Representations of Conspecific Song by Starling Secondary Forebrain Auditory Neurons: Towards a Hierarchical Framework. J Neurophys, doi:10.1152/jn.00464.2009