Tuesday, January 12, 2016

MScan 2.3 and MView 3.2

The new version of MCS, which includes MScan 2.3 and MView 3.2, has been sent for beta testing. Aside from performance improvements and minor bug fixes, the two programs introduce new scripting features that will facilitate near real-time processing of imaging data, even in resonant scanning mode.

MScan 2.3 can now export frame data during the OnNewFrames event using the GetFrameArray method, which returns a variant array containing the latest frame values of a specific imaging channel. Incoming data can be piped into a workspace in MView for display and processing. What is noteworthy is the possibility of jumping outside the boundaries of MScan to pass data to another program in a time-efficient manner. In principle, the result of GetFrameArray should be compatible with Matlab’s PutWorkspaceData.


The next release of MScan will have a version number matching that of MView. We plan to support additional hardware. In particular, we intend to make MScan compatible with a high-speed, low-cost USB 3.0 video camera for synchronous behavioral recordings. 

Wednesday, September 23, 2015

MScan 2.2 and MView 3.1 in beta testing.

MScan 2.2 and MView 3.1 are in beta testing at Sutter Instrument.

This release introduces episodic acquisition, a feature available in resonant scanning mode.

MScan 2.2 is now able to simultaneously acquire multiphoton frames, synchronized analog data, and video streaming images in stretches called "episodes". Episodic imaging is first armed manually, then automatically started by a TTL signal going high. Episodes are stopped automatically either when a pre-set number of imaging frames has been reached or when the TTL signal goes low. Subsequent episodes are re-triggered by the TTL line. Live display of two-photon imaging channels, analog channel voltages, and the video stream is turned off when episodes are not being acquired. In addition, the Pockels cell modulating the ultrafast laser output is set to a minimum between episodes, thus preventing overexposure of the sample during the experiment.
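The arming and trigger logic described above can be sketched as a tiny state machine (illustrative Python with names of our choosing; MScan implements this in its hardware-timed acquisition code):

```python
def run_episodes(ttl_samples, max_frames):
    """Simulate episodic triggering: an armed episode starts on a rising
    TTL edge and stops when either max_frames frames have been acquired
    or the TTL line goes low; the next episode re-arms on the next edge.
    One frame is counted per TTL sample, for simplicity."""
    episodes = []          # list of (start_index, frame_count)
    recording = False
    prev = 0
    frames = 0
    start = None
    for i, ttl in enumerate(ttl_samples):
        if not recording and prev == 0 and ttl == 1:
            recording, frames, start = True, 0, i    # rising edge: start episode
        elif recording:
            if ttl == 0 or frames >= max_frames:     # stop condition
                episodes.append((start, frames))
                recording = False
            else:
                frames += 1
        prev = ttl
    if recording:                                    # run ended mid-episode
        episodes.append((start, frames))
    return episodes
```

A TTL trace of `[0, 1, 1, 1, 0, 0, 1, 1, 0]` with a high frame limit yields two episodes, one per TTL pulse.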


The main advantage of running an experiment using episodic acquisition is the reduced amount of data to analyze. What makes episodic acquisition unique in MScan is that episodes are sequentially stored in separate "folders" in the same data file. In practice, there is no limit on the size of each episode, allowing many arbitrarily long bouts of data (i.e., > 4 GB) to be recorded in one data file. Because the starting time of each episode is written to the file with 1 ms accuracy, data analysis of discontinuous periods of imaging is greatly facilitated. MView and MCSX have been enhanced to easily handle episodic files.

Saturday, August 9, 2014

MScan 2.1. Closed-loop scripting benchmarks

MScan 2.1 brings new scripting capabilities to easily develop and run closed-loop experiments based on functional imaging with multiphoton microscopy. Particular attention was devoted to optimizing the response time of scripting functions.

The StartDigitalTrain and StartAnalogTrain functions introduced in the previous post are fast enough to be used in non-resonant scanning experiments, but with resonant scanning we recommend using the new scripting methods SetDigitalTrain, SetAnalogTrain, FireDigitalTrain and FireAnalogTrain.

SetDigitalTrain and SetAnalogTrain are quite similar to their counterparts StartDigitalTrain and StartAnalogTrain, with the exception that they do not output the train. SetDigitalTrain and SetAnalogTrain can be used, for instance, in the On Scanning Starts event to set up and arm either a digital or an analog train. Train delivery can then be performed with minimal latency using FireDigitalTrain or FireAnalogTrain, for instance when processing ROIs in the On New Frames event handler. If an analogy with electronics can be made, the pairs of scripting functions SetDigitalTrain/FireDigitalTrain and SetAnalogTrain/FireAnalogTrain are similar to non-retriggerable, one-shot train stimulators. The stimulators are automatically reset when the imaging run terminates, whether or not the train was delivered.
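The one-shot stimulator analogy can be modeled in a few lines (an illustrative Python sketch with names of our choosing, not the MScan API):

```python
class OneShotTrain:
    """Model of the SetDigitalTrain/FireDigitalTrain pair: arming defines
    the train without outputting it, firing delivers it exactly once, and
    a reset (done automatically by MScan at the end of a run) re-disarms it."""

    def __init__(self):
        self._params = None
        self._fired = False

    def set(self, **params):
        # analogous to SetDigitalTrain: define and arm, but do not output
        self._params = params
        self._fired = False

    def fire(self):
        # analogous to FireDigitalTrain: deliver the armed train once
        if self._params is None or self._fired:
            return False           # not armed, or already delivered
        self._fired = True
        return True                # the train would go out here

    def reset(self):
        # performed automatically when the imaging run terminates
        self._params, self._fired = None, False
```

A second call to `fire()` without re-arming does nothing, mirroring the non-retriggerable behavior described above.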

We performed initial benchmarks to measure and optimize the latency of script-triggered delivery of digital trains in resonant scanning mode, which is arguably the most demanding situation in which to run scripts driven by frame acquisition. The latencies result from delays to:

(1) get the imaging data from the acquisition board to memory,
(2) process frame data (mostly reordering pixels into an image, adding offsets, etc.),
(3) signal threads that frame data is available,
(4) activate the scripting interpreter,
(5) execute the user-written code in the On New Frames handler, and
(6) deliver the digital train.

The tests were conducted in resonant scanning mode using a protocol involving simultaneous acquisition and real-time display of two imaging channels and two analog channels with each analog channel sampled at 20 kHz. The first analog channel recorded the motion of the “slow” Y mirror to give accurate timing of frame acquisition while the second analog channel was connected to the digital output of the analog board. MScan was set to process every frame individually rather than in batches as soon as frames arrived.

The script executed SetDigitalTrain in the On Scanning Starts event handler to set up a train consisting of ten pulses of 10 ms duration each. The On New Frames event handler executed the user-written code:

If CurrentFrameIndex = 20 Then FireDigitalTrain

to deliver the digital train immediately after the 20th imaging frame was scanned and processed. 


With 512 x 512 frames sampled at 31 fps, the latency from the mirror position at the end of the 20th frame to the beginning of the pulse train ranged from 15 to 17 ms. In comparison, the frame interval was 32 ms. In other words, MScan can react asynchronously to run user-written scripts and deliver stimuli in less time than the sampling interval of two-photon imaging, even in resonant scanning mode. Similarly, we obtained a near real-time latency of only 8-10 ms with frames sampled at 94 fps (512 x 170 frames). We found that most of the latencies in both cases were due to waiting for the device driver of the acquisition board to complete Step 1 described above.

We look forward to hearing from scientists running cutting-edge, script-based closed-loop experiments with the MOM and MCS.



Friday, July 25, 2014

MScan 2.1. New feature: closed-loop experiments

Two-photon closed-loop experiments with MScan 2.1.


MScan can react in near real time to changes in image intensity to trigger a response. This feature opens the unique possibility of performing closed-loop experiments in brain slices or even in live animals. Custom code can easily be written using the built-in scripting environment to perform specific actions when certain cells in the field of view become more active. These actions can include, for instance, the delivery of an analog or digital stimulation pattern. Scripts make it easy to develop complex experiments - you can imagine triggering an optogenetic stimulation when the difference in activity between two or more cells reaches a threshold. It might even be possible to obtain the phase-response curve of an entire neural network imaged by two-photon microscopy. These experiments can be programmed in scripts by using a combination of:

The OnNewFrames event. The OnNewFrames scripting event is triggered each time one or more frames have been received. In MScan 2.1, special care has been taken to ensure that OnNewFrames is called with the lowest latency possible. The code associated with OnNewFrames runs in its own thread to ensure that it does not slow down frame acquisition, while taking advantage of the multicore capability of the processor powering the MScan computer. The CurrentFrameIndex property returns the current frame index, allowing, for instance, specific code to be run after waiting for a certain number of frames. A new protocol setting called Process Frames can be set to At Every Frame to ensure that OnNewFrames is called each time a new frame has been acquired.
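The idea of running the handler off the acquisition thread can be illustrated with a generic worker-thread pattern (plain Python with our own names; this is not MScan's internal implementation):

```python
import queue
import threading

def start_frame_handler(on_new_frames):
    """Dispatch frame events to a handler on its own worker thread, so
    that user code in the handler never blocks the acquisition side.
    The acquisition loop only enqueues (index, frame) tuples; a (None,
    None) sentinel stops the worker."""
    frames = queue.Queue()

    def worker():
        while True:
            index, frame = frames.get()
            if index is None:              # sentinel: shut down
                break
            on_new_frames(index, frame)    # user-written handler code

    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return frames, thread
```

The acquisition side simply calls `frames.put((i, data))` for each new frame and never waits on the handler.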

The ROIs property. Intensity data from Regions Of Interest (ROIs) are now available in scripts in real-time. Arbitrary rectangular, elliptical or polygonal regions can be created around cells in the Viewer window. Intensity values can be used in scripts with the ROIs property, which returns the pixel intensity of a specific ROI averaged across all pixels of the ROI for the latest frame and those preceding it. ROIs(3, 0) will return the intensity of ROI number 3 in the latest frame acquired (0 is the index of the most recent frame in the ROIs buffers).
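The behavior of the two-index ROIs property can be sketched as an ROI averager plus a small per-ROI history buffer (illustrative Python with our own names; MScan computes this internally):

```python
from collections import deque

def roi_mean(frame, roi):
    """Average pixel intensity over an ROI for one frame. `frame` is a
    2-D list of pixel values; `roi` is a set of (row, col) coordinates,
    i.e., a rasterized ellipse, rectangle, or polygon."""
    return sum(frame[r][c] for r, c in roi) / len(roi)

class RoiBuffer:
    """Per-ROI means for the latest frames, so that rois(n, 0) is the
    most recent frame and rois(n, 1) the one before it, mirroring the
    ROIs(roi_index, frame_back) indexing described in the text."""

    def __init__(self, n_rois, depth=16):
        self._hist = [deque(maxlen=depth) for _ in range(n_rois)]

    def push(self, means):                 # called once per acquired frame
        for h, m in zip(self._hist, means):
            h.appendleft(m)

    def rois(self, roi_index, frame_back=0):
        return self._hist[roi_index][frame_back]
```

With this model, `rois(3, 0)` reads ROI number 3 in the latest frame, exactly as in the example above.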

Analog and digital stimulation. Scripts can now generate analog or digital trains without having to use the Stimulator object. The StartAnalogTrain or StartDigitalTrain methods can be used to specify and deliver analog or digital trains in scripts, for instance during the OnNewFrames event. Analog trains can use either of the two analog outputs available on the PCIe-6321 board, while digital trains can be delivered on any of the eight digital input/output lines of the board. This gives plenty of flexibility to trigger either inhibitory or excitatory stimuli depending on the functional response of the network imaged by two-photon microscopy. A typical train consists of a delay and a series of one or more identical pulses, each composed of two steps with independent duration and value. For instance, StartDigitalTrain 1000000, 255, 15000, 0, 5000, 100 will deliver 100 pulses, each consisting first of a step setting all 8 of the digital lines to 1 (11111111 in binary = 255) lasting 15000 microseconds (i.e., 15 ms), followed by a step where all the lines are back to 0 (00000000 in binary = 0) for a duration of 5 ms (5000 microseconds). The train is preceded by a delay of 1 s (1000000 microseconds) with all lines set to 0. StartAnalogTrain delivers a train consisting of analog values from -10 V to +10 V on one of two possible analog output channels.
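Following the parameter order in the example above, such a train can be expanded into a timeline of (time, line pattern) edges (our reconstruction in Python, not MScan source code):

```python
def digital_train_edges(delay_us, value1, dur1_us, value2, dur2_us, n_pulses):
    """Expand StartDigitalTrain-style parameters into a list of
    (time_us, line_pattern) edges: an initial delay with all lines low,
    then n_pulses pulses, each made of two steps with independent
    pattern and duration."""
    edges = [(0, 0)]                  # lines low during the initial delay
    t = delay_us
    for _ in range(n_pulses):
        edges.append((t, value1))     # first step of the pulse
        t += dur1_us
        edges.append((t, value2))     # second step of the pulse
        t += dur2_us
    return edges
```

For the blog's example (`1000000, 255, 15000, 0, 5000, 100`), the first pulse starts at t = 1 s with all eight lines high (255 = 11111111 in binary) and drops to 0 at t = 1.015 s, for 100 pulses in total.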

Putting all these elements together in a script is straightforward. In the OnNewFrames code editor, the line:

If ROIs(1, 0) > 1500 Then StartDigitalTrain 0, 255, 10000, 0, 10000, 100

will deliver a digital train each time the intensity of ROI number 1 rises above 1500.



We hope far-reaching experiments will be possible with the script-based close-loop features of MScan.

MScan 2.1. New feature: behavioral camera

Concurrent video streaming.

Cutting-edge multiphoton experiments now deal with imaging neurons in awake animals performing a behavioral task. Observing animal actions and timing them accurately with respect to functional imaging is of paramount importance to correlate behavior with neural activity. MCS is well-suited for these kinds of experiments. Information from sensors for touch or licking behavior can be displayed and recorded using the eight analog channels available on the analog input board, a National Instruments PCIe-6321 board which is capable of sampling data at up to 250 kHz.

In addition, MScan 2.1 now supports a second, optional video camera dedicated to filming behavior during scanning. The first camera, also optional, is used for focusing the microscope. The cameras, which are identical, are designated CSFV90BC3-B by their manufacturer, Toshiba-Teli. They output VGA-style black-and-white frames with 256 levels of gray (i.e., 640 x 480 pixels with 8-bit resolution).

MScan 2.1 takes advantage of the fact that Toshiba-Teli FireDragon cameras can be triggered externally, a feature that is seldom found on cameras in that price range. This capability allows accurate hardware synchronization of frames from the camera with those taken by the MOM. Because CSFV90BC3-B cameras can acquire frames at a rate of up to 90 fps, video streaming can be performed simultaneously even with resonant scanning. Internal frame buffering inside the camera and computer interfacing through IEEE 1394b (i.e., FireWire 800), which guarantees data transfer at up to 800 Mbit/s, ensure loss-free streaming. In MScan, the video stream is displayed live during two-photon imaging and is stored in parallel with the two-photon imaging frames in the same data file to facilitate analysis with MView. A custom cable is used to convey the Frame Sync signal from the computer to the camera. Initial testing shows concurrent video and two-photon display and streaming with two imaging channels at close to 90 fps.
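A quick arithmetic check shows why the bus has comfortable headroom (Python sketch):

```python
def video_bandwidth_mbit(width, height, bytes_per_pixel, fps):
    """Raw bandwidth of an uncompressed video stream, in Mbit/s."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

# 640 x 480, 8-bit monochrome at 90 fps: about 221 Mbit/s,
# well under the 800 Mbit/s of IEEE 1394b.
```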

MView 2.1 has been enhanced to open and display the video stream stored in data files synchronously with the two-photon images. Multiple video stream windows can be opened simultaneously to ease frame-to-frame comparisons. The video stream can be converted into an AVI movie or individual frames can be saved as bitmap files.

MScan 2.1. Enhancements in ROIs

Enhancement of ROI handling

Plotting the intensity of regions of interest (ROIs) in real time, especially during resonant scanning, is one of the most useful features of MScan when performing functional imaging with calcium indicators or other functional dyes. ROIs can be drawn directly on features displayed in the Viewer window as ellipses, rectangles or polygons.

With MScan 2.1, individual ROIs can be moved pixel by pixel with keystrokes (Shift+1, 2, 3, or 4) or deleted, while all ROIs can be saved to or recalled from a *.ROI file, which can also be loaded by MView.

ROIs are a key feature of two-photon closed-loop experiments (see later post).

MView 2.1. New Features: Frame and line shifting

Manual shifting of images.

One issue with functional imaging is motion correction. This procedure can be handled automatically in MView with the Motion Correction algorithm. However, on occasion, one needs to adjust "by hand" the position of individual frames relative to other frames by shifting pixels in X or Y. MView 2.1 allows users to perform this correction with the Ctrl+1, 2, 3, or 4 keystroke combinations to move frames pixel by pixel. The modified frames can be reviewed and then saved into a new data file.
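The manual adjustment amounts to shifting a frame by whole pixels in X or Y and filling the vacated pixels (a minimal Python sketch of the operation, not MView code):

```python
def shift_frame(frame, dx=0, dy=0, fill=0):
    """Shift a 2-D frame (list of rows) by dx pixels in X and dy pixels
    in Y. Pixels shifted out of the frame are lost; vacated pixels are
    filled with `fill`."""
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = frame[y][x]
    return out
```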

Scan line alignment.

Another related feature of MView 2.1 is the possibility of canceling or reducing scanning artifacts due to misalignment of scan lines, often seen as “hairs” on images after acquisition. Pixel shifts, which are used to move odd lines horizontally with respect to even scan lines, can be fractional (e.g., ‘4.3’). The integer value of the shift (i.e., 4) moves odd lines by four pixels, while the fractional value of the shift is used to interpolate the values of consecutive pixels in odd lines to provide a sub-pixel positional adjustment. Contrary to manual shifting, scan line alignment applies the same shift to all frames in a file. The modified frames can be reviewed and then saved into a new data file.
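The fractional shift described above can be sketched as follows (our Python reconstruction of the described method, assuming linear interpolation between consecutive pixels; not MView source code):

```python
def align_odd_lines(frame, shift):
    """Shift odd scan lines of a frame (list of rows) to the right by a
    possibly fractional number of pixels. The integer part moves whole
    pixels; the fractional part linearly interpolates between the two
    consecutive source pixels. Even lines are left untouched."""
    whole = int(shift)
    frac = shift - whole
    out = [row[:] for row in frame]
    for y in range(1, len(frame), 2):          # odd lines only
        row = frame[y]
        w = len(row)
        shifted = [0.0] * w
        for x in range(w):
            src = x - whole                    # integer part of the shift
            if 0 <= src < w:
                a = row[src]
                b = row[src - 1] if src - 1 >= 0 else row[src]
                # blend consecutive pixels for the sub-pixel part
                shifted[x] = (1 - frac) * a + frac * b
        out[y] = shifted
    return out
```

For example, a shift of 0.5 spreads a single bright pixel on an odd line evenly over two neighboring positions, moving its centroid half a pixel to the right.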